Age and Gender Differences in Relationships Among Emotion Regulation, Mood, and Mental Health
Objective: We investigated the effects of age on mood and mental health as mediated by emotion regulation strategies, namely cognitive reappraisal and expressive suppression, and examined whether these relationships differ by gender. Method: We recruited 936 Japanese participants comprising six age groups ranging from their 20s to their 70s, with 156 participants in each age group and equal numbers of men and women. Results: Structural equation modeling showed that older participants were more likely to use cognitive reappraisal, which in turn enhanced positive mood and reduced negative mood, whereas age did not affect expressive suppression. Moreover, expressive suppression had a smaller impact on mood than cognitive reappraisal. A multi-group analysis showed significant gender differences: in men, cognitive reappraisal increased with age and influenced mood more positively than in women. Discussion: Our findings indicate gender differences in the effects of aging on emotion regulation. We discuss these results in terms of cognitive processes, motivation for emotion regulation, and cultural differences.
information (negativity bias; Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001). Other studies have reported no such negativity bias and found that older adults, in contrast to young adults, paid greater attention to positive information and recalled it more frequently than negative information (Charles, Mather, & Carstensen, 2003; Mather & Carstensen, 2005). These phenomena are referred to as the positivity effect and support the SST, which holds that older adults are motivated toward emotion regulation. Studies on the positivity effect have focused on aging effects on cognitive processing, such as attention to and memory for emotion-eliciting information. However, few studies have examined how the control functions (emotion regulation) of information-elicited emotions change with age. Therefore, the present study focused on emotion regulation and examined aging effects on emotion regulation, as well as the relationships among emotion regulation, mood, and mental health.
The process model of emotion regulation is widely accepted (Gross, 2001). This model divides emotion regulation into two categories: antecedent-focused emotion regulation, which occurs prior to emotion generation, and response-focused emotion regulation, which occurs following emotional responses. Many emotion-regulation strategies have been investigated; however, Gross and John (2003) focused exclusively on the cognitive reappraisal strategy ("reappraisal" hereafter) and the expressive suppression strategy ("suppression" hereafter). Reappraisal is an antecedent-focused strategy that alters emotional impact by cognitively changing how people perceive emotion-eliciting situations. Suppression is a response-focused strategy that inhibits elicited emotions and emotion-expressive behaviors.
Based on previous studies that have focused on SST and emotion regulation, the present study used the hypothetical model shown in Figure 1. It has been reported that reappraisal is an adaptive strategy for reinterpreting a given situation and improving negative emotions, even under stressful situations, thereby enhancing positive emotions and psychological well-being (Gross & John, 2003; Haga, Kraft, & Corby, 2009) and reducing negative emotions, depression, and anxiety (Dennis, 2007; Gross & John, 2003; Spaapen, Waters, Brummer, Stopa, & Bucks, 2014). Suppression can control the expression of negative mood but cannot lower the frequency of negative mood experiences. The discrepancy between inner experiences and outer expressions caused by suppression leads to a sense of self-inconsistency; decreases positive emotions, psychological well-being, and subjective happiness (Gross & John, 2003); and increases negative emotions, anxiety, and depression (Fresco et al., 2007; Gross & John, 2003; Nolen-Hoeksema & Aldao, 2011; Spaapen et al., 2014). Suppression is therefore recognized as a maladaptive strategy and a psychopathological risk factor (Aldao, Nolen-Hoeksema, & Schweizer, 2010). In line with these studies, our hypothesized model (Figure 1) predicts that reappraisal will enhance positive emotions and reduce negative emotions, thereby enhancing mental health, whereas suppression will reduce positive emotions and increase negative emotions, thereby decreasing mental health. As previous research has not consistently found an association between reappraisal and suppression (Haga et al., 2009; Spaapen et al., 2014), a corresponding path was not assumed in the hypothesized model.
Regarding emotion regulation, reappraisal is expected to increase with age for the following reasons. First, the positivity effect means that older adults are typically more aware of the positive aspects of situations, even stressful ones, which would promote finding positive meaning in negative situations. Second, young adults may have difficulty reinterpreting situations positively because of their negativity bias and will therefore use reappraisal less. With respect to suppression, because the SST holds that older adults are motivated toward maintaining emotional well-being, the use of suppression, which leads to a sense of self-inconsistency, is expected to decrease with age. However, some studies focusing on older adults have reported inconsistent findings: in one study, age did not affect the use of reappraisal or suppression (Spaapen et al., 2014), whereas in another, older adults used suppression more frequently (Diehl, Coyle, & Labouvie-Vief, 1996).
In addition to age, previous studies have reported that gender affects the use of emotion-regulation strategies (Gross & John, 2003; Kwon, Yoon, Joormann, & Kwon, 2013; Nolen-Hoeksema & Aldao, 2011; Thomsen, Mehlsen, Viidik, Sommerlund, & Zachariae, 2005). For example, men use suppression more frequently than do women (Gross & John, 2003; Spaapen et al., 2014), and women use reappraisal more frequently than do men (Spaapen et al., 2014). In addition, Nolen-Hoeksema and Aldao (2011) reported that only women increased their use of suppression with age, further suggesting gender differences in the correlations between age and emotion-regulation strategies. McRae, Ochsner, Mauss, Gabrieli, and Gross (2008) and Domes et al. (2010) revealed sex differences in brain activity during reappraisal. Domes et al. (2010) suggested the possibility that men recruit brain activity in emotion-processing areas more efficiently, and McRae et al. (2008) indicated that men use reappraisal automatically, with less effort than women. Following these findings, because automatic cognitive processing is retained while effortful cognitive processing declines in older adults, aging should have a larger influence on reappraisal in women than in men. The present study was designed to first examine the compatibility of the hypothetical model described in Figure 1 with the survey data and then investigate whether there are gender differences in the paths between each variable.
Participants and Procedures
The aim of the present study was to first elucidate correlations between age and emotion-regulation strategies and then explore gender differences in those correlations. To avoid biases in the age and gender of participants, we conducted a panel survey using a professional survey research agency (Macromill, Inc.). We recruited 936 Japanese participants (age = 20-79 years, M = 49.09, SD = 16.57). There were six age groups ranging from 20s to 70s, with 156 participants in each age group and equal numbers of men and women. We emailed a survey invitation to 6,213 adults between the ages of 20 and 79 years and closed the survey after collecting the allocated number of participant responses.
The survey was conducted online, and an email containing a URL for the survey request and survey page was sent to participants. The first page of the survey clearly indicated that (a) the survey would ask for personal information, (b) completing the questionnaire implied agreement to participate in the study, and (c) responses would be analyzed as personally unidentifiable statistical information and used for research purposes only. Participants were rewarded with points equivalent to 100 JPY (approximately US$1.20) through the research agency. Macromill, Inc., guaranteed that the company would not provide personally identifiable information to any third party without a participant's prior consent. Before conducting the survey, we obtained ethical approval from the research ethics committee at the researchers' institution.
The entire questionnaire was presented in Japanese, displayed each basic attribute and scale on the website, and was programmed so that participants could only proceed to the next page if they responded to all questions on the current page. To increase the reliability of responses, a message was displayed requesting participants to double check or correct responses if necessary, when a participant selected the same response number for all question items on the current page.
Survey
Emotion regulation. We used the Japanese version (Yoshizu, Sekiguchi, & Amemiya, 2013) of the Emotion Regulation Questionnaire (ERQ-J) developed by Gross and John (2003). The ERQ-J consists of 10 items: six items for the reappraisal factor and four items for the suppression factor. The internal consistency, test-retest reliability, and construct validity of the Japanese version have been verified in previous studies with undergraduate students. Each item was rated on a 7-point scale ranging from 1 (strongly disagree) to 7 (strongly agree).
Mood condition. To measure positive and negative moods, we used the Japanese version (Sato & Yasuda, 2001) of the Positive and Negative Affect Schedule (PANAS; Watson, Clark, & Tellegen, 1988). The PANAS is designed to assess participants' current moods toward each item using a 6-point scale ranging from 1 (strongly disagree) to 6 (strongly agree). The scale consists of 16 items with eight items each for positive and negative moods. The present study calculated the total score of the eight items for each mood and used these results for analysis.
Mental health. To measure mental health, we used the 12-item version of the General Health Questionnaire (GHQ-12; Goldberg & Williams, 1988;Fukunishi, 1990). The GHQ-12 is designed to rate each item using a 4-point scale, with higher scores indicating poor mental health. The present study used the total score of the 12 items as the mental health index.
Results
First, to check the reliability of the ERQ-J factors, we calculated Cronbach's alpha. The results showed adequate internal consistency (α = .83 for the reappraisal factor and α = .75 for the suppression factor). Table 1 shows the means, standard deviations, gender differences, and correlations for the measured variables. t tests for each variable revealed that suppression and negative mood scores were significantly higher for men than for women: suppression, t(934) = 4.80, p < .001, r = .16; negative mood, t(934) = 2.89, p < .01, r = .10. The correlational analysis showed significant moderate correlations among the measured variables, except for the correlations between suppression and mood and between age and suppression.
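As an illustration of the reliability and group-comparison computations reported above, the following sketch computes Cronbach's alpha from item scores and an independent-samples t test with the effect size r = sqrt(t^2 / (t^2 + df)). It is a minimal sketch of the standard formulas in Python, not the authors' analysis code, and the column names (reap_1 to reap_6, suppression_total, gender) are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def t_test_with_r(group_a: np.ndarray, group_b: np.ndarray):
    """Student's t test plus the effect size r reported in the paper."""
    t, p = stats.ttest_ind(group_a, group_b)
    df = len(group_a) + len(group_b) - 2
    r = np.sqrt(t ** 2 / (t ** 2 + df))
    return t, p, r

# Hypothetical usage with a DataFrame `df` of survey responses:
# alpha_reap = cronbach_alpha(df[[f"reap_{i}" for i in range(1, 7)]])
# t, p, r = t_test_with_r(df.loc[df.gender == "male", "suppression_total"].to_numpy(),
#                         df.loc[df.gender == "female", "suppression_total"].to_numpy())
```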
Structural Equation Model Analysis
We examined the fit of the hypothesized model (Figure 1), which was based on previous studies, to the data using covariance structure analysis. We used the goodness-of-fit index (GFI), comparative fit index (CFI), root mean square error of approximation (RMSEA), and Akaike information criterion (AIC) as goodness-of-fit indices (for a review, see Hooper, Coughlan, & Mullen, 2008). A model is generally considered acceptable when the GFI and CFI are .90 or higher and the RMSEA is .10 or lower, although thresholds depend on the field under analysis; a smaller AIC indicates better fit and is used to compare models that explain the same data. The results showed a poor fit for the hypothesized model, χ2(4) = 206.79, p < .01, GFI = .935, CFI = .761, RMSEA = .233, AIC = 240.785. Therefore, based on the results of the Wald test and the modification indices, we created a new model with a path from reappraisal to suppression. We considered the path from reappraisal to suppression valid because (a) a balance between reappraisal and suppression is important for emotion regulation (Arens, Balkir, & Barnow, 2013), and (b) reappraisal is a strategy that people implement before their emotions are fully generated, whereas suppression is a strategy to inhibit the expression of generated emotions (Gross & John, 2003).
The results for the new model (Figure 2) indicated sufficient goodness of fit, χ2(3) = 19.92, p < .01, GFI = .993, CFI = .981, RMSEA = .075, AIC = 54.920. The path values show standardized coefficients, and all paths were significant except the path from age to suppression. Age affected mood and mental health directly, and the positive effects were greater among older participants. In addition, age affected mood and mental health through emotion regulation: older participants were more likely to use reappraisal, which further enhanced positive mood and reduced negative mood. In contrast, age did not affect suppression. Suppression decreased positive mood and increased negative mood regardless of age.
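For readers who wish to reproduce this type of covariance structure analysis, the sketch below specifies the revised model (including the added path from reappraisal to suppression) in semopy, an open-source SEM package for Python. The variable names and the exact path structure are assumptions reconstructed from the text, since Figure 2 is not reproduced here, and the fit statistics computed by semopy will not necessarily match the values reported above.

```python
import pandas as pd
import semopy

# Approximate structure of the revised model described in the text (assumed variable names).
MODEL_DESC = """
reappraisal ~ age
suppression ~ reappraisal
positive_mood ~ age + reappraisal + suppression
negative_mood ~ age + reappraisal + suppression
mental_health ~ age + positive_mood + negative_mood
"""

def fit_sem(df: pd.DataFrame) -> None:
    model = semopy.Model(MODEL_DESC)
    model.fit(df)                         # maximum-likelihood estimation by default
    fit_stats = semopy.calc_stats(model)  # includes chi2, GFI, CFI, RMSEA, and AIC
    print(fit_stats.T)
    print(model.inspect(std_est=True))    # standardized path coefficients

# fit_sem(pd.read_csv("survey_data.csv"))  # hypothetical data file
```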
In addition, a comparison of the standardized coefficients for reappraisal and suppression showed significant differences between the emotion-regulation strategies for each mood (positive mood, p < .05; negative mood, p < .05). These results indicate that reappraisal has a larger impact on mood than does suppression.
Multi-Group Analysis on Gender Differences
To examine gender differences in the effects of age on emotion regulation and of emotion regulation on mood, we conducted a multi-group analysis. First, we simultaneously tested a model across the male and female groups without imposing any equality constraints (baseline model M0: Figure 3). Next, 12 multi-group models with different sets of equality constraints across the gender groups were compared with the baseline model (M0) using chi-square tests. Table 2 presents the series of nested models that were tested. A significant chi-square difference between two models suggests a gender difference on the constrained path. Results revealed that M7 and M11 differed significantly from M0; in M7, the path from reappraisal to mood was constrained to be equal across genders, and in M11, the paths from age to reappraisal and from age to mood were constrained. Compared with women, men were more likely to use reappraisal as they aged, and their moods were more strongly improved by reappraisal.
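The nested-model comparisons reported above reduce to chi-square difference tests: the constrained model's chi-square minus the baseline model's chi-square, evaluated against the difference in degrees of freedom. A minimal sketch follows; the numeric values are placeholders, not the statistics from Table 2.

```python
from scipy.stats import chi2

def chi2_difference_test(chi2_constrained: float, df_constrained: int,
                         chi2_baseline: float, df_baseline: int):
    """Compare a constrained multi-group model against the unconstrained baseline M0."""
    delta_chi2 = chi2_constrained - chi2_baseline
    delta_df = df_constrained - df_baseline
    p_value = chi2.sf(delta_chi2, delta_df)  # survival function of the chi-square distribution
    return delta_chi2, delta_df, p_value

# Placeholder example: one equality constraint adds one degree of freedom relative to M0.
print(chi2_difference_test(chi2_constrained=31.0, df_constrained=7,
                           chi2_baseline=25.0, df_baseline=6))
```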
Discussion
The aim of the present study was to examine the effects of aging on emotion-regulation strategies and gender differences in those effects. Regarding reappraisal, in light of the positivity effect in older adults (Charles et al., 2003) and the negativity bias in young adults (Baumeister et al., 2001), we expected that the use of reappraisal would increase with age. In addition, the SST predicts that older adults with limited time horizons are more likely to be motivated toward maintaining emotionally meaningful goals; in this respect, we expected that maladaptive suppression would be used less frequently with aging. The results indicated gender differences in aging effects on emotion regulation. Men were more likely to use reappraisal as they aged, but no correlation was found between suppression and age. In women, aging did not affect reappraisal or suppression. In regard to reappraisal, the effect of aging was thus observed in men but not in women. This gender difference can be explained by a gender difference in the cognitive process of reappraisal. McRae et al. (2008) investigated brain activity during reappraisal by showing negative emotion-eliciting images to participants. According to their findings, compared with women, men demonstrated down-regulated amygdala activity related to emotional responses and less prefrontal activity related to cognitive and emotional control. Prefrontal activity is commonly considered to reflect the effortful and conscious processes of cognitive and emotional control, and McRae et al. (2008) pointed out the possibility that men use reappraisal automatically with less effort than do women. Unlike McRae et al. (2008), Domes et al. (2010) reported that small clusters within the prefrontal cortex were more activated in men than in women; however, men also showed activation in other brain regions (e.g., the mid-temporal gyrus, amygdala, insula, and fusiform gyrus). Domes et al. (2010) therefore suggested that men may reappraise situations more effectively than women by using a widespread brain network. Aging-related decline in brain function is more apparent in the prefrontal area (Raz et al., 2005). For men, the use of reappraisal may thus be an automatic process and/or a widespread-network process that does not depend only on the prefrontal area, which declines with age. Although research has not reached consensus regarding sex differences in the cognitive processes underlying reappraisal, these findings may explain why reappraisal increased more with age among men than among women. Moreover, given that men depend on an automatic process and women depend on a conscious process, we predicted that there would be no aging effect in men but that the use of reappraisal would decrease in women. However, aging did not negatively affect women's use of reappraisal. Reappraisal in women was likely promoted by the positivity effect (Mather & Carstensen, 2005) commonly seen among older adults, which involves paying greater attention to positive information and recalling it more than negative information.
Suppression controls expressions of not only positive emotions but also negative emotions. However, emotions themselves are not removed; therefore, suppression generates a sense of inconsistency between the inner experience of emotions and the outer expression of emotions or behaviors. For this reason, suppression is considered a maladaptive strategy (Gross & John, 2003) and we predicted that the use of suppression would decrease in late adulthood. However, no relationships between suppression and aging were observed, regardless of gender. Although an aging effect was specifically observed in men for reappraisal, it was not observed for suppression. This further suggests that the effects of aging differ by type of emotion regulation. Shiota and Levenson (2009) conducted a study on young (20s), middle-aged (40s), and older (60s) adults to examine the effects of reappraisal and suppression on (a) subjective emotional experience; (b) physiological indices, such as heartbeat and blood pressure; and (c) facial expression while participants viewed sad and disgusting film clips. The findings from their study elucidated that older adults are better at positive reappraisal compared with other generations, but no differences between age groups were found for suppression. Moreover, reappraisal had a modest effect in reducing facial expression in all ages. From these results, Shiota and Levenson (2009) stated that aging effects on emotion regulation may be observed in the internal aspects of emotions (subjective experience and peripheral physiology), but not in the expression of emotional responses. In our study, therefore, the use of suppression did not increase with age.
In the initial hypothesized model, reappraisal and suppression were set as independent variables, as previous research has found no association between reappraisal and suppression (e.g., r = −.09, Spaapen et al., 2014;r = −.03, Haga et al., 2009). However, this model was dismissed on the basis of the covariance structure analysis. In response, we adopted another model with a path from reappraisal to suppression, repeated the analysis, and achieved sufficient goodness-of-fit indices. This result implies that, whereas aging does not directly affect suppression, reappraisal increases with age and suppression increases through reappraisal. Reappraisal enhances positive mood expression and reduces negative mood expression (Gross & John, 2003). In addition, older adults positively reappraise conflicts to control negative emotions (Diehl et al., 1996). The correlation between reappraisal and suppression observed in the present study suggests that, by using reappraisal or reinterpretation of a negative experience, individuals attempt to prevent an inconsistency between the subjective emotional experience and the expressed emotion.
In addition, the effects of suppression on mood were smaller than the effects of reappraisal in the present study. A meta-analysis of the correlations
between emotion-regulation strategies and mental health by Aldao et al. (2010) reported that adaptive strategies were weakly correlated with depression, whereas maladaptive strategies, including suppression, were strongly correlated with depression. The difference between these findings and ours regarding the effects of emotion regulation on mood and mental health is presumably due to cultural differences in emotion regulation. A study of undergraduate students across 23 countries examined the effects of cultural differences on reappraisal and suppression (Matsumoto, Yoo, Nakagawa, & Cul, 2008). The study reported that (a) the effect of cultural differences was larger for suppression than for reappraisal, and (b) suppression is necessary for producing emotional responses that best fit a social context. In countries such as Japan, people value interpersonal relationships and place high importance on self-control of thoughts and behaviors that could hinder social solidarity and traditional order. A study of Japanese undergraduates (Yoshizu et al., 2013) did not find any correlation between suppression and negative emotional experience or well-being. The participants in our study were also Japanese, and the correlations between suppression and mood were not significant. These findings suggest that suppression is not a maladaptive strategy in a culture where people frequently use suppression. In addition, our finding that age and gender did not affect suppression indicates that the participants did not experience major discomfort in using suppression, because suppression is routinely used in their culture regardless of age and sex.
Limitations of the Study and Future Directions
The present study provided evidence that the effects of aging on emotion regulation differ between reappraisal and suppression and that there are gender differences in those effects. Nonetheless, this study's methodology introduced some limitations. First, we did not directly examine the efficacy of each strategy; therefore, we cannot say that older men use reappraisal more effectively than women or other generations. Experimental methods should be used to determine whether aging affects the efficacy of these strategies. Second, the significant direct paths from age to mood and from age to mental health indicate that factors other than reappraisal and suppression influence mood and mental health. Third, although general mental health was the outcome in this study, gender differences may change with a more specific outcome, such as depression, or a positive outcome, such as happiness. In addition, we conducted an online survey, and the rate of Internet use among the elderly is lower than among other populations, so some selection bias may have affected the sample. This research also suggests key directions for future research. Our findings regarding suppression differed from those of previous studies. The use of emotion-regulation strategies differs by ethnic group (Arens et al., 2013; Consedine, Magai, & Horton, 2005; Flynn, Hollenstein, & Mackey, 2010) and culture (Kwon et al., 2013; Turliuc & Bujor, 2013). The emotion-regulation strategies that people value differ between cultures that place importance on protecting and promoting the pursuit of individual happiness and those that place value on social order and human relationships (Matsumoto et al., 2008). Future studies should consider cultural influences, such as how much people value the pursuit of individual happiness or human relationships, to elucidate cultural effects on emotion-regulation strategies and on the correlations between aging and emotion-regulation strategies.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by a Grant-in-Aid for JSPS (Japan Society for the Promotion of Science) KAKENHI (grant numbers 15K04066, 26870365, 26780406).
TOMATO SEEDLING GROWTH, EARLINESS, YIELD, AND QUALITY FOLLOWING PRETRANSPLANT NUTRITIONAL CONDITIONING AND LOW TEMPERATURES
-1 and in 1989 with N at 50 to 100 mg·liter-1, with no further increases from 100 to 200 mg·liter-1. Low-temperature exposure had no effect on earliness, yield, or quality. A PNC regime combining at least 200 mg N/liter and up to 10 mg P/liter should be used to nutritionally condition 'Sunny' tomato seedlings to enhance yield.
Most tomato fields in coastal South Carolina are established using greenhouse-grown transplants. Once planted, the seedlings may be exposed to near-chilling temperatures for weeks before fields warm. Low-temperature stresses may delay flowering and fruiting and possibly reduce total yields and quality. We hypothesized that earliness and overall yields would be improved using pretransplant nutritional conditioning (PNC), whereby seedlings are nutritionally conditioned during greenhouse transplant production to enable them to better tolerate transplant stresses and enhance earliness.
PNC has been shown to have long-term effects on muskmelon and tomato (Dufault, 1986;Weston and Zandstra, 1989). Although muskmelon transplant shock increased as PNC levels increased, recovery from shock was faster with higher PNC regimes (Dufault, 1986). Earliness was affected by high vs. low PNC for muskmelon (Dufault, 1986) and tomato (Weston and Zandstra, 1989), but total yields were not affected.
The influence of PNC on improving cold tolerance of tomatoes at near-freezing temperatures is unknown. Low temperatures may reduce survival and possibly delay early tomato yields (Brasher and Westover, 1937). However, seedlings conditioned with a 48-h low-temperature exposure (12.5C) were more cold-tolerant than those conditioned for 3 h at the same temperature (Wheaton and Morris, 1968). Tomato productivity increased in greenhouse studies when seedlings were exposed to chilling (10-13C) temperatures, especially in conjunction with high N nutrition (Wittwer and Teubner, 1957). The objectives of our study were
to determine the effects of N and P PNC and low temperature on tomato seedling growth, earliness, yield, and fruit quality.
Materials and Methods
Influence of NP and low temperatures on seedling growth. 'Sunny' tomato seeds were planted on 2 Mar. 1988 in quartered Todd flats (size 150, volume 30.5 cm3; Speedling, Sun City, Fla.) filled with Sogemix #3 peatmoss/vermiculite medium (Sogevex, Apopka, Fla.). A soil test indicated (in mg·liter-1) 8N-28P-103K and a pH of 5.7. The range of pretransplant nutritional regimes was based on a previous greenhouse study (Melton and Dufault, 1991). Solutions consisted of factorial combinations of N from calcium nitrate at 100, 200, and 300 mg·liter-1 and P from calcium phosphate at 10, 40, and 70 mg·liter-1. Additionally, the following were included in all nutrient solutions: K (potassium sulfate at 100 mg·liter-1); magnesium sulfate (70 mg·liter-1); and calcium carbonate (471 mg·liter-1, included to prevent a calcium confounding effect). Micronutrients were supplied in all nutrient solutions with Soluble Trace Element Mix (Peter's Fertilizer Products, W.R. Grace & Co., Allentown, Pa.) at the recommended rate of 313 mg·liter-1. The pH of each solution was adjusted to 7.0 using H2SO4 or NaOH.
The flats were placed in a greenhouse and maintained at a mean of 19C/26C (night/day). Each replication (a quartered flat) consisted of 32 seedlings. The nine NP PNC treatments were replicated four times and arranged in a randomized complete block design. The first nutrient application was made on 16 Mar. 1988 at the second true-leaf stage, 14 days after seeding.
The flats were floated in nutrient solutions in 38 × 25 × 9-cm plastic storage boxes (Max Klein Co., Baraboo, Wis.) for 1 h, then drained for 1 h, and returned to their respective bench locations. Nutrient solutions were applied three times per week until 25 Mar., when at least one treatment across all replications had reached minimal transplanting stage (≈15 to 20 cm in height with five true leaves). A total of five PNC applications were made.
On 27 Mar., the seedlings in all the PNC treatments were placed for eight consecutive nights in darkness at ≈2C in a cooler from 1900 to 0700 HR and returned to the greenhouse (26 ± 6C) from 0700 to 1900 HR to simulate low-temperature extremes possible in the field. The seedlings (including those held in the greenhouse) were not nutritionally conditioned during low-temperature exposure but were irrigated with tap water only. Although we attempted to illuminate the seedlings in the coolers during low-temperature stress treatments, heat from the lights within the coolers increased unit temperatures significantly. Hence, low-temperature imposition was scheduled to occur in darkness to reduce the number of hours of light deprivation. The effect of the increased periods of darkness imposed during low-temperature stress on seedling growth is unknown.
Nine plants were randomly chosen from each flat for seedling growth analysis on 4 Apr. 1988. Variables measured included: shoot fresh weight; stem diameter (at the cotyledonary node); expanded true-leaf number (leaves with clearly visible petioles); leaf area per seedling, including petiole (LI-3100 leaf area meter; LI-COR, Lincoln, Neb.); and shoot and root dry weights per treatment plot (dried for 24 h at 65C). One leaf disk (0.31 cm2) from the second true-leaf tip of five plants per treatment was removed with a hole punch and composited, and total chlorophyll was determined (Moran, 1982). Chlorophyll content was used to quantify the visible differences in greenness among the PNC treatments.
Growth data were subjected to a factorial analysis of variance (ANOVA). The relative importance of N and P on tomato seedling growth was determined by partitioning the total sum of squares for treatments into main and interaction effects and by expressing their individual contribution to variation as a percentage of the total sum of squares for the model (composed of only those sources of variation in the ANOVA).
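A hedged sketch of this sum-of-squares partitioning using statsmodels is given below. The column names are hypothetical, and the choice of which terms enter the denominator (here, only the N, P, and N x P treatment terms) is our reading of the passage rather than something specified in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def ss_contributions(df: pd.DataFrame) -> pd.Series:
    """Percentage of the treatment sum of squares attributable to N, P, and N x P."""
    fit = smf.ols("shoot_dry_wt ~ C(block) + C(N) * C(P)", data=df).fit()
    table = anova_lm(fit, typ=2)
    treatment_ss = table.loc[["C(N)", "C(P)", "C(N):C(P)"], "sum_sq"]
    return 100 * treatment_ss / treatment_ss.sum()

# Hypothetical usage: columns = block, N (100/200/300), P (10/40/70), shoot_dry_wt
# print(ss_contributions(pd.read_csv("seedling_growth_1988.csv")))
```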
Refinements of the experimental treatments used in 1988 were necessary in 1989 to clarify the meaning and precision of the experiments. Ineffectual treatments, such as the 70-mg P/liter and 300-mg N/liter rates, were abandoned. Generally, similar experimental procedures were followed in 1989 as in 1988, with the following modifications. Seeds were planted on 17 Mar. 1989. Nutrient concentrations included N at 50, 100, and 200 mg·liter-1 and P at 10 and 40 mg·liter-1. The Ca level was adjusted to 309 mg·liter-1. In addition, two flats per PNC treatment were used; one flat was exposed to low temperatures at the end of the PNC applications, while the other flat remained in the greenhouse. This procedure allowed separation of the effect of PNC on seedling response to high- and low-temperature environments near the time of field planting. The first nutrient application was made on 29 Mar. 1989 at the first true-leaf stage. A total of five PNC applications were made. On 7 Apr. 1989, four replications (flats) of each nutrient treatment were placed in darkness in a cooler at ≈2C from 1900 until 0700 HR for 8 consecutive days. Low-temperature-stressed plants were returned to a greenhouse at a daytime mean of 26 ± 6C from 0700 until 1900 HR daily. Another complete set of PNC treatment flats remained in the same greenhouse until field planting and was not exposed to low-temperature treatments. Seedling growth data were taken on 14 Apr., 28 days after seeding, from nine randomly chosen plants per flat.
Influence of NP and low temperatures in the field. Seedlings from each treatment flat were hand-transplanted on 5 Apr. 1988 and 14 Apr. 1989 at the Clemson Univ. Coastal Research and Education Center, Charleston, S.C. Soil type was a Yauhannah loamy fine sand, an Aquic Hapludult. The field was fertilized with (in kg·ha-1) 165N-76P-179K before planting. Dolomitic limestone was added as indicated by soil tests. Beds on 1.8-m centers were fumigated with methyl bromide at 220 kg·ha-1 and mulched with 0.3-mm (1.25 mil) black plastic. Plants were spaced 0.5 m apart within rows, and each 4.6-m-long test plot contained 10 plants. Each treatment was replicated four times in a randomized complete block design. Plots were drip-irrigated as necessary when tensiometers (30 cm deep) read 0.2 Pa in 1988 and 0.1 Pa in 1989. Tomatoes were staked, tied, suckered, and sprayed with pesticides following standard commercial recommendations (Cook, 1980). The fruits from the first flower cluster were harvested from a plant in the center of each treatment plot the day before the first harvest to determine whether PNC and/or low-temperature exposure affected the weight and quality of the first cluster. All of the remaining fruit were harvested at the breaker stage (Ware and McCollum, 1980). The fruit was graded using USDA standards of small (<5.5 cm), medium (5.5-7.0 cm), large (>7.0 cm), and cull fruit. Cull fruit was categorized separately by major blemish factor as follows: cracking, blossom-end rot (BER), green shoulders, and seams. There were three weekly harvests in 1988 and four in 1989. Data were analyzed by harvest date and also pooled over the entire harvest season each year. Data analysis was the same as outlined for the greenhouse study.
Results
Influence of PNC on seedling growth. The effect of N and P nutritional conditioning on seedling growth differed between 1988 and 1989. Plant height, stem diameter, leaf number, leaf area, and fresh and dry shoot weights were higher at 200 than at 100 mg N/liter in 1988, with no further increases above 200 mg·liter -1 (Table 1). Root dry weight increased with N from 100 to 300 mg·liter -1 . Total chlorophyll content was higher at 300 than at 100 or 200 mg N/liter, substantiating an apparent visual enhancement of greenness with high N.
Plant height, stem diameter, leaf number, leaf area, and fresh and dry shoot weights responded similarly in 1989 as in 1988, but with N in the range of 50 to 200 mg·liter -1 (Table 1). Total chlorophyll was higher with 200 mg N/liter than at the lower concentrations. Root dry weight increased only with N from 50 to 100 mg·liter -1 . Nitrogen interacted with low-temperature exposure to affect all of the variables; however, only 5% to 8% of the variation was attributable to this effect, which we considered to be negligible.
Phosphorus PNC affected all of the growth variables measured in 1988 (Table 1). Stem diameter, leaf number, leaf area, fresh shoot weight, and dry shoot and root weights were higher with 40 than with 10 mg P/liter, with no further effect above 40 mg·liter -1 . Plant height increased with P concentration from 10 to 70 mg·liter -1 . Total chlorophyll was lower with 40 than with 10 mg P/liter, with no further decrease at 70 mg·liter -1 . Phosphorus had less effect in 1989 than in 1988; at 40 mg·liter -1 , it increased stem diameter, leaf area, and fresh and dry shoot weights relative to 10 mg·liter -1 . The higher P concentration caused a decrease in total chlorophyll. Phosphorus did not interact with N or low-temperature exposure to affect any growth variable.
Low temperatures in 1989 significantly reduced all growth variables in comparison to plants not exposed to chilling (Table 1).
Influence of PNC and low temperature in the field. Yield and quality of fruit harvested from the first flower cluster were not affected in 1988 or 1989 by nutrient treatment or low-temperature exposure (data not shown).
In 1988, early yield in the first harvest was unaffected by N and P regime (data not shown). However, by the last harvest, N accounted for a major portion of the variation, and a significant amount was also assigned to the main effect of P, indicating that PNC has long-term effects on productivity (Table 2). In 1988, yields from the third harvest increased by 17% for N PNC rates of 200 and 300 mg·liter-1 relative to seedlings conditioned with 100 mg N/liter. Increased P, like N, produced higher yields in the third harvest with 40 mg P/liter than with 10 mg·liter-1; however, increasing P to 70 mg·liter-1 did not increase yield further.
Nitrogen PNC in 1989 accounted for a significant portion of variation in early yields in comparison to the other main effects and interactions; however, the effect of N PNC diminished by the last harvest (Table 2). Seedlings conditioned with N at 200 mg·liter -1 yielded more fruit in the first harvest than those conditioned with the 50-or 100-mg·liter -1 rates. In the second and third harvests, seedlings conditioned with N at 100 mg·liter -1 yielded more fruit than those conditioned with 50 mg·liter -1 ; N at 200 mg·liter -1 did not increase yield further. Pretransplant conditioning with N, P, or cold stress had no significant effect by the last harvest.
Analysis of the pooled yields over all harvests in 1988 indicated that total marketable yield was significantly higher at 200 than at 100 mg N/liter (Table 2); however, N at 300 mg·liter -1 did not increase yield further. In 1989, total marketable yield from plants conditioned with N at 100 or 200 mg·liter -1 was significantly greater than N at 50 mg·liter -1 . In 1989, neither P nor low-temperature exposure affected any yield variables (data not shown).
Quality blemishes were significantly affected by PNC in 1989 (Table 3), but not in 1988 (data not shown). BER was a severe problem in 1989, affecting 30% to 40% of all cull fruit produced in this growing season. BER was most severe at a PNC rate of 50 mg N/liter. At 100 mg N/liter, the incidence of concentric cracking increased in comparison to the other N rates. The percentage of seamed fruit increased from 2% to 6% at 50 and 200 mg N/liter. Green shoulders were more common at 200 mg N/ liter than at lower levels. Neither low-temperature exposure nor P had any effect on incidence of fruit blemishes in 1989 (data not shown).
Discussion
A traditional practice to harden seedlings has been to withhold nutrients before transplanting. Such practices are thought to be detrimental to the production potential of tomatoes. Our results indicated that high N and P PNC enhanced seedling shoot and root growth. Weston and Zandstra (1989) recommended 400 mg N/liter as optimal for the production of large seedlings that produce increased early and total yields. In 1988, however, 300 mg N/liter did not yield more fruit than the 200-mg·liter-1 PNC rate. We suggest that N at 50 mg·liter-1 is deficient and may permanently reduce yield potential. Courter et al. (1977) stated that plants overly hardened with water withdrawal, temperature stress, and/or nutrient withdrawal resume growth slowly, may never fully recover, mature later, and may have reduced yields.
Our research suggested that concern over the effects of low temperature experienced in the field following transplanting may be unwarranted with 'Sunny' tomato. Hurd and Cooper (1970) found that flower initiation in many tomato cultivars started within 3 weeks of cotyledon expansion, coincident with the third oldest leaf being just in excess of 10 mm long. Apparently, in our study, first fruit initials were not affected during the time of low-temperature exposure, since temperatures at 2C would have arrested their development. Low-temperature exposure decreased seedling growth and likely was stressful; low temperatures are known to slow plant growth and reduce metabolic rates (Salisbury and Ross, 1985). The plants remaining in the greenhouse during this time were apparently able to continue more active growth in response to moderate night temperatures. However, there were no long-term effects of low-temperature exposure on earliness of fruit set, total yields, or fruit quality.
Nitrogen applied at 50 or 100 mg·liter-1, in contrast to 200 mg·liter-1, reduced yields. Although cultivar differences probably exist in response to PNC, nutritionally conditioning seedlings with at least 200 mg N/liter may enhance marketable yields relative to lower N rates. In tomato transplant production, the nutritional regimes used to produce seedlings have long-lasting effects on earliness and total yields. Therefore, a PNC regime combining at least 200 mg N and 10 mg P/liter should be used to nutritionally condition seedlings during the transplant production phase to promote earlier and higher yields.
Genomic analysis of exceptional responder to regorafenib in treatment-refractory metastatic rectal cancer: a case report and review of the literature
We present the case of a 53-year-old male with metastatic rectal cancer who was treatment resistant to FOLFOX and FOLFOXIRI. Due to a Kirsten rat sarcoma viral oncogene homolog (KRAS) mutation, regorafenib was given in the third line setting. Surprisingly, the patient had a prolonged partial response that lasted 27 months. Mutational status was extensively evaluated to identify potential alterations that might play a role as predictive markers for this unusual event. A poorly characterized but nontransforming mutation in Fms-like tyrosine kinase 4 (FLT4) was present in the tumor. Prior to and at the time of clinical progression, we found amplification of fibroblast growth factor receptor 1 (FGFR1) and epidermal growth factor receptor (EGFR), loss of the FLT4 mutation, and gain of KIT proto-oncogene receptor tyrosine kinase (KIT) G961S suggesting potential roles in acquired resistance.
INTRODUCTION
Regorafenib, an oral multikinase inhibitor, is used in treatment-refractory metastatic colorectal cancer (mCRC) after failure of fluoropyrimidine-, irinotecan-, and oxaliplatin-based therapies. In RAS wild-type patients, progression on EGFR-targeted therapy should also have occurred before use of regorafenib. The CORRECT clinical trial [1] demonstrated an overall survival (OS) benefit of regorafenib over placebo in treatment-refractory mCRC (6.4 months vs 5 months, hazard ratio (HR) 0.77, 95% confidence interval (CI) 0.64-0.94, P = 0.0052). Mean duration of regorafenib treatment was 2.8 months, with an objective response rate of only 1% (5/505). The benefit of regorafenib was also reported in an Asian population in the CONCUR trial [2], which demonstrated an extended OS in regorafenib-treated patients compared to placebo (8.8 vs 6.3 months, HR = 0.55, 95% CI 0.44-0.77, P = 0.00016). However, there are no biomarkers predicting response to this drug, and many patients suffer early progression during treatment with regorafenib. An extensive analysis of circulating tumor DNA and proteins from the CORRECT trial attempted to identify biomarkers able to predict response but was unsuccessful [3]. Here, we report a case of an unusually deep and long-term response to regorafenib and present the molecular characterization of this patient to help elucidate potential determinants of this exceptional response.
CASE REPORT
A 53-year-old male presented with lower abdominal pain, constipation, intermittent episodes of bright red blood per rectum, and significant weight loss of 20 pounds over 3 months. He had no significant past medical or family history, and physical examination was normal. The patient underwent a colonoscopy, which demonstrated an exophytic mass in the rectum causing partial obstruction. Biopsy revealed moderately to poorly differentiated adenocarcinoma arising from a villous adenoma with high-grade dysplasia. Staging investigations revealed liver-limited multiple metastases, with the largest mass measuring 12 centimeters. Carcinoembryonic antigen (CEA) was within normal limits.
A 200-gene next-generation sequencing (NGS) panel was performed on the biopsied primary tumor and identified a KRAS mutation in codon G12S, a tumor protein p53 (TP53) mutation in codon R273C, adenomatous polyposis coli (APC) mutations in codons R1450* and I742fs*, a protein phosphatase 1 regulatory subunit 3A (PPP1R3A) mutation in codon E271D, and a FLT4 mutation in codon F131S. FLT4, also known as vascular endothelial growth factor receptor 3 (VEGFR3) [4], is a member of the VEGFR family, which can be targeted by regorafenib [5]. Since high VEGFR protein expression has been reported on colorectal cancer cells [6], we assessed the functional significance of this variant in the Ba/F3 cell reporter assay. This screen showed no IL-3-independent growth (a surrogate for transforming ability), indicating that this FLT4 variant is likely non-transforming. Molecular characterization of the tumor is shown in Table 1. CpG island methylator phenotype (CIMP) was high (abnormal methylation in 6/6 target genes), and microsatellite instability testing by immunohistochemistry demonstrated a microsatellite stable tumor.
Due to the patient's prior rectal bleeding and in situ primary malignancy, FOLFOX was initiated with bevacizumab omitted. After 4 cycles of treatment, an interval CT scan showed progression of the hepatic metastases and rectal mass according to the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1 guideline [7]. The patient's treatment was changed to FOLFIRINOX, with an initial partial response (PR) after 4 cycles. However, after 8 cycles the patient once again demonstrated progressive disease in the liver and rectum. The patient was subsequently started on regorafenib at a dose of 120 mg per day for 3 weeks of each 28-day cycle, as per MD Anderson's institutional dosing practice. An interval CT scan of the abdomen after 2 months showed a dramatic response: hepatic metastases decreased in size from 9.8 to 7.7 cm in the left lobe and from 11.6 to 9.3 cm in the right lobe, which was confirmed after 4 months. He continued on treatment without any dosing modifications. After 10 months of regorafenib, he required a dose reduction due to grade 2 hand-foot skin reaction (HFSR), which was most pronounced in the third week of each cycle. Subsequently, his dose was changed to 120 mg per day for the first two weeks and 80 mg per day for the third week. After 15 months of treatment, a flexible sigmoidoscopy was performed and showed an ulcerative, non-obstructive mass at the site of the primary tumor, which was biopsied and confirmed residual poorly differentiated adenocarcinoma. A repeat 200-gene NGS panel was performed on this biopsy and identified KRAS G12S, TP53 R273C, and APC I742fs*, which had been reported at the time of diagnosis. However, new gene alterations were identified in the ataxia telangiectasia and Rad3-related (ATR) gene at codon I774fs*, along with gene amplifications in v-myc avian myelocytomatosis viral oncogene lung carcinoma derived homolog (MYCL), cyclin dependent kinase 4 (CDK4), and KRAS. Figure 1 shows the maximum response of the liver metastases after 17 months of regorafenib treatment; at best response, the liver masses measured 5.4 x 4 cm in the left lobe and 8.3 x 9.5 cm in the right lobe.
Treatment with regorafenib was continued with good tolerance. After 20 months of regorafenib, a CT scan of the abdomen showed stable liver metastases but an increased size of the rectal mass. Re-biopsy of the rectal tumor was obtained to assess for mechanisms of resistance, and sequencing identified FGFR1 and EGFR gene amplifications, an E1A binding protein p300 (EP300) mutation in codon L1755V, and a Wolf-Hirschhorn Syndrome Candidate 1-Like 1 (WHSC1L1) mutation in codon E123Q. Concurrent chemoradiation (CCRT) with capecitabine 650 mg/m2 twice daily and a total dose of 50.4 Gy was initiated, and regorafenib was placed on hold. Upon completion of CCRT, regorafenib was re-initiated with continued disease control in the liver. Unfortunately, after 27 months of regorafenib treatment, an abdominal CT revealed progression of the liver metastases. Re-biopsy of the liver was attempted, but there were no viable cells to characterize. Therefore, circulating tumor DNA (ctDNA) sequencing was used to characterize alterations after progression on regorafenib. Analysis revealed a KIT mutation at codon G961S and PIK3CA and MYC gene amplifications that had not been noted on prior testing, and confirmed the FGFR1 and EGFR amplifications previously identified in the progressing rectal tumor tissue. The mutational profile is summarized in Table 1 and Supplementary Table 1.
As the patient had not received any other prior anti-VEGF therapy, he was started on irinotecan plus aflibercept. Restaging CT scans after 2 and 4 months showed stable disease, however the patient developed grade III diarrhea during therapy leading to the omission of subsequent irinotecan after 4 months. The patient continued aflibercept for a further 2 months at which point he was found to have hepatic progression. The patient was subsequently transitioned to best supportive care.
DISCUSSION
We report the case of an exceptional responder to regorafenib in mCRC and describe the alterations identified through molecular testing, in an attempt to elucidate a potential mechanism of sensitivity in this patient.
Regorafenib, an oral multikinase inhibitor, can inhibit the activity of several protein kinases, including those involved in tumor proliferation (KIT, PDGFR, and RET), tumor angiogenesis (VEGFR1-3, TIE2), and the tumor microenvironment (PDGFR-B, FGFR) [5,8,9]. The Food and Drug Administration approved regorafenib in 2012 for the treatment of mCRC after failure of standard therapies, including fluoropyrimidine-, oxaliplatin-, and irinotecan-based chemotherapy, anti-VEGF therapy, and, in KRAS wild-type tumors, anti-EGFR therapy. Regorafenib showed benefit in both KRAS-wild-type and KRAS-mutant subgroups [1,2]. Prior attempts to identify a useful biomarker to select patients who will benefit from regorafenib have assessed stereotypic mCRC aberrant genes, including KRAS, BRAF, PIK3CA, and MMR status, and failed to correlate mutations in any of these genes with treatment response [2,3]. Teufel et al. suggested a benefit of regorafenib therapy in patients with high-risk molecular characteristics defined by gene expression clusters (HR = 0.10; 95% CI 0.02-0.35) compared to a lower-risk subgroup (HR = 0.58; 95% CI 0.44-0.77), although this has not yet been validated [3]. Moreover, markers of angiogenesis may have potential utility in identifying responders. Eisen et al. reported that higher baseline levels of TIMP metallopeptidase inhibitor 2 (TIMP2) and soluble tyrosine kinase with immunoglobulin-like and EGF-like domains 1 (TIE1) were correlated with regorafenib treatment response [10]. Data from CORRECT [3] also demonstrated that high levels of soluble TIE1 protein were associated with an OS benefit in the regorafenib group. Additionally, Giampieri et al. reported that patients who harbored the VEGF-A rs2010963 germline polymorphism showed better PFS (HR = 0.49, 95% CI 0.33-0.81) and OS (HR 0.52, 95% CI 0.34-0.99) when treated with regorafenib compared to those without this polymorphism [8]. While hypothesis generating, all of these angiogenic markers suffer from limited power due to multiple comparisons and require further study.
The patient reported here had an exceptional response to regorafenib of 27 months, which has never been reported previously. The most recently published data from Japan [11] described 18 months of partial response in a patient with mCRC treated after progression on FOLFOX, FOLFIRI, and XELOX regimens; however, no molecular analysis was reported. In this study, we utilized sequential molecular testing before, during, and upon progression on regorafenib treatment. We found several gene mutations, including KRAS codon G12S, TP53 codon R273C, and APC codon I742fs*, which persisted from diagnosis through treatment. We also found several transient mutations that appeared or disappeared over the course of treatment, including APC codon R1450*, PPP1R3A codon E271D, an ATR frameshift at codon I774fs*, EP300 codon L1755V, and WHSC1L1 codon E123Q. However, these genes do not have a biologic rationale to support their use as predictive biomarkers and instead likely reflect clonal diversity over time.
FLT4 mutations are rare in CRC, having been reported in only 2.4% (5/212) of sequenced CRC cases in The Cancer Genome Atlas (TCGA) dataset [12,13]. FLT4 F131S is located within the extracellular region of the FLT4 protein [14]. FLT4 F131S had not previously been functionally characterized; therefore, this mutation was analyzed in the Ba/F3 system, which revealed that it does not induce growth-factor-independent cell growth and is thus characterized as likely non-transforming/benign. Nevertheless, it cannot be ruled out that FLT4 mutations might sensitize cells or tumors to regorafenib treatment. Further experiments characterizing this mutation with regard to its therapeutic effect may be needed.
The mechanisms of pre-existing and acquired resistance to regorafenib are unknown. Recent data from a preclinical study demonstrated Notch-1 upregulation in regorafenib-resistant tumor cells, and inhibition of Notch-1 in resistant cells partially restored sensitivity to regorafenib treatment in vitro, suggesting Notch as a potential mechanism of acquired resistance [15]. Gene amplifications are a common mechanism of acquired resistance to targeted therapies in CRC; examples include BRAF amplification in MEK-inhibitor-treated tumors [16] and HER2 and MET amplification in tumors treated with anti-EGFR antibodies [17,18]. Many acquired gene amplifications were identified in the patient's tumor profile (Table 1); several of these amplifications were present in the responding tumor (MYCL, CDK4, and KRAS) and are less likely to be associated with resistance. Others, such as MYC and PIK3CA, were present only in the ctDNA and not seen in the progressing rectal primary. In contrast, FGFR1 and EGFR amplifications were present at the time of progression of the rectal primary and later upon progression of the liver metastases, and are therefore candidate resistance mechanisms.
FGFR1 encodes a member of the FGFR family, which includes four receptor tyrosine kinases, FGFR1-4 [19]. FGFR1 gene amplification has been reported in numerous malignancies, including breast cancer, squamous cell carcinoma of the lung, head and neck cancer, and esophageal cancer [20][21][22][23]. FGFR1 amplifications have been reported in 2.8% (6 cases) of 212 sequenced CRC in the TCGA dataset [12,13]. Although FGFR1 has recently emerged as a promising target in non-small cell lung cancer, data from CRC are limited. EGFR belongs to a family of cell signaling receptors and is known to activate a cascade of multiple signaling pathways. The presence of an EGFR abnormality, including mutation, amplification, or overexpression, can result in overactivity of the EGFR protein and excessive proliferation [24]. EGFR amplifications have been reported in 0.5% (1 case) of 212 sequenced CRC in the TCGA dataset [12,13]. Although EGFR mutations have been reported to predict sensitivity to EGFR tyrosine kinase inhibitors in lung cancer [25], little is known about the impact of EGFR amplification, either for selecting patients for anti-EGFR treatment or as a contributor to resistance.
Each of the above amplifications was noted in pathways that are adjacent to, or in line with, a pathway targeted by regorafenib, and our molecular characterization shows multiple concurrent potential resistance mechanisms induced by regorafenib. However, to our knowledge, no gene amplification has previously been established as a potential resistance mechanism for regorafenib.
A KIT mutation was the only mutation noted upon tumor progression during regorafenib treatment. KIT encodes the human homolog of the proto-oncogene c-kit, which belongs to the type III tyrosine kinase receptor family [26]. Binding of its endogenous ligand, stem cell factor (SCF), initiates multiple downstream signaling pathways [27][28][29], including the mitogen-activated protein kinase (MAPK) pathway, the phosphatidylinositol 3-kinase (PI3K)/AKT pathway, the Janus kinase/signal transducers and activators of transcription (JAK/STAT) pathway, the PLC-γ signaling transduction pathway and the Src kinase signaling transduction pathway, leading to cell proliferation, survival and migration. KIT mutations have been reported in 2.8% (6/212) of sequenced CRC in the TCGA dataset [12,13]. The KIT G961S alteration has not been functionally characterized. It is located at the C-terminal end of the protein, outside of any known functional domain [14]. Although KIT G961S has not been reported to have any clinical significance, the acquisition of any mutation in a kinase targeted by regorafenib suggests that KIT G961S might play a role in acquired resistance.
CONCLUSION
We report a case with an unusually prolonged response to regorafenib in mCRC and we highlight the development of FGFR1/EGFR amplifications and a KIT G961S mutation as potential mechanisms of acquired resistance in this patient. The molecular features of this exceptional responder may provide insight into genomic alterations that develop during regorafenib treatment and that may lead to acquired resistance.
T200 gene panel
The T200 is a next generation sequencing panel that provides sequencing coverage of all exons of 201 cancer-related genes. The panel consists of 4874 exons encoding 938,607 bases and was designed with a higher read depth in order to provide the ability to call mutations at lower allele frequencies (down to 1%). Detailed methods associated with this assay have been previously published [30].
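The rationale for higher read depth can be illustrated with a simple binomial sampling model: the deeper a locus is sequenced, the more likely a 1% variant is to be supported by enough reads to be called. The sketch below is a generic illustration under assumed depth values and an assumed minimum of five supporting reads; it is not the T200 pipeline's actual variant-calling model.

```python
from scipy.stats import binom

def detection_probability(depth, vaf, min_alt_reads):
    """Probability of seeing at least `min_alt_reads` variant-supporting reads
    at a locus sequenced to `depth`, for a true variant allele fraction `vaf`,
    under simple binomial sampling."""
    return binom.sf(min_alt_reads - 1, depth, vaf)

# Hypothetical example: a 1% variant allele fraction, requiring >= 5 supporting reads
for depth in (100, 500, 1000, 2000):
    p = detection_probability(depth, vaf=0.01, min_alt_reads=5)
    print(f"depth {depth:>4}: P(detected) = {p:.3f}")
```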
Guardant360™ assays
The Guardant360™ is a commercially available next generation sequencing panel developed for use with circulating tumor DNA (ctDNA). The panel covers 68 cancer-related genes and is able to identify mutations and copy number alterations. Cell-free DNA is extracted from plasma and genomic alterations are analyzed by massively parallel sequencing of amplified target genes. The minimum detectable mutant allele fraction is dependent on the concentration of ctDNA in a patient's serum at the time of blood draw [31].
CIMP methylation
The assay is performed using either formalin-fixed, paraffin-embedded tissue blocks or frozen tissue samples. DNA extracted from formalin-fixed, paraffin-embedded tissue or frozen tissue samples is treated with bisulfite to convert unmethylated cytosine to uracil. PCR amplification of both unmethylated and methylated MINT1, MINT2 and MINT31 loci, and of the promoter sequences of the p14, p16 and hMLH1 genes, is performed and methylation status is assessed by pyrosequencing. The tumor is considered CIMP-High if at least 40% of the markers tested show methylation, and CIMP-Low if fewer than 40% of the markers show methylation.
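The CIMP call described above reduces to a simple threshold rule over the six-marker panel. The sketch below illustrates that rule; the example methylation calls are hypothetical placeholders, not patient data.

```python
# Markers assayed by pyrosequencing, as listed above
CIMP_MARKERS = ["MINT1", "MINT2", "MINT31", "p14", "p16", "hMLH1"]

def classify_cimp(methylation_calls):
    """Classify a tumor as CIMP-High or CIMP-Low.

    `methylation_calls` maps each tested marker to True (methylated) or False.
    CIMP-High requires methylation at >= 40% of the markers tested.
    """
    tested = [m for m in CIMP_MARKERS if m in methylation_calls]
    methylated = sum(1 for m in tested if methylation_calls[m])
    return "CIMP-High" if methylated / len(tested) >= 0.40 else "CIMP-Low"

# Hypothetical example: 3 of 6 markers methylated (50%) -> CIMP-High
example = {"MINT1": True, "MINT2": True, "MINT31": False,
           "p14": False, "p16": True, "hMLH1": False}
print(classify_cimp(example))
```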
IL-3 dependency Ba/F3 assay
An IL-3-dependent murine Ba/F3 cell reporter model was used to evaluate the functional impact of the FLT4 F131S mutation. The procedure was the same as described previously [32], with a few modifications. Briefly, this pro-B cell line is dependent on IL-3 for proliferation. Oncogenic transformation by a mutation results in IL-3-independent growth, thus highlighting a functionally significant mutation. The FLT4 F131S mutant was introduced into Ba/F3 cells using a lentiviral approach, and the cells were incubated in medium with 0.5 pg/ml IL-3, which is 0.01% of the regular IL-3 concentration used in cell line maintenance. The trace amount of IL-3 in the medium delays IL-3 depletion-mediated cell death and gives the cells time to adapt to the oncogenic mutant. Cell viability was measured after 1, 1.5, and 2 weeks.
Aging Study of Plastics to Be Used as Radiative Cooling Wind-Shields for Night-Time Radiative Cooling—Polypropylene as an Alternative to Polyethylene
Polyethylene has widely been used in radiative cooling applications because of its high transmittance values in the atmospheric window. However, it presents optical and mechanical degradation when exposed to environmental conditions and must be replaced every few months. This paper aims to find an alternative to polyethylene to be used in a unique device, the Radiative Collector and Emitter (RCE), that combines solar collection and night-time radiative cooling. An analysis of the aging evolution of five cheap and commercially available plastic films (two low density polyethylene, one high density polyethylene, one polypropylene, and one fluorinated ethylene propylene) exposed to environmental conditions was performed. FT-IR spectra and mechanical traction tests were performed before and after 90 days of exposure to the environment. Results confirm that polyethylene undergoes a degradation process both when it is covered by a glass and when it is uncovered. However, it maintains high average transmittance values in the atmospheric window. Polypropylene has average transmittance values slightly lower than polyethylene, but its aging behaviour is better since no oxidative processes are detected when the material is covered with glass. For all this, PP-35 is an interesting candidate for night-time radiative cooling wind-shields.
Introduction
Energy consumption has been increasing worldwide due to modern society's energy demands. In the European Union, 40% of the total energy consumption is in buildings. Eurostat [1] concludes that space heating represents 64.1% of the total consumption in buildings, domestic hot water (DHW) 14.8%, and space cooling 0.3%. A recent report by the International Energy Agency also predicts that refrigeration demands will triple worldwide by 2050 if no action is taken [2]. Thus, there is a need to cover heating and cooling energy demands by using more efficient systems.
Solar thermal collectors are a mature and commercially implemented technology to produce hot water from renewable energy. Highly efficient solar collectors for exploiting solar irradiation in an optimum way have been developed in the last decades [3]. Most current cooling systems run on compression cooling cycles, consuming high amounts of electricity, especially during the summer heat peaks. An alternative to produce cold is solar cooling, combining solar thermal collectors with an absorption heat pump. Absorption heat pumps reduce the electrical energy consumption but present drawbacks such as low efficiency, the lack of small capacity units, and the need of high temperatures (>100 °C) to increase their efficiency [4]. Moreover, auxiliary equipment is required, such as the absorption chiller and the cooling tower, which increases the cost of the installation and can result in health problems such as Legionella. Another renewable alternative to produce cold is radiative cooling. Transmittance values in the atmospheric window of around 80% are found for 50 µm low-density polyethylene (LDPE) films [25,26]. However, it is well known that polymers show degradation (thermal, oxidative, chemical, radiative and mechanical) when exposed to outdoor conditions. Abdelhafidi et al. [27] reported that photodegradation of LDPE films is basically due to the ultraviolet (UV) radiation, which can be considered the most harmful factor of plastic degradation. Balocco et al. [28] also demonstrated that photooxidation of polyethylene plastic films implies a process of bond breaking, increasing the amount of low molecular weight material, as well as an increase of its hydrophilicity by the presence of carbonyl groups. In addition, an embrittlement of the material is also detected [29]. Fourier transform infrared spectroscopy (FT-IR) and mechanical tests have been reported by different authors as suitable techniques to follow the degradation of polymers [30][31][32][33][34]. Ali et al. [35] reported a substantial decrease of transmittance when exposing a 50 µm polyethylene film to outdoor conditions for 100 days, which led to a reduction of the radiative cooling system cooling power by 33.3%. Martorell et al. [36] presented an experimental study where a decrease between 3.5% and 9% in the polyethylene average transmittances in the atmospheric window was measured for three winter months of exposure to the environment.
According to Zhang et al. [19], the wind-shield needs to have high mechanical strength to withstand outdoor weather conditions, such as strong winds, strong rain, and even hail. Mechanical tests such as traction tests give insight into this issue. Some authors [37][38][39] have focused research on finding the relationship between the change of the chemical structure and the change in mechanical properties. Carrasco et al. [30] show that the replacement of C-H bonds by C=O bonds results in an increase of the Young's modulus, producing a stiffening of the material.
Thus, it is important to find a material transparent to long-wave radiation (with high thermal transmittance in the wavelength range between 7-14 µm), highly resistant to abrasion and moisture, durable, with zero or very low degree of hygroscopicity, with a certain degree of hardness, certain tensile strength, and a high degree of elasticity. As explained previously, polyethylene is widely used as a wind-shield for radiative cooling, but it shows optical degradation when exposed outdoors and has poor mechanical performance. This work focuses on experimentally studying the optical and mechanical behavior of different widely available and cheap plastic wind-shield candidates, to find an alternative to polyethylene to be used as a radiative cooling wind-shield for combined solar collection and night-time radiative cooling applications.
Materials and Methods
The RCE prototype consists of a modified regular flat plate solar collector (2 m²), with an adaptive cover to produce hot water during the day and cold water (below ambient temperature) at night (Figure 1). The adaptive cover (Figure 2) combines materials with different (almost opposite) optical properties for the solar collector and the radiative cooler. While the solar collection mode requires a cover with high transmittance of radiation in the 0.2-4 µm wavelengths and a low transmittance for the rest of the wavelengths, the radiative cooling mode requires a high transmittance in the wavelength range between 7-14 µm (to radiate to the outer space through the atmospheric window).
Materials
Wind-shield candidates are presented in Table 1. All plastics chosen are visually transparent, highly available, and cheap. Two low density polyethylene samples of 100 µm and 60 µm (LDPE-100 and LDPE-60, respectively) were chosen with the objective of testing the thickness variable. Comparisons between LDPE-60 and high-density polyethylene (HDPE-60) were also studied. In addition, two widely used polymeric plastics, polypropylene (PP-35) and fluorinated ethylene propylene (FEP-50), were considered. Average thickness of the films was tested following UNE-ISO 4593:2010 by measuring 20 random equidistant samples per film with a micrometer (Heudenhain ND287, Traunreut, Germany) (Figure 3).
As an example, Figure 4 shows the thickness distribution for LDPE-100. Thickness differences along the plastic film were observed. The average tested thickness of this sample is 96.3 µm with a micrometer error of ±0.5%.
Experimental Setup
Plastic films were mounted in wood frames of 13 × 15 cm² each (Figure 5). The bottom of the frames was painted black. To study the influence of the glass present in the RCE solar collection mode, two experimental sets of frames were mounted in parallel, one without glass and the other with glass. Samples were cut in a longitudinal (L) or transversal (T) direction according to the extrusion process, as shown in Figure 5, and samples were extracted after 90 days. An experimental campaign of three months was performed because the authors had observed, in a previous experimental campaign, an average transmittance drop of 0.7% for 500 µm PE exposed during two months in summer. According to this, the more significant transmissivity drop happens after the second month. In addition, Chabira et al. [32] show a dramatic drop after three months in both elongation at break and tensile strength for LDPE samples exposed 8 months to environmental conditions. The frames were located, avoiding shadows, on the roof of the CREA Building at the University of Lleida (Catalonia, Spain), and the experimental campaign was performed from October 2020 to January 2021. Condensed water and accumulated dust were removed manually twice a week. During this period, plastics were exposed at maximum temperatures of 25 °C and minimum temperatures of 0 °C. Thus, a wide ambient temperature spectrum was covered during the experimental campaign.
Experimental Instruments and Sample Preparation
FT-IR spectra of the plastic films were collected using an FT-IR spectrometer (Jasco FT-IR 6300 (Easton, MD, USA) with a diamond/ZnSe crystal) containing a DLATGS detector. The Jasco FT-IR 6300 has a wavenumber accuracy of ±0.01 cm−1, a resolution of 4 cm−1 and a sensitivity of S/N = 50,000:1. Each spectrum was recorded with 32 scans, in the 2500-15,384 nm range (Figure 6).
Three repetitions of each sample in transmission mode were performed in the spectrometer. Brand new samples with no environmental exposure (0 days) and samples after three months of exposure to the environment (90 days) were analyzed. Samples (6 × 6 cm) to be analyzed by FT-IR were cut out from the plastic films and wiped with a dry cloth to remove accumulated dust. No other treatment was carried out.
Traction mechanical tests were performed in a ZwickRoell BZ1-MMZ2.5.ZW01 (Ulm, Germany) with a tolerance range of ±10%, following ISO 527-1, ISO 527-2, and ISO 527-3. Five specimens of 1 × 15 cm for each material were cut in a small press (Figure 6). Measurements were limited to the central part of the sample and each specimen was wiped dry to remove dust. Next, specimens were pinned to the jaw and force was exerted in the longitudinal or the transversal axis until the sample broke. Five repetitions of each specimen were conducted.
Optical Properties and Chemical Structure
Figure 7 shows FT-IR spectra for the five samples studied before being exposed to environmental conditions (0 days). No differences were observed among the three polyethylene samples analyzed, as expected. Polypropylene shows a lower transmittance than the polyethylene samples and behaves differently, with sharp absorption peaks. The FEP sample is the one with the lowest transmittance values, with wide absorption peaks.
Average transmittances in the atmospheric window (7-14 µm) were calculated using the weighted average by integration of the incoming spectrum [40]. Results for 0 days show transmittances around 79% for the three polyethylenes, 75.97% for polypropylene (PP-35), and 37.78% for fluorinated ethylene propylene (FEP-50). These values are in good accordance with the literature [25,26]. Polypropylene also shows high transmittance in the atmospheric window and may be a good candidate for radiative cooling applications (Table 2). However, FEP-50 shows very low transmittance for radiative cooling applications, and it was discarded as a candidate for radiative cooling wind-shields. When analyzing Table 2 from an aging perspective, it is seen that average transmittances decrease over the 90 days for both polyethylene and polypropylene, as expected and reported by [35]. Transmittance drops may vary depending on the sample cleaning process (manual and maybe not identical for all samples) and the origin of the samples (purchased from different manufacturers). To determine whether this observation is statistically significant, two-tailed Student's t-tests were performed (Table 3). When comparing the 0-day samples with the 90-day samples for the three polyethylene films, it is seen that, after three months of environmental exposure, the LDPE and HDPE samples show statistically significant decreases in average transmittance in the atmospheric window. This decrease is observed independently of the thickness and of whether the material was covered with glass or uncovered. PP-35 shows a significant decrease in transmittance only when the sample was not covered. The last column in Table 3 focuses on samples after 90 days and compares the effect of being covered with glass or uncovered. Both LDPE and HDPE with a thickness of 60 µm show statistical differences, demonstrating the protective role of the glass. Although the aging process exists, samples with glass (higher transmittance values) behave better than those without glass. The protective role of the glass was not observed for LDPE-100 and PP-35, since no statistically significant differences were detected.
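As a generic illustration of the two calculations used in this subsection, the spectrum-weighted average transmittance over the 7-14 µm window and the two-tailed t-test between 0-day and 90-day values, a minimal sketch is given below. The 300 K blackbody weighting and the synthetic transmittance values are assumptions for the example only; they are not the authors' measured spectra nor the exact procedure of [40].

```python
import numpy as np
from scipy.stats import ttest_ind

def planck_radiance(wavelength_m, T=300.0):
    """Blackbody spectral radiance (Planck's law) at temperature T."""
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wavelength_m**5) / (np.exp(h * c / (wavelength_m * kB * T)) - 1.0)

def window_averaged_transmittance(wavelength_um, transmittance, T=300.0):
    """Average transmittance over the 7-14 um atmospheric window,
    weighted by a blackbody spectrum at temperature T."""
    mask = (wavelength_um >= 7.0) & (wavelength_um <= 14.0)
    lam = wavelength_um[mask] * 1e-6          # convert to metres
    weight = planck_radiance(lam, T)
    return np.trapz(transmittance[mask] * weight, lam) / np.trapz(weight, lam)

# Hypothetical flat spectrum as a placeholder for a measured FT-IR curve
lam_um = np.linspace(2.5, 15.4, 500)
tau = np.full_like(lam_um, 0.79)
print(f"window-averaged transmittance: {window_averaged_transmittance(lam_um, tau):.3f}")

# Two-tailed t-test between hypothetical 0-day and 90-day replicate values
tau_0d = [0.792, 0.788, 0.790]                # placeholder replicates
tau_90d = [0.742, 0.747, 0.739]
print(ttest_ind(tau_0d, tau_90d))
```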
FT-IR spectra for each material considering 0 days, 90 days with glass, and 90 days without glass are presented (Figures 8-11). For the LDPE samples, no absorption peaks corresponding to carbonyl groups, which would be produced in a degradation process, were observed; thus, no carbonyl groups were detected after 90 days of exposure to the environment. Figure 10 represents the spectra for HDPE-60. The same C-O bond absorption frequency area as the one found for LDPE-60 was observed. In addition, narrow peaks at 1714 cm−1 were detected for the 90-day samples, meaning that an oxidation process existed. Finally, absorption peaks corresponding to double bonds (900 cm−1) are shown in both 90-day samples.
Finally, when analyzing Figure 11 for PP-35, it is seen that the curves in the atmospheric window for 0 days and 90 days with glass are very similar, and this result matches the one presented in Table 3, where no significant differences were observed between these two samples. The curves for 0 days and 90 days without glass present slight differences. These differences are statistically significant according to Table 3, meaning that PP-35 suffers some degradation when it is not covered with glass. This is also corroborated by the absorption peak at 1714 cm−1 for 90 days without glass. Unlike what was seen with the other materials, no absorption peaks in the double bond (C=C) absorption band were observed for PP-35.
Mechanical Properties
Young's modulus (Et) and maximum tensile strength (σM) for the four plastic films before exposure and after 90 days, covered with glass or without glass, are presented in Table 5. There was no common pattern when analyzing the effect of the glass in the aging process after 90 days. When looking at the Young's modulus, it is seen that while LDPE-100 and HDPE-60 show higher values for the samples with glass, the opposite behaviour was observed for LDPE-60 and PP-35. The same trend was detected for the maximum tensile strength. However, it is worth noting the different behaviour of PP-35, which shows larger differences between the with-glass and without-glass samples than polyethylene, for both the Young's modulus (120.99%) and the maximum tensile strength (171.00%).
Young's modulus and maximum tensile strength before exposure (0 days), tested in the longitudinal and the transversal axis, are compared in Table 6. Comparisons between samples without environmental exposure (0 days) and after 90 days of exposure are shown in Table 7. Averages for the 0-day longitudinal and transversal samples are calculated. Averages for the 90-day samples are also evaluated. No common pattern was observed when comparing the Young's modulus after 90 days of exposure for each of the four plastic films studied. The average Young's modulus decreases by 51.91% for LDPE-100 while it increases by 15.69% for LDPE-60. The average Young's modulus remains almost constant (decreases 4.11%) for HDPE-60 and increases only 7.99% for PP-35. It is well known that the larger the Young's modulus, the stiffer the sample. According to Meseguer et al. [41], there is a relationship between sample porosity and Young's modulus. When the Young's modulus increases there is a densification effect of the sample and therefore the porosity is lower. Applying this, there is a large increase of sample porosity for LDPE-100.
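The comparison described above amounts to averaging the 0-day longitudinal and transversal values and expressing the 90-day value as a relative change with respect to that baseline. The sketch below shows this bookkeeping with placeholder numbers; the values are not the measured data from Tables 6 and 7.

```python
def relative_change(value_0d_long, value_0d_trans, value_90d):
    """Percent change of a property after 90 days, relative to the average
    of the 0-day longitudinal and transversal values."""
    baseline = 0.5 * (value_0d_long + value_0d_trans)
    return 100.0 * (value_90d - baseline) / baseline

# Hypothetical Young's modulus values in MPa (placeholders only)
print(f"{relative_change(value_0d_long=250.0, value_0d_trans=230.0, value_90d=115.0):.1f} %")
```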
When analyzing the variation of the maximum tensile strength after 90 days, we observed a decrement in the three polyethylene samples: 57.68% for LDPE-100, 11.01% for LDPE-60, and 32.18% for HDPE-60. It is worth noting the different behaviour of PP-35, for which an increase of 16.56% was observed. A decrement in the tensile strength demonstrates the presence of scission reactions. According to Chabira et al. [38], chain scissions also lead to a drop of the elongation at break (εB) and adversely affect the tensile strength. This behaviour is shown in Table 8, where the elongation at break decreases for the three polyethylene samples and increases for PP-35.
Conclusions
This article presents a study of plastic films with thicknesses between 35-100 µm to determine their suitability to be used as wind-shields for radiative cooling applications. Plastic film samples, covered with glass or uncovered, exposed to environmental conditions for 90 days have been studied. FT-IR spectra and traction mechanical tests have been used to study the degradation of the materials.
A decrease in the average transmittance in the atmospheric window of 3.5% for LDPE-100, 6.5% for LDPE-60, and 9% for HDPE-60 was calculated. PP-35 shows the lowest decrease in transmittance, with a value of 3%.
Polypropylene 35 µm (PP-35) does not show a significant aging process when covered with glass. When the plastic is exposed without glass, the decrease in the average transmittance is only 3%. In addition, FT-IR spectra only show a carbonyl absorption peak for the 90-day samples without glass, confirming a good aging behaviour.
Polypropylene (PP-35) is stiffer than polyethylene (higher Young's modulus and maximum tensile strength). After 90 days, PP-35 presents a low increase of the Young's modulus (of 7.99%), meaning that there is only a slight stiffening of the sample. This result matches the absence of double bonds and C-O groups in the FT-IR spectra, indicating the good aging behaviour of PP-35. Finally, an increase of 16.56% in the maximum tensile strength after 90 days was observed, indicating that no scission reactions occurred. This also explains why the elongation at break of PP-35 increases after 90 days.
To sum up, polyethylene has been confirmed as a good candidate to be used as a wind-shield for radiative cooling. Polypropylene has been presented as an alternative because it is also transparent to long-wave radiation but presents better hardness, tensile strength, and elasticity than polyethylene.
Synthesis of nano-crystalline forsterite based on amorphous silica powder from natural sand by mechanical activation method
The synthesis of nano-forsterite powders has been achieved using an amorphous SiO2 powder base material, obtained from the purification of natural silica sand from Tanah Laut, and MgO (Merck), with a combination of mechanical activation duration and calcination temperature. The silica and magnesia powders were mixed and mechanically activated using a ball mill for 1, 2, and 3 hours. The mixture was then calcined at temperatures of 950, 1050, and 1150 °C for 4 hours to form a forsterite powder. Phase characterization was performed using X-ray diffraction (XRD), while the crystal size was determined using transmission electron microscopy (TEM). The analysis of the diffraction data was done using the Rietica software. Overall, the phases formed after calcination are forsterite, periclase, cristobalite, and proto-enstatite. The highest percentage of forsterite weight was obtained in the sample with a 3-hour mechanical activation treatment and a calcination temperature of 950 °C, i.e. 87.9 wt%. At all temperatures, the forsterite content increases with increasing time of mechanical activation. An important finding in this study, when compared with earlier literature, is that high concentrations of forsterite can be formed at lower calcination temperatures. Observations with TEM show that the size of the forsterite crystals is reduced along with the increase in the time of mechanical activation. The size of the forsterite crystals in the samples calcined at 950 °C after mechanical activation for 3 hours was about 81 nm, whereas in the samples calcined at 1050 °C without mechanical activation it was about 94 nm.
In advanced technology, forsterite is used as a dielectric for millimeter waves [7] and as an insulator material for Solid Oxide Fuel Cells (SOFC), because it has a good thermal expansion coefficient and high stability [8]. In the medical field, forsterite is used for tissue engineering applications [9,10], radiotherapy [11], and as bone implants [12]. Therefore, efforts have been made to produce forsterite with high purity, both regarding the base materials and the methods of manufacture; various synthesis routes have been reported [13], including coprecipitation [14], solid-state reaction [12] and mechanical activation [1,15]. Although Fathi and Kharaziha [1] performed forsterite synthesis with mechanical activation, the base ingredients used were magnesium carbonate (MgCO3) and amorphous SiO2. Tavangarian and Emadi [3] also performed forsterite synthesis with mechanical activation, using talc (Mg3Si4O10(OH)2) and magnesium carbonate (MgCO3) as base materials. Their results are claimed to yield high-purity forsterite, but they require milling times of more than 10 hours and high calcination temperatures (>1200 °C), and the purity percentage is not specified. It is, therefore, necessary to make a breakthrough by enriching the raw materials, to exploit the potential around us, given that forsterite can be synthesized from precursors containing silica and magnesium oxide. An amorphous silica base material (ATL), purified from silica sand as the source of silica, was successfully used to synthesize nano-forsterite powder by the mechanical activation method [16], but periclase was still encountered (~11 wt%). Therefore, it is necessary to reduce the MgO content of the starting materials to remove the appearance of periclase as a secondary phase. This paper reports the influence of the milling time (1, 2 and 3 h) at various low calcination temperatures (950, 1050 and 1150 °C) on the purity of forsterite.
Methods
ATL and MgO powders were weighed using a digital balance with a composition of 49.2 wt% SiO2 and 50.8 wt% MgO, a composition approximating the 1:2 stoichiometric molar ratio between SiO2 and MgO; 3 wt% PVA (relative to the total weight of SiO2 and MgO) was then added. Here, PVA acts as an additive. The mixing process was carried out with or without mechanical activation. The process without mechanical activation was done with a mortar. Mechanical activation was carried out for 1, 2, and 3 hours. Each sample was then calcined at 950, 1050, or 1150 °C with a holding time of 4 hours. There are 12 samples, labelled FTL 950-1150 0-3, where the three- or four-digit number indicates the calcination temperature and the final single digit indicates the milling time. All the calcined samples were then tested by X-ray diffraction (XRD) for phase composition analysis, whereas for grain size analysis only 2 samples were selected for the TEM test.
Result and Discussion

Figure 1 presents the diffraction patterns of all synthesized samples, showing that forsterite has formed at all calcination temperatures, with or without mechanical activation. Sanosh et al. [13] state that forsterite begins to form at a temperature of 800 °C. Despite the ideal stoichiometric 1:2 molarity ratio, the forsterite phase formed is always accompanied by secondary phases. The presence of secondary phases is caused by the fact that forsterite can also be formed through the reaction of MgO on the surface of SiO2 to form enstatite, so it is natural that the enstatite phase (MgSiO3) or its polymorphs (proto-enstatite and clinoenstatite) occur. After that, forsterite formation can continue when excess MgO diffuses through the surface of the enstatite [17]. So the mechanism of the forsterite formation reaction in this study can be formulated in the following equations:
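The reaction equations referred to as equations 1-3 are not reproduced in this text; based on the surrounding description (direct forsterite formation, enstatite formation at the SiO2 surface, and subsequent reaction of MgO with the enstatite), they are presumably the standard reactions sketched below. This is a reconstruction from context, not a verbatim copy of the authors' equations.

```latex
% Presumed reaction scheme (reconstructed from the surrounding text)
\begin{align}
2\,\mathrm{MgO} + \mathrm{SiO_2} &\rightarrow \mathrm{Mg_2SiO_4} \tag{1}\\
\mathrm{MgO} + \mathrm{SiO_2}    &\rightarrow \mathrm{MgSiO_3}   \tag{2}\\
\mathrm{MgO} + \mathrm{MgSiO_3}  &\rightarrow \mathrm{Mg_2SiO_4} \tag{3}
\end{align}
```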
XRD data for the FTL 950 3 sample showed fairly wide forsterite peaks, indicating the formation of nano-forsterite. To confirm this formation, observations were made with TEM. Based on the TEM images in Fig. 2, it can be estimated that the crystal sizes in the FTL 950 3 and FTL 1050 0 samples are approximately 81 nm and 94 nm, respectively. The larger crystal size shown in Fig. 2(B) is due to the higher calcination temperature, so that grain growth increases. On the other hand, in Fig. 2(A), mechanical activation was carried out for 3 hours and calcination was performed at a lower temperature, so the crystal size is smaller. Thus the purpose of the mechanical activation has been achieved, namely to increase the reactivity so that the desired phase can be formed at a lower calcination temperature. The formation of the phase is due to the increased diffusion rate and the homogeneity of the particles. The resulting phase compositions are listed in Table 1. Cristobalite is a SiO2 polymorph, and proto-enstatite is a MgSiO3 polymorph. The presence of SiO2 and MgO indicates that the reaction between the two compounds is not yet complete overall. Two factors are suspected to be the cause of the incomplete reaction of the two compounds: (1) the lack of homogeneity of the particle distribution of the two compounds and (2) the unavailability of energy for the reaction as in equation 1, as a result of the too low calcination temperature used. Meanwhile, SiO2 and MgO can react to form MgSiO3 according to equation 2, with several structures, one of which is proto-enstatite. This phase is formed at calcination temperatures above 1000 °C (Fig. 1B and 1C), as uncovered by Foster [18].
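The crystal sizes quoted above come from the TEM images; as a complementary, purely illustrative cross-check, the broadening of the forsterite XRD peaks mentioned at the start of this paragraph can also be converted into a size estimate with the Scherrer equation. The sketch below uses placeholder peak parameters, not values read from Figure 1.

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size from the Scherrer equation.

    fwhm_deg      : full width at half maximum of the peak (degrees 2-theta)
    two_theta_deg : peak position (degrees 2-theta)
    wavelength_nm : X-ray wavelength (Cu K-alpha assumed by default)
    K             : shape factor (~0.9 for roughly equiaxed crystallites)
    """
    beta = math.radians(fwhm_deg)               # FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# Hypothetical forsterite peak near 36.5 deg 2-theta with 0.11 deg FWHM
print(f"estimated crystallite size: {scherrer_size_nm(0.11, 36.5):.0f} nm")
```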
Based on the results obtained from the refinement using Rietica, for each temperature increase at the same mechanical activation time, the forsterite content tends to rise, except for the samples with a 3-hour mechanical activation time. The increase in the weight percentage of forsterite is due to the increasing availability of energy for the reaction as the calcination temperature increases. A decrease in weight percentage occurs for 1-hour mechanical activation compared with no mechanical activation, at all calcination temperatures. MgO and SiO2 have not fully reacted due to the inhomogeneity of the particle distribution. Therefore, it can be said that the homogeneity of the particles in the sample without mechanical activation is better than in the sample with 1-hour mechanical activation. At each temperature, the forsterite content always increases with the addition of mechanical activation time. The increase in the weight percentage of forsterite is due to changes in grain size, so the homogeneity becomes better and the powders react more easily. The decrease in proto-enstatite and periclase content and the increase in forsterite with increasing mechanical activation time indicate a reaction between periclase and proto-enstatite according to equation 2 and equation 3. MgO reacts with SiO2 to form enstatite, then MgO diffuses into the enstatite to form forsterite, as disclosed by Brindley and Hayami [17].
The highest forsterite content among the mechanically activated samples was obtained for the 3-hour mechanical activation treatment with a calcination temperature of 950 °C, reaching 87.9 wt%. The high weight percentage in this treatment was due to improved homogeneity and the absence of the proto-enstatite phase, whereas the mechanically activated samples calcined at 1050 and 1150 °C formed 16.8 and 17.7 wt% of proto-enstatite, respectively. The highest forsterite content of the entire sample set was obtained in the treatment without mechanical activation with a calcination temperature of 1150 °C, reaching 94.8 wt%. This result is more effective than that obtained by [1], which was optimal at 10 hours of mechanical activation with a calcination temperature of 1200 °C, and especially when compared with the results obtained by [3], which produced forsterite with high purity only after 100 hours of mechanical activation with a calcination temperature of 1200 °C. The authors of [7] improved the results of forsterite synthesis by mechanical activation using the same base ingredients; that study optimally produced forsterite with a 60-hour mechanical activation treatment and a calcination temperature of 1000 °C. With the marked improvement in milling time and calcination temperature, the present research is considered to be more effective and efficient than previous work regarding forsterite production.
Conclusion
Nano-forsterite with a fairly high weight percentage (above 80 wt%) was successfully synthesized in this study, even though calcination was performed at relatively low temperatures (below 1000 °C). The mechanical activation time, together with the calcination temperature used in the fabrication process, greatly influences the purity of the forsterite formed. Increasing the mechanical activation time not only increases the percentage of forsterite, it also allows the calcination temperature to be decreased and decreases the size of the crystals. The highest percentage of forsterite weight among the mechanically activated samples was formed at 3 hours of mechanical activation with a calcination temperature of 950 °C, namely 87.9 wt%. The crystal sizes in the sample with 3 hours of mechanical activation and a calcination temperature of 950 °C and in the sample without mechanical activation with a calcination temperature of 1050 °C are approximately 81 nm and 94 nm, respectively.
Crystal structure of raptor adenovirus 1 fibre head and role of the beta-hairpin in siadenovirus fibre head domains
Background Most adenoviruses recognize their host cells via an interaction of their fibre head domains with a primary receptor. The structural framework of adenovirus fibre heads is conserved between the different adenovirus genera for which crystal structures have been determined (Mastadenovirus, Aviadenovirus, Atadenovirus and Siadenovirus), but genus-specific differences have also been observed. The only known siadenovirus fibre head structure, that of turkey adenovirus 3 (TAdV-3), revealed a twisted beta-sandwich resembling the reovirus fibre head architecture more than that of other adenovirus fibre heads, plus a unique beta-hairpin embracing a neighbouring monomer. The TAdV-3 fibre head was shown to bind sialyllactose. Methods Raptor adenovirus 1 (RAdV-1) fibre head was expressed, crystallized and its structure was solved and refined at 1.5 Å resolution. The structure could be solved by molecular replacement using the TAdV-3 fibre head structure as a search model, despite them sharing a sequence identity of only 19 %. Versions of both the RAdV-1 and TAdV-3 fibre heads with their beta-hairpin arm deleted were prepared and their stabilities were compared with the non-mutated proteins by a thermal unfolding assay. Results The structure of the RAdV-1 fibre head contains the same twisted ABCJ-GHID beta-sandwich and beta-hairpin arm as the TAdV-3 fibre head. However, while the predicted electro-potential surface charge of the TAdV-3 fibre head is mainly positive, the RAdV-1 fibre head shows positively and negatively charged patches and does not appear to bind sialyllactose. Deletion of the beta-hairpin arm does not affect the structure of the raptor adenovirus 1 fibre head and only affects the stability of the RAdV-1 and TAdV-3 fibre heads slightly. Conclusions The high-resolution structure of RAdV-1 fibre head is the second known structure of a siadenovirus fibre head domain. The structure shows that the siadenovirus fibre head structure is conserved, but differences in the predicted surface charge suggest that RAdV-1 uses a different natural receptor for cell attachment than TAdV-3. Deletion of the beta-hairpin arm shows little impact on the structure and stability of the siadenovirus fibre heads.
Adenoviruses are non-enveloped viruses with a linear, double-stranded DNA genome, packed into an icosahedral capsid with a vertex-to-vertex diameter of about 100 nm [14]. Fibres are non-covalently attached to each pentameric penton base at the vertices [15,16]. Typically, one fibre protrudes from each pentameric vertex. However, either two distinct or two or three identical fibres protruding from each vertex have also been reported [17,18]. Adenovirus fibres are trimeric proteins, which consist of three different domains: an N-terminal penton base-binding domain, a thin central shaft domain and a C-terminal globular fibre head domain [19,20]. Adenovirus infections are usually initiated by the interaction of their fibre heads with primary receptors, followed by internalization mediated by penton bases interacting with integrins [21].
Adenoviruses from the Siadenovirus genus are characterized by a short genome of around 26 kb [12,13]. Apart from generally conserved adenovirus proteins, five open reading frames potentially encode novel proteins unique to the Siadenovirus genus. For instance, at the left end of the genome, a gene encoding a putative sialidase (neuraminidase) was found, which has given the genus its name (Siadenovirus). Current members of the Siadenovirus genus with published full genome sequences are frog adenovirus 1 (FrAdV-1), turkey adenovirus 3 (TAdV-3), raptor adenovirus 1 (RAdV-1) and South Polar skua adenovirus 1 (SPSAdV-1). TAdV-3 is associated with specific diseases in different hosts: haemorrhagic enteritis in turkey, marble spleen disease in pheasants and splenomegaly in chickens [22,23]. RAdV-1 was discovered and its genome was sequenced by PCR-based methods, without virus isolation [24,25]. RAdV-1 was identified as the causative agent of an outbreak of adenoviral disease in the United Kingdom in 2004. RAdV-1 was identified in different birds of prey, including a Harris's hawk (Parabuteo unicinctus), a Bengal eagle owl (Bubo bengalensis) and a Verreaux's eagle owl (Bubo lacteus) [26]. A single fibre gene has been found in the RAdV-1 genome. It encodes a 464 amino acid protein, of which the N-terminal penton base-binding sequence is predicted to comprise amino acids 1-75 and the shaft domain residues 76-319, including fifteen triple beta-spiral repeats [20]. The C-terminal fibre head domain is expected to be composed of amino acids 324-464.
Modification of the adenovirus fibre head domain has been performed with the aim of retargeting adenovirus-based vectors to specific cell types [27]. Although many human adenovirus fibre heads are characterized [14], only a few animal adenovirus fibre head structures are known [28][29][30][31][32][33]. Animal adenovirus fibres may provide novel receptor binding and targeting functions, while humans may also have less pre-existing immunity to them. Recently, the structure of the first fibre head of an adenovirus from the genus Siadenovirus, that of TAdV-3, was reported [34]. The structure revealed the insertion of a beta-hairpin embracing a neighbouring monomer when compared to known adenovirus fibre head structures. The TAdV-3 fibre head structure was found to bind sialyllactose, which may function as a (co-)receptor. Here we present the structure of the raptor adenovirus 1 fibre head, which does not appear to bind sialyllactose, and show that deletion of the beta-hairpin insertion does not significantly affect the stability of siadenovirus fibre head domains.
Results and discussion
Expression, purification, crystallization and structure solution of the raptor adenovirus 1 fibre head

Sequence analysis of the putative fibre protein suggested that the fibre head domain likely comprises residues 324 to 464. An expression vector was constructed containing residues 324-464 and the resulting protein was expressed with an N-terminal purification tag (MGSS HHHHHH SSGLV PRGSH MASMT GGQQM GRGSG). Codon analysis of the RAdV-1 fibre gene suggested the presence of a significant amount of rare codons (17 %) and a relatively low GC-content for the fibre head-coding region (just over 30 %). Therefore, the protein was expressed in the Escherichia coli Rosetta2(DE3)pLysS strain. The histidine-tagged RAdV-1 fibre head protein was purified by metal affinity chromatography and anion exchange chromatography as described in the Methods section. About 16 mg of purified protein could be obtained per litre of expression culture. The protein was concentrated and stored in a buffer containing L-arginine and glycerol as stabilizing agents.
Vapour diffusion crystallization trials were performed and well-diffracting crystals were obtained after 3-4 days when a solution containing 1.5 M sodium chloride and 10 % (v/v) ethanol was used as a reservoir. A high-quality dataset was collected from one of these crystals and indexed in space group P2₁3, with one protein monomer per asymmetric unit. The structure could be solved by molecular replacement, using a monomer of the TAdV-3 fibre head structure [34] as a search model, despite their low sequence identity (19 %; as a rule of thumb, 25-30 % sequence identity is usually considered necessary for successful structure solution by molecular replacement). The final refined model contains residues 327-462, plus ordered solvent (water and chloride molecules). No reliable density was observed for the N-terminal purification tag, for residues 324-327 or for the C-terminal threonine and alanine residues (amino acids 463-464), suggesting that these are disordered. Data collection, phasing and refinement statistics are shown in Table 1.
Structure description
When looked at from the side, each RAdV-1 fibre head monomer has an elongated shape and is about 5 nm high and 2.5 nm at its widest point, with an obtuse triangular longitudinal cross-section (Fig. 1). When viewed from the top, the cross-section is oval, about 2.5 nm long and 1.5 nm wide. A beta-hairpin, made up of residues 359-373, sticks out of each monomer. Together, three crystallographically related monomers form a compact globular trimer. Like the other siadenovirus fibre head structure, that of TAdV-3 [34], the monomer has an ABCJ-GHID beta-sheet topology with kinked C- and J-strands resembling reovirus fibre head structures [35,36]. The protruding C'C" beta-hairpins, embracing neighbouring monomers, appear to be unique to siadenovirus fibre heads. Despite the low sequence identity (only ~19 %), the structures are very similar and can be superposed with a root mean square difference (r.m.s.d.) of 1.8 Å (131 superposed C-alpha atoms, Z-score of 18.6 [37]).
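The r.m.s.d. quoted above is computed over superposed C-alpha atoms. A minimal sketch of such a superposition, using the Kabsch algorithm, is shown below; the coordinate arrays are random placeholders rather than the actual RAdV-1 and TAdV-3 coordinates.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate arrays after optimal
    rigid-body superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)                       # centre both sets
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                                  # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # correct for reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # optimal rotation
    diff = P @ R.T - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

# Hypothetical example with 131 placeholder C-alpha positions
rng = np.random.default_rng(0)
ca_model_1 = rng.normal(size=(131, 3))
ca_model_2 = ca_model_1 + rng.normal(scale=0.5, size=(131, 3))
print(f"r.m.s.d. = {kabsch_rmsd(ca_model_1, ca_model_2):.2f}")
```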
A structural feature present in both siadenovirus fibre head structures is the short alpha-helix in the CD-loop. Apart from being structurally conserved, it is also one of the few regions where the sequence is conserved between the RAdV-1 and TAdV-3 fibre heads, together with the AB-loop and the end of the D-strand and beginning of the DG-loop. These regions are close together in space and may well be important for folding or function of the fibre head. This sequence similarity extends to another avian siadenovirus fibre head sequence known, that of South Polar skua adenovirus fibre head [38], but not to the frog adenovirus 1 fibre head, the fourth siadenovirus fibre head sequence known [39].
Apart from the similarities between the RAdV-1 and TAdV-3 fibre head structures, some differences are also observed. The calculated surface charge distribution is very distinct between the two fibre heads (Fig. 2). Mainly positively charged patches are observed for the TAdV-3 fibre head, while both negatively and positively charged patches are present in the RAdV-1 fibre head. The electrostatic potential surface charges show that strong negatively charged patches are present at the top surface of the RAdV-1 fibre head, while negatively and positively charged patches are found on the sides. The mainly positively charged surface of the TAdV-3 fibre head led us to search for carbohydrate ligands, and 2,3- and 2,6-sialyllactose were identified as potential ligands in glycan micro-array experiments and confirmed by NMR spectroscopy, isothermal calorimetry, co-crystallization and site-directed mutagenesis [34]. In the TAdV-3 structure complexed with 2,3-sialyllactose (PDB entry 4D62), the ligand is found wedged between a negatively charged patch (Glu392 is an interaction partner) and a positively charged patch (with Lys421 as a confirmed interaction residue). In the RAdV-1 structure, these charges are inverted and none of the interaction partners is conserved (underlined residues in Fig. 1e). Indeed, we could not identify carbohydrate ligands of the RAdV-1 fibre head by glycan micro-array experiments, and co-crystallization with 2,3- and 2,6-sialyllactose was unsuccessful. This supports the idea that the binding of adenovirus fibre heads to sialylated carbohydrates is charge-dependent, as was proposed previously [40]. Our observations imply that the cell receptors of these two adenoviruses may be different. Alternatively, perhaps both RAdV-1 and TAdV-3 bind the same, as yet unidentified, protein receptor, and only TAdV-3 uses sialyllactose as a co-receptor.
Stability of siadenovirus fibre heads
In the trimeric form, the interaction interface of the RAdV-1 fibre head contains many hydrophobic interactions, with 23 amino acids from each monomer involved, accounting for 17 % of all residues. Four of them are residues from the beta-hairpin arm (composed of residues 359-372) interacting with a neighbouring monomer. The basic framework of the trimer is further secured by eighteen intermonomer hydrogen bonds (five of these involve the beta-hairpin arm) and intermolecular salt bridges between Arg419 of one monomer and the Asp388/Glu389 pair of a neighbouring monomer. This compares to the TAdV-3 fibre head trimer, with twenty amino acids (15 % of all residues) involved in hydrophobic interactions (three from the beta-hairpin arm), sixteen intermonomer hydrogen bonds (six involving the beta-hairpin arm) and an intermolecular salt bridge between Arg318 and Asp340. The RAdV-1 fibre head trimer has a solvent accessible surface area of 16.5 × 10³ Å², with 8.1 × 10³ Å² of buried surface; this means that 33 % of the total surface gets buried upon trimer formation. The calculated dissociation energy (ΔG_diss) is 70 kcal/mol, which indicates that the fibre head is very stable. This observation is consistent with the thermal denaturation assay results, in which no denaturation was observed up to 94°C (continuous line in Fig. 3a). In comparison, the TAdV-3 fibre head trimer has a solvent accessible surface area of 16.8 × 10³ Å², with 7.4 × 10³ Å² of buried surface, so that 31 % of the total surface gets buried upon trimer formation; i.e., very similar to the RAdV-1 fibre head. However, the calculated dissociation energy (ΔG_diss) of the TAdV-3 fibre head is 35 kcal/mol, only half that of the RAdV-1 fibre head. In a thermal denaturation assay (continuous line in Fig. 3b), the unfolding temperature was estimated to be about 86°C [34], so the TAdV-3 fibre head appears to be less stable than the RAdV-1 fibre head, although it is still a very sturdy protein.
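As a quick cross-check on the quoted percentages, the buried fraction follows directly from the PISA-derived areas if the total pre-assembly surface is taken as the trimer's accessible surface plus the buried interface area. The short Python sketch below reproduces this arithmetic for both fibre heads; the function name and this way of framing the calculation are our own, not part of the PISA output.

```python
def buried_fraction(trimer_asa, buried_area):
    """Fraction of the total (pre-assembly) monomer surface buried on trimer formation.

    The total surface of the three isolated monomers is approximated as the
    trimer's solvent-accessible surface plus the area buried at the interfaces,
    so the buried fraction is buried / (accessible + buried).
    """
    return buried_area / (trimer_asa + buried_area)

# Areas quoted in the text, in square Angstroms
print(f"RAdV-1 fibre head: {buried_fraction(16.5e3, 8.1e3):.0%} buried")  # ~33 %
print(f"TAdV-3 fibre head: {buried_fraction(16.8e3, 7.4e3):.0%} buried")  # ~31 %
```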
Deletion of the protruding beta-hairpin
To investigate the impact of the beta-hairpin on the structure and stability of siadenovirus fibre heads, deletion mutagenesis was performed. The beta-hairpin arms of the RAdV-1 and TAdV-3 fibre heads (residues 359-NYGLR VVNGE LQNTP-373 and 349-DNIGV IENPT FYRNK S-364, respectively) were replaced by a short fragment containing two amino acids (EF and VD in the structures of the RAdV-1 and TAdV-3 fibre heads, respectively). About 16 mg and 13 mg of mutated fibre head protein from RAdV-1 and TAdV-3, respectively, could be obtained per litre of bacterial culture. Mutated fibre head proteins were concentrated up to 10 mg/ml in the same buffer as their native counterparts. Well-diffracting crystals of the RAdV-1 fibre head beta-hairpin deletion mutant were obtained when 0.1 M 2-[4-(2-hydroxyethyl)piperazin-1-yl]ethanesulfonic acid-NaOH pH 7.5, 20 % (v/v) Jeffamine M-600 (O-(2-Aminopropyl)-O'-(2-methoxyethyl) polypropylene glycol 500) was used as the reservoir solution. Data to 1.7 Å resolution were collected from one of the crystals and processed. This structure could be solved by difference Fourier synthesis and refined directly, as the crystal form was isomorphous to that of the non-mutated fibre head. Unfortunately, crystals of the TAdV-3 fibre head with the beta-hairpin arm removed could not be obtained.
Structure superposition of the native RAdV-1 fibre head and the beta-hairpin arm deletion mutant showed that the main architecture is highly conserved (Fig. 4), apart from the mutation introduced. The beta-hairpin arm is replaced by a short loop and the conformations of the BC- and HI-loops are slightly affected. In the case of the HI-loop this can be explained by the fact that, in the trimer, it is close to the end of the beta-hairpin arm of a neighbouring monomer. In the trimeric form, the interaction interface of the mutant RAdV-1 fibre head now only contains hydrophobic interactions involving fourteen amino acids from each monomer, accounting for 11 % of all residues. Only seven intermonomer hydrogen bonds are present, although the intermolecular salt bridge is conserved. The mutated RAdV-1 fibre head trimer has a solvent accessible surface area of 16.9 × 10³ Å², with 5.9 × 10³ Å² of buried surface, which means that now only 25 % (instead of 33 %) of the total surface gets buried upon trimer formation. The calculated dissociation energy (ΔG_diss) is 55 kcal/mol instead of 70 kcal/mol.
The thermal stability of the mutated RAdV-1 fibre head protein was assessed by a thermofluor experiment and compared to that of the native version (Fig. 3). Deletion of the beta-hairpin appears to destabilize the structure somewhat, because some unfolding is now observed at high temperature (dotted line in Fig. 3a). The melting temperature of the deletion mutant was estimated to be about 90°C, although unfolding was not complete at the maximum temperature of 94°C reached in the experiment. The somewhat lower unfolding temperature can be explained by the loss of hydrophobic interaction surface and of some of the intermolecular hydrogen bonds made by the beta-hairpin arm, and is consistent with the assembly stability prediction. In the case of the TAdV-3 fibre head, deletion of the beta-hairpin appears to lead to a slight increase in stability, from an unfolding temperature of 86°C to 89°C (Fig. 3b). In this case, perhaps the mutant is more compact than the native protein. This could be caused by a certain flexibility of the beta-hairpin arm in TAdV-3, a flexibility that may be absent in the RAdV-1 fibre head.
Conclusion
The high-resolution structure of the RAdV-1 fibre head is the second known structure of a siadenovirus fibre head domain. It shows that the siadenovirus fibre head fold is conserved, including the alpha-helix in the CD-loop as well as the beta-hairpin insertion in the C-strand. Differences in the predicted surface charge suggest that RAdV-1 uses a different natural receptor for cell attachment than TAdV-3. Deletion of the beta-hairpin arm has little impact on the structure of the RAdV-1 fibre head, although it slightly destabilizes the very sturdy structure. In contrast, deletion of the beta-hairpin arm in the TAdV-3 fibre head stabilizes its structure somewhat. Further information on the infection mechanism of RAdV-1 may come from infection studies, but for that the virus must first be isolated and a suitable cell culture system established.
Cloning, expression and purification
A DNA fragment coding for residues 324-464 of the fibre protein was amplified by PCR from the complete DNA genome (GenBank accession number NC_015455.1) and cloned into the expression vector pET28a(+) (Novagen, Merck, Darmstadt, Germany), previously digested with the restriction enzymes BamHI and XhoI. The resulting expression vector was called pET28-RAdVFib(324-464). Codon and GC-content analysis were performed using the OptimumGene tool from GenScript (Piscataway NJ, U.S.A.). For protein expression, Escherichia coli strain Rosetta2 (DE3)pLysS (Novagen, Merck Millipore, Madrid, Spain) was transformed with pET28-RAdVFib(324-464) and grown aerobically at 37°C in growth medium containing 2 % (w/v) tryptone (pancreatic digest of casein), 1 % (w/v) yeast extract and 20 mM glucose. When the optical density at 600 nm reached 0.5-0.8, the culture was cooled on ice for 30 min and isopropyl-beta-D-1-thiogalactopyranoside was added to a final concentration of 0.5 mM to induce protein expression. The culture was then incubated overnight at 16°C with shaking. Cells from two litres of culture were harvested by centrifugation (10 min at 5,000 × g), resuspended in buffer A (10 mM Tris-HCl pH 7.5, 0.5 M sodium chloride, 10 % (v/v) glycerol) plus 20 mM imidazole and stored at −20°C. After thawing, cells were lysed by two passes through a French press at about 7 MPa. Cell debris was removed by centrifugation for 30 min at 20,000 × g.
For purification, two ml of nickel-nitrilotriacetic acid resin slurry (Jena Bioscience, Jena, Germany) was added to the protein-containing supernatant and incubated with occasional gentle shaking for 30 min on ice. The resin was then transferred to a column and washed with 30 ml of buffer A with 20 mM imidazole. RAdV-1 fibre head was eluted using a step-gradient of imidazole in buffer A (50 mM, 100 mM, 250 mM and 500 mM imidazole; steps of 5 ml). After analysis by denaturing gel electrophoresis, fractions containing 100 mM, 250 mM and 500 mM imidazole were pooled, dialysed against 10 mM bicine-NaOH pH 9.0 and loaded onto a Resource Q6 column (GE-Healthcare Biosciences, Uppsala, Sweden) equilibrated in the same buffer. The protein was eluted with a linear gradient of 0-1 M sodium chloride in 10 mM bicine-NaOH pH 9.0. Fractions containing pure protein were concentrated to 26 mg/ml using an Amicon Ultra concentrator with a molecular weight cut-off of 10 kDa (Millipore, Madrid, Spain). Three washes with protein storage buffer (10 ml of 10 mM bicine-NaOH pH 9.0, 50 mM magnesium chloride, 5 % (v/v) glycerol and 5 mM L-arginine) were applied. The sample was stored at 4°C prior to crystallization trials.
Mutagenesis
The pET28-RAdVFib(324-464) plasmid was used as a DNA template for PCR-based mutagenesis, using the QuikChange procedure (Agilent Technologies, Waldbronn, Germany). For the beta-hairpin deletion mutation, a pair of primers was designed with an EcoRI restriction site inserted at the 5' end (primers 5'-TAT GAA TTC CTT ACA TTT AAA GGG GCA GAT-3' and 5'-ATA TAG AAT TCT CCA AGG ATT GTA ATC TTA-3'). A standard PCR procedure was performed to obtain a linear DNA product, which was digested with the restriction enzyme EcoRI before being self-ligated by T4 DNA ligase and transformed into Escherichia coli Top10. Mutagenesis was confirmed by DNA sequence analysis (Secugen, Madrid, Spain). The mutant protein was produced in the same way as the non-mutated version. The same procedure was followed for the beta-hairpin deletion of the TAdV-3 fibre head (using the primers 5'-CTA TAG TCG ACA TTG AAT TAA GAT CTG CTG ATT TC-3' and 5'-CTA TAG TCG ACT ATA AAC TGT ATG ATT AAC AGA GC-3' with the restriction enzyme SalI), and the protein was produced using previously established methods [41].
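As a small sanity check on the primer design, the sketch below scans the primer sequences quoted above (spaces removed) for the standard EcoRI and SalI recognition sequences, GAATTC and GTCGAC. The primer labels are ours, assigned only for illustration.

```python
# Standard recognition sequences: EcoRI = GAATTC, SalI = GTCGAC
SITES = {"EcoRI": "GAATTC", "SalI": "GTCGAC"}

# Primer sequences as quoted in the text, with spaces removed; labels are ours
PRIMERS = {
    "RAdV-1 deletion primer 1": "TATGAATTCCTTACATTTAAAGGGGCAGAT",
    "RAdV-1 deletion primer 2": "ATATAGAATTCTCCAAGGATTGTAATCTTA",
    "TAdV-3 deletion primer 1": "CTATAGTCGACATTGAATTAAGATCTGCTGATTTC",
    "TAdV-3 deletion primer 2": "CTATAGTCGACTATAAACTGTATGATTAACAGAGC",
}

for name, seq in PRIMERS.items():
    for enzyme, site in SITES.items():
        pos = seq.find(site)
        if pos != -1:
            # Report the 1-based position of the restriction site within the primer
            print(f"{name}: {enzyme} site {site} found at position {pos + 1}")
```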
Thermal unfolding assay
Thermal unfolding assays [42] were carried out in an iCycler iQ PCR Thermal Cycler (Bio-Rad, Hercules CA, USA) in the presence of the fluorescent dye SYPRO Orange (Life Technologies SA, Madrid, Spain). Reaction volumes of 30 μl were prepared in 200 μl Eppendorf tubes, containing 30 μg of protein and 5X SYPRO Orange from the supplied 5000X stock solution. Thermal denaturation curves were obtained by heating the samples from 4 to 94°C with a ramp rate of 1°C/min and monitoring the fluorescence at every 0.5°C increment. The melting temperature T_m is defined as the point where the slope of the fluorescence increase is maximal.

Fig. 4 Superposition of the native and mutant RAdV-1 fibre head domain structures. a The RAdV-1 fibre head monomer in native (with the beta-hairpin arm; green) and mutant form (without the beta-hairpin arm; yellow), viewed from the side. b The RAdV-1 fibre head trimer in native (with the beta-hairpin arm; monomers in green, magenta and cyan) and mutant form (without the beta-hairpin arm; in yellow), viewed from the top. The end of the beta-hairpin of monomer A is marked with an asterisk in both panels
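Given the T_m definition above (the temperature at which the slope of the fluorescence increase is maximal), the read-out can be sketched as the maximum of the first derivative of the melt curve. The Python snippet below illustrates the idea on a synthetic sigmoid; it is a minimal sketch, not the analysis script used for the experiments.

```python
import numpy as np

def melting_temperature(temp_c, fluorescence):
    """Estimate Tm as the temperature at which dF/dT is maximal."""
    dF_dT = np.gradient(fluorescence, temp_c)
    return temp_c[np.argmax(dF_dT)]

# Synthetic unfolding curve: a sigmoid centred on 86 degrees C (illustrative only)
temp = np.arange(4.0, 94.5, 0.5)
curve = 1.0 / (1.0 + np.exp(-(temp - 86.0) / 2.0))

print(f"Estimated Tm: {melting_temperature(temp, curve):.1f} degrees C")
```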
Crystallization, crystallographic data collection and structure solution

The RAdV-1 fibre head proteins were crystallized by the sitting drop vapour diffusion method, using either a robotic setup (Genesis RSP 150 workstation; Tecan, Männedorf, Switzerland) or manual setups. In either case, 50 μl reservoirs were employed, and drops were prepared containing 0.2 μl of protein sample and 0.2 μl of the respective reservoir solution for robotic setups, or 0.6 μl of protein plus 0.6 μl of reservoir solution for manual setups. Crystals were harvested with Litholoops (Molecular Dimensions, Newmarket, England) or Micromounts (Mitegen, Ithaca NY, U.S.A.), transferred to cryo-protection solution (reservoir solution containing 20 % (v/v) glycerol) and flash-cooled in liquid nitrogen.
Crystallographic data were collected at the BL13-XALOC beamline of the ALBA synchrotron [43], integrated using iMosflm [44] and further processed using POINTLESS, AIMLESS and TRUNCATE [45] from the CCP4 suite [46]. Molecular replacement was performed with PHASER [47], using the TAdV-3 fibre head structure (PDB entry 3ZPE) [34] as a search model. The model obtained from PHASER was used as input for automated model building in ARP/wARP [48]. This model was completed using COOT [49] and refined using REFMAC5 [50]. Validation was done with MOLPROBITY [51]. Structure comparisons were performed using the DALI server [37]. Structure figures were made using PYMOL (The PyMOL Molecular Graphics System, Schrödinger, LLC). Protein assembly parameters were calculated using PISA [52]; interaction residues were identified using the PIC server [53].
Dysregulation of locus coeruleus development in congenital central hypoventilation syndrome
Human congenital central hypoventilation syndrome (CCHS), resulting from mutations in the transcription factor PHOX2B, manifests with impaired responses to hypoxemia and hypercapnia, especially during sleep. To identify brainstem structures developmentally affected in CCHS, we analyzed two postmortem neonatal-lethal cases with confirmed polyalanine repeat expansion (PARM) or non-PARM (PHOX2B∆8) mutation of PHOX2B. Both human cases showed neuronal losses within the locus coeruleus (LC), which is important for central noradrenergic signaling. Using a conditionally active transgenic mouse model of the PHOX2B∆8 mutation, we found that early embryonic expression (<E10.5) caused failure of LC neuronal specification and perinatal respiratory lethality. In contrast, later onset (E11.5) of PHOX2B∆8 expression was not deleterious to LC development and perinatal respiratory lethality was rescued, despite failure of chemosensor retrotrapezoid nucleus formation. Our findings indicate that early-onset mutant PHOX2B expression inhibits LC neuronal development in CCHS. They further suggest that such mutations result in dysregulation of central noradrenergic signaling and, therefore, point to the potential for early pharmacologic intervention in humans with CCHS. Electronic supplementary material: The online version of this article (doi:10.1007/s00401-015-1441-0) contains supplementary material, which is available to authorized users.
Introduction
Congenital central hypoventilation syndrome (CCHS) is a classic disorder of autonomic respiratory control characterized by alveolar hypoventilation and monotonous respiratory rates despite abnormal pCO2 and pH concentrations, especially during non-rapid eye movement (NREM) sleep [57]. Patients with CCHS lack behavioral responsiveness to hypoxemia and hypercarbia, showing no shortness of breath or respiratory distress [8], and will not automatically adjust spontaneous ventilation or awaken from sleep despite progressive physiologic compromise [57]. A subset of CCHS patients has Hirschsprung disease (HSCR; absence of ganglion cells from variable lengths of distal bowel) and/or solid extracranial tumors of neural crest origin [6,53], and additional symptoms of autonomic nervous system dysregulation have been reported [18,20,42,48].
Autonomic respiratory networks are stimulated when specialized neuronal sensors (chemosensors) detect low levels of O2 and/or high levels of CO2 in the blood. These chemosensors include the carotid bodies (CB), located in the peripheral nervous system (PNS) near the bifurcation of the carotid artery, and several central nervous system (CNS) nuclei [22]. Specialized neurons and astrocytic populations in the brain stem contribute to central CO2 chemosensation [9,19]. Classical pharmacological studies show that catecholaminergic neuron depletion in the brain stem results in a decreased ventilatory response to elevated CO2 levels [31]. The rodent retrotrapezoid nucleus (RTN), located ventral to the facial nerve nucleus, drives respiration in response to decreased pH resulting from elevated blood CO2 concentrations [34]. Interestingly, such hypercapnic ventilatory responses were diminished in adult rodents after chemical ablation of the locus coeruleus (LC) [7], a major central noradrenergic structure located in the dorsal brainstem with extensive connections to other local nuclei as well as the forebrain.
In rodents, RTN development requires Phox2b function [12], and mouse models of CCHS expressing either the 27-polyalanine repeat PARM or NPARM mutations of Phox2b prevent RTN formation [13,35]. Although Phox2b is a well-known regulator of motor and noradrenergic neuronal specification [39,40], the precise basis of impaired respiratory control in CCHS remains incompletely understood. Indeed, abnormal development of, or injury to, central noradrenergic structures is suggested by prior work. The LC is the major source of central noradrenergic signaling [5]; it is thus a major regulator of arousal state and is also thought to function in cognition [49]. Several lines of evidence support the role of the LC as a central chemosensor [11,17], but owing to the lack of neuropathological information from CCHS patients with confirmed PHOX2B mutations, it remains unclear whether CCHS-associated PHOX2B mutations primarily affect LC development.
To achieve further insight into the pathobiology of CCHS, we analyzed two postmortem cases of neonatal-lethal CCHS with confirmed PHOX2B mutations. Proband 1 was a full-term neonate with a heterozygous NPARM deletion/frameshift mutation (PHOX2B∆8), resulting in severe hypoventilation and total intestinal aganglionosis. Proband 2 was born preterm and had the most common heterozygous CCHS PHOX2B 20/27 mutation (PARM), with a less severe phenotype. Interestingly, both cases showed loss or severe diminishment of noradrenergic LC neurons. We modeled the NPARM case in vivo by generating a cognate conditional transgenic mouse line. Early embryonic conditional activation of Phox2b∆8 in mouse brainstem (<E10.5) caused loss of a functional LC and abnormalities in all central noradrenergic (NA) neuron groups (e.g., A1/C2, forebrain projections to hypothalamus) tested. In contrast, later-onset (E11.5) activation of Phox2b∆8 expression spared NA neuron development and was not perinatally respiratory-lethal, despite loss of the RTN. Our findings demonstrate that LC development is compromised, and suggest abnormal central noradrenergic signaling, as a component of human CCHS.
Human neuropathological studies
Postmortem human samples (proband 1) were obtained using University of California, San Francisco (UCSF) guidelines with oversight of the Committee for Human Research and the Gamete, Embryo and Stem Cell Research (GESCR) committee. Tissue from the PHOX2B 20/27 case (proband 2) was obtained from Rainbow Babies and Children's Hospital (Cleveland, OH, USA) under an IRB-approved protocol at The Ohio State University. The entire formalin-fixed brainstems were serially sectioned for microscopic evaluation and compared to samples obtained from four roughly age-matched (control) patients who died of other diseases. For case histories of controls and for histological and immunohistochemical (IHC) procedures, see Suppl. Data.
Phox2b∆8 mouse model generation and animal husbandry
We generated a transgenic mouse line carrying a cre-loxP-inducible allele of human PHOX2B∆8 exon 3 by BAC recombineering and homologous recombination in embryonic stem cells (ES cells; see Suppl. Data). For early-onset (germline) activation of the Phox2b∆8 allele, Hprt-cre mice (JAX 004302, on a C57/Bl6 background) were crossed to Phox2b∆8 heterozygotes. For late-onset CNS activation of the Phox2b∆8 allele, we used Blbp-cre [23]. Mutant mice of each genotype were compared to cre-negative littermate controls. Animal procedures were approved by the Institutional Animal Care and Use Committees at Washington University (St. Louis, MO, USA), University of Connecticut Health Center (Farmington, CT, USA) and UCSF (San Francisco, CA, USA).
Mouse tissue processing and histology
Embryos were collected from time-pregnant females (E0.5 at time of plug recognition) under deep anesthesia; those older than E16.5 were perfused with PBS followed by 4 % PFA under hypothermia anesthesia. For IHC, tissues were post-fixed in 4 % PFA and cryoprotected, frozen in OCT and sectioned at 14 μm. Cryosections were subjected to antigen retrieval in citrate buffer, pH 6.0, for 10 min at 90 °C as necessary, blocked with 5 % donkey serum in PBS with 0.3 % Triton X-100, incubated with primary antibodies overnight at 4 °C, followed by appropriate secondary antibodies (see Suppl. Data) for 1 h at room temperature, prior to imaging on a Nikon 80i microscope equipped with a Hamamatsu CCD camera.

Fig. 1 (caption continued) When challenged with persistent hypercarbia and hypoxemia during CPAP, the proband showed no increase in respiratory effort. Heart rate variability during the challenge was minimal. EKG electrocardiogram, SpO2 oxygen saturation, ETCO2 end-tidal CO2, Nasal nasal airflow, Chest chest wall movements from respiratory inductance plethysmography. CPAP pressure of 5 cm H2O was used. SIMV rate was 45 breaths/min. "Early" refers to 45 s after switching to CPAP, "late" refers to 75 s after switching to CPAP. Time scale is shown. c Targeting construct of the patient-specific mouse model. Human PHOX2B exon 3 containing the patient-specific PHOX2B mutation (denoted in blue) is inserted following the unmodified, non-mutated mouse Phox2b exon 3 flanked by loxP sites, to allow conditional expression of the mutant gene by cre recombinase. For detailed generation of the transgenic mouse line see Figure S4. d Endogenous respiratory output. Integrated C4 inspiratory activity from E18.5 control and Hprt-cre, Phox2b∆8 mutant mice under baseline (left) and stimulated (1 μM substance P, right) conditions. Note the lack of response in the Phox2b∆8 mouse brainstem (n = 4; a representative recording is shown)
In vitro mouse explant respiratory physiology
In vitro explant respiratory physiology was performed as described [26]. Brainstem-spinal (en bloc) preparations with an anterior transection near diencephalon-midbrain junction were made using E18.5 embryos as detailed in Suppl. Data.
Quantifications and statistical analysis
All statistical analyses were performed using Microsoft Excel (Mac Office 2008) or R version 2.11.1. Transgenic mouse phenotypes were analyzed with Student's t test.
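For readers who want to reproduce this kind of comparison, a two-sample Student's t test on per-animal cell counts can be run in a few lines. The sketch below uses Python with SciPy rather than the Excel/R workflow described above, and the counts are placeholder values, not data from the study.

```python
from scipy import stats

# Placeholder per-animal counts (n = 3 per genotype), not data from the study
control_counts = [18.0, 16.0, 19.0]
mutant_counts = [17.0, 19.0, 18.0]

# Student's t test (equal variances assumed, as in the classical test)
t_stat, p_value = stats.ttest_ind(control_counts, mutant_counts, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```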
Human CCHS proband 1: clinical history and neuropathological findings
A full-term male presented with respiratory depression at birth, apnea and oxygen desaturation that required mechanical ventilation. Polysomnography demonstrated normal baseline oxygen saturation (SpO2) and end-tidal CO2 (ETCO2) while mechanically ventilated (Fig. 1b, Figure S1). However, when challenged by removal of the ventilator-generated respiratory rate, proband 1 showed hypoventilation resulting in oxygen desaturation (nadir 57 %) and a rise in ETCO2 (peak 82 mmHg) during both wakefulness and sleep. Persistent and profound hypoxemia and hypercapnia failed to induce chemoreceptor reflexes (i.e., increased breathing rate/effort or variation in heart rate); there was no arousal response from sleep (Fig. 1b, Figure S1). Magnetic resonance imaging and spectroscopy of the brain were normal. The electroencephalogram showed normal brain activity, both during wakefulness and sleep, without seizures. Proband 1 had permanently dilated pupils with a non-measurable response to light, suggesting autonomic dysfunction. Because of enteral feeding intolerance, he was dependent on total parenteral nutrition. The intestine showed pervasive aganglionosis from 10 cm distal to the ligament of Treitz to the rectum (Figure S2), indicating HSCR. Genomic DNA analysis demonstrated an eight-nucleotide deletion in exon 3 of PHOX2B (Fig. 1a; cDNA position 691-GGCCCGGG-698; hereafter called PHOX2B∆8). This caused a frameshift that removed the alanine repeat and generated elongated, aberrant residues from amino acid 230 to the C-terminus (Figure S3a). Maternal DNA showed intact copies of PHOX2B; however, our analysis does not rule out possible low-level somatic mosaicism [28]. Paternal DNA was unavailable. Together, these clinical, genetic, and pathological findings confirmed an NPARM PHOX2B mutation and a diagnosis of CCHS with intestinal aganglionosis. Following withdrawal of life support at 6 weeks of age, an autopsy was performed.
Postmortem examination showed a dramatic loss of LC neurons that express dopamine β-hydroxylase (DBH) (Fig. 2a, b). The dorsal median raphe (dMnR), a major source of central serotonergic innervation, was severely diminished. We found additional losses of the hindbrain mesencephalic trigeminal nucleus (MesV) and dorsal motor nucleus of the vagus (DMNV), which derive from Phox2b+ progenitors [12,24] (Fig. 2a, b; Table S1). In contrast, there were no gross or microscopic abnormalities detectable in cerebral cortex, striatum or thalamus (not shown), or significant abnormalities of the medullary arcuate nucleus, CNVII (facial nucleus), or surrounding areas including inferior olive, area postrema or nucleus prepositus ( Figure S3b).
Dysregulated brain stem development in PHOX2B∆8 conditional mouse model
To assess the function of PHOX2B∆8 during development in vivo, we generated a conditional transgenic mouse line by targeted homologous recombination in ES cells (Fig. 1c). Note that because human and mouse exon 3 are identical at the amino acid level, we used the mutant human exon 3 for conditional expression of PHOX2B∆8 in the mouse. In this line, the engineered PHOX2B∆8 allele is activated by bacteriophage P1 cre recombinase, which initiates expression of PHOX2B∆8 protein and a downstream green fluorescent protein (GFP) reporter gene, as shown in Figure S4a.
We crossed this line with the germline driver Hprt-cre to activate recombination in all tissues from early embryonic stages. In PHOX2B∆8 mutant mice, we first confirmed faithful reporter GFP expression in all known Phox2b-expressing regions (e.g., hindbrain nuclei, enteric neurons) using co-labeling with an antibody for the unmutated N-terminal region of Phox2b protein ( Figure S4b). No GFP expression was observed in WT littermates. Finally, we used IHC with an antibody for the C-terminal region of Phox2b protein (a region affected by PHOX2B∆8 mutation) to confirm expression of the non-mutant allele in heterozygotes (not shown). These findings confirmed all expected characteristics of PHOX2B∆8 expression in vivo.
Heterozygous Hprt-cre, Phox2b∆8 pups showed perinatal lethality and died before P1. Harvest just prior to birth at embryonic day 18.5 (E18.5) revealed that only 33 % of mutants took one spontaneous breath (vs. 100 % in control, n = 8 mutants, 15 controls). No mutants showed further spontaneous respiratory effort; thus, all died within minutes of delivery. Electrophysiological recording from E18.5 ex vivo brain stem preparations showed depression of endogenous respiratory motor root output under baseline conditions and in response to the excitatory neuropeptide, substance P, confirming abnormal respiratory phenotype in Hprt-cre, Phox2b∆8 mice (Fig. 1d), in keeping with other mouse models of CCHS [14,26,44,54].
Abnormal noradrenergic structures in brainstem of Hprt-cre, Phox2b∆8 mice
Generation of a patient-specific NPARM CCHS mouse model and findings from our human proband provided an opportunity for cross-species analysis to identify conserved neuropathological features (Table S1, Figs. 2, 3). In the mouse, we observed that the LC was also abnormal and failed to express tyrosine hydroxylase (TH), indicating a synthetic defect in the noradrenergic pathway (Fig. 3a). Absence of TH neurons within the LC was correlated with sparse and small neuronal cell bodies, suggesting cellular loss/attrition rather than selective reduction of TH expression (Figure S5a). In addition, we observed consistent losses in the DMNV and the mesencephalic trigeminal nucleus (MesV) (Fig. 3c, d). Neuronal precursors of the DMNV were detectable at E13.5 in the mouse model (Figure S5b), suggesting that PHOX2B∆8 prevents DMNV formation despite progenitor specification. In contrast, while the dMnR showed severe attrition in the human proband (Fig. 2b), it appeared preserved in the mouse model (Fig. 3b).
No gross abnormalities in the forebrain were observed. In summary, abnormal development of the LC was consistently prominent across species.
PHOX2B∆8 inhibits LC noradrenergic neuronal specification
The LC is the major source of noradrenergic neurotransmitters in the CNS [5], and it projects to circuits in the forebrain, midbrain and hindbrain [45]. As shown (Figs. 3a, 4b), early activation of PHOX2B∆8 in the brainstem of Hprt-cre, Phox2b∆8 mice resulted in developmental failure of TH+ LC neurons. As we observed normal-sized populations of Phox2b-GFP+ precursors (Fig. 4a, discussed below), we conclude that early-onset PHOX2B∆8 expression inhibits LC specification. Consistent with this, we observed widespread abnormalities in noradrenergic circuits, including the caudal hindbrain nuclei A1/C2 and the forebrain projections of the LC to the hypothalamus (Fig. 4b, A1/C2 in Figure S5c).
Our conditional Phox2b∆8 mouse model and cre-driver lines permitted introduction of PHOX2B∆8 at two distinct time points in noradrenergic neuronal development. Whereas Hprt-cre introduces the mutation in the early embryo (<E10.5), Blbp-cre [23] results in CNS-restricted activation of the conditional PHOX2B∆8 allele at E10.5 and later, after most neurogenesis. Fate mapping, using the conditional reporter function of the PHOX2B∆8 mutant locus (Fig. 1c), showed robust onset of GFP expression in Phox2b+ cells at E10.5 with Hprt-cre, but only a confined GFP+ population in the Blbp-cre fate-mapped hindbrain (Figure S5d). In contrast, Blbp-cre targeting in the brainstem was robust after E11.5 (Fig. 4a). Differences in prenatal viability between these two lines were noted (Figure S5e). We next assessed the consequences of early (<E10.5) versus later onset (>E11.5) of PHOX2B∆8 expression for LC development. As shown (Fig. 4a), at E10.5 TH+, presumptive noradrenergic neurons were detectable in the LC of control mice; moreover, these cells co-expressed Phox2b, consistent with previous findings [24]. Such TH+ populations were absent in the early-onset Hprt-cre, Phox2b∆8 mice. In contrast, the late-onset Blbp-cre, Phox2b∆8 model showed normal LC TH+ populations co-expressing Phox2b (Fig. 4a, b). The TH+, Phox2b+ populations did not express GFP, suggesting these cells expressed the wild-type Phox2b allele. Together, these findings suggest that PHOX2B∆8 inhibits LC noradrenergic differentiation in a stage-specific manner. That is, early-onset mutant protein expression derails LC differentiation in a dominant-toxic manner, whereas later-stage expression of the PHOX2B∆8 allele does not interfere with acquisition of TH expression in Phox2b+-derived LC neurons.
One possibility to account for these differences was differential effects of early versus late Phox2b∆8 expression on RTN development [34]. However, histological examination of the brainstems from both the Hprt-cre- and Blbp-cre-driven models showed loss of the RTN and CNVII nuclei, as shown by IHC and quantitative analysis of the markers Phox2b, Islet1 and neurokinin 1 receptor (NK1R) (Fig. 5a-d). The abnormal formation of CNVII was due to failure of precursor migration in the Hprt-cre, Phox2b∆8 mouse model (Fig. 6a-c), in keeping with reported findings in other CCHS mouse models [10,13,26,35]. A similarly mis-located putative CNVII was found in the Blbp-cre, Phox2b∆8 mouse (Figure S5f). Analysis of the MnR in Blbp-cre, Phox2b∆8 mice showed normal numbers of serotonergic neurons expressing TryptH (17.833 ± 1.815 SEM cells per area, p = 0.682, n = 3) and 5HT (27.166 ± 1.249 SEM cells per area, p = 0.793, n = 3, Student's t test) (Figure S5g). Thus, our findings in late-onset Blbp-cre, Phox2b∆8 mice indicate that the RTN is dispensable for generation of a minimal perinatal respiratory rhythm, in keeping with the proposal that self-evoked respiration is possible without the RTN [26,44].
Human CCHS proband 2: clinical history and neuropathological findings
Proband 2 was a preterm male infant born at 27 5/7 weeks of gestation. He was intubated at birth due to poor respiratory effort and received surfactant. Although there was no evidence for chronic lung disease of prematurity, the patient remained ventilator dependent. At approximately 4 weeks of age, lack of respiratory drive despite persistent hypercapnia prompted genetic testing for CCHS. Proband 2 carried a heterozygous PARM PHOX2B mutation, resulting in the most common, 7-residue alanine expansion (PHOX2B 20/27 genotype). After withdrawal of life support at ~41 weeks corrected gestational age, an autopsy was performed with postmortem analysis of the brainstem. While the pons contained a cluster of cells morphologically and anatomically consistent with the LC, as shown in Figs. 2c and S6, these cells failed to express normal levels of DBH or TH, indicating LC dysfunction and impaired noradrenergic synthesis. We did not observe gliosis or other signs indicative of hypoxic damage in the brain stem. In contrast to the NPARM PHOX2B∆8 CCHS proband 1 (Fig. 2b), the MnR, MesV, and DMNV hindbrain nuclei in the PARM subject appeared normal (Figure S6). Taken together, our findings indicate that both NPARM and PARM PHOX2B mutations result in defects of LC neuronal populations in human neonates with CCHS.

Fig. 6 (caption continued) Levels of rostral (R) and caudal (C) hindbrain at E14 are shown. At E13.5, stalled migration of CNVII/RTN, detected by Islet1 and Phox2b antibodies, was found at more rostral levels of the hindbrain (denoted by solid arrowheads) than normally found at the caudal level (denoted by empty arrowheads) in control littermates. c The same pattern described in b was observed at E15.5, implicating a permanent migration defect of CNVII. The number of putative Phox2b+ Islet1+ CNVII cells found in the mutants declined from 37.6 to 16.2 % of WT at E13.5 and E15.5, respectively. Scale bar unit µm
Discussion
The underlying pathobiology of CCHS remains unclear, reflecting in part the complexity of the central and peripheral centers that interact to control respiratory drive. We used a combinatorial approach incorporating human neuropathological analysis of two (NPARM and PARM) human probands and a conditionally activated, NPARM patient-specific transgenic mouse model to study temporal effects of the mutant protein on brainstem development. Our study is the first to describe CNS neuropathological findings in two human cases of CCHS with confirmed PARM and NPARM mutations of PHOX2B, which are summarized in Table S1. Further, we modeled the proband-specific NPARM mutation introduced at several stages of mouse hindbrain development. While several structures are affected variously in these cases, a focus on abnormalities conserved between species revealed defects in LC populations, which probably result from failure to specify LC neurons in the embryonic brain. Our findings suggest that disruption of LC noradrenergic neuron development and function may be a common pathobiological feature of CCHS.
Brainstem pathological findings in two human CCHS cases with confirmed NPARM and PARM PHOX2B mutations
The LC is the major source of noradrenergic innervation to rostral brain regions as well as to the brainstem. Abnormal noradrenergic signaling has been implicated in clinical features of CCHS [32,55]. Studies of CCHS by diffusion MRI (albeit without confirmed PHOX2B mutations) showed altered diffusivity (decreased fractional anisotropy and increased axial and radial diffusivity) in several brainstem regions as well as other potentially connected regions of the mid-hindbrain [41]. Such late structure/function studies in adolescents carry the caveat that injury to the LC or dMnR could have accrued from the cumulative effects of repeated episodes of hypoxemia.
In contrast, the two NPARM and PARM probands we studied were neonates who were intubated and/or managed in an NICU from birth, with stringent monitoring and interventions to prevent hypoxemic events after birth. Proband 1 carried an NPARM mutation (PHOX2B∆8) and demonstrated severe CCHS and intestinal aganglionosis (Haddad syndrome). Postmortem analysis showed several brainstem nuclei to be abnormal (Table S1), including almost total absence of LC neurons. Furthermore, the second, PHOX2B 20/27 PARM proband also showed defects in DBH and TH expression in the context of a well-formed LC, indicating deficient numbers of functioning noradrenergic neurons. The correspondence of abnormalities in the LC in both human cases is striking. While further confirmation in additional cases of CCHS would be useful, pathological specimens from confirmed cases of neonatal CCHS with PHOX2B mutations are extremely rare. We note these data are consistent with a previously reported case of CCHS (albeit without a confirmed PHOX2B mutation) showing significant defects in noradrenergic cell number [52]. Together, these findings suggest that abnormal development of the LC is common in CCHS.
NPARM PHOX2B∆8 permits brainstem noradrenergic neuron precursor allocation but inhibits differentiation in a stage-restricted manner
Our findings demonstrate that the inhibitory effects of PHOX2B∆8 proteins during LC development are stage restricted. The murine LC is formed during E9-E11 [50], and by E11.5 it is a clearly identifiable structure [3]. The equivalent human developmental stage for mesencephalic TH neurons occurs at 6.5-8 weeks postconception [16,36]. We found that early mutant protein expression (using Hprt-cre) prevented LC neuron specification/differentiation to a TH+ state. In contrast, delayed expression of the mutation (with Blbp-cre) permitted LC neuron differentiation to the TH+ stage. Together, these findings indicate that PHOX2B∆8 proteins inhibit early LC neuronal specification, rather than the program of expression characteristic of mature LC neurons.
Abnormal respiratory arousal during NREM sleep is associated with dysregulation of central adrenergic [38] and serotonergic [25,43] signaling. Caudally, the LC densely innervates the serotonergic dorsal raphe nucleus [30]. The dorsal raphe nucleus does not express PHOX2B. Therefore, the finding in human proband 1 that the dorsal raphe was lost is consistent with the possibility of long-term failure of normal feedback mechanisms. For example, classical ultrastructural studies in experimental animals have demonstrated innervation of serotonergic neurons by noradrenergic locus coeruleus neurons [4]. This circuitry between the noradrenergic and serotonergic systems raises the possibility that disease affecting the noradrenergic system could cause secondary effects on serotonergic neurons through transneuronal degeneration mechanisms similar to those seen in neurodegenerative diseases [15]. In keeping with this possibility, the NPARM early-onset mouse model did not show defects in the dMnR. Moreover, we observed that noradrenergic circuit formation in the hypothalamus and the A1/C2 nuclei of the brain stem was also rescued in the Blbp-cre, Phox2b∆8 mice, suggesting that PHOX2B∆8 generally exerts its impact on noradrenergic neuron development at an early stage. Further work is needed to identify the precise gene targets affected in the early-onset phenotype (see discussion below).
Evidence that RTN is dispensable for perinatal respiratory drive in the NPARM CCHS model
The RTN is generally thought to have critical roles in perinatal respiratory control in rodents [21]. However, while conditional targeting of Phox2b function resulted in failure of RTN development and lethal respiratory compromise in one study [14], another study that selectively targeted disruption of the RTN with Egr2-cre did not cause perinatal respiratory lethality and suggested that the importance of the RTN might be specific to chemosensation [44]. We found that while late-onset PHOX2B∆8 expression caused failure of RTN and CNVII development, the animals showed near-normal perinatal respiration, indicating dispensability of the RTN for this function. Unfortunately, such Blbp-cre, PHOX2B∆8 animals did not survive past P1 due to oropharyngeal problems preventing feeding, and so further testing was not performed. Nevertheless, our studies suggest the RTN is dispensable as an early regulator of respiratory drive. In humans, a structure equivalent to the RTN has been suggested [29,47], but its existence remains controversial. Despite exhaustive efforts, we failed to identify an RTN-like structure in our proband cases or in specimens from five age-matched unaffected subjects, and the facial nucleus (CNVII) was normal in appearance in the human PHOX2B∆8 proband (Figure S3b).
Dysregulation of locus coeruleus development might be a general feature of human CCHS
Understanding mechanisms that underlie CCHS has general implications for development of human respiratory control [22] and other disorders of respiratory and autonomic regulation including Rett Syndrome [37], sudden infant death syndrome (SIDS) and apnea of prematurity. Findings from two human CCHS cases indicate that NPARM and PARM PHOX2B mutations disrupt development of LC noradrenergic populations, a finding that is phenocopied in the NPARM CCHS mouse model we generated, but not in another previously reported mouse model of PHOX2B 20/27 PARM [13], which failed to show defects in the LC. Why the mouse PHOX2B 20/27 model fails to capture aberrant LC neuron differentiation is unclear, but might reflect differences between mouse and human development in the context of 20/27 PARM mutations.
Several lines of evidence support the role of the LC as a central chemosensor [17]. First, Phox2a function is required for differentiation of the LC but not of other noradrenergic centers (locus subcoeruleus and groups A7, A5, A2 and A1), and loss of Phox2a function results in depressed central respiratory drive [60]. Thus, it is possible that mutant PHOX2B proteins act in a "dominant-negative" fashion to inhibit LC neuron specification. Second, loss of the LC is associated with markedly decreased breathing frequency [56]. Third, in a mouse model of Rett syndrome, caused by mutations of methyl-CpG-binding protein 2 (MECP2), there is loss of LC neurons [46,51], and breathing dysfunction with decreased CO2 chemosensitivity [61]. Together, these findings suggest that LC dysfunction might explain, at least in part, the central CO2 chemo-insensitivity in CCHS as well as the failure of normal respiratory arousal. Further research is needed to evaluate the utility of pharmacological approaches targeting the noradrenergic signaling imbalance, so that the deleterious effects of PHOX2B mutations on CCHS patients are attenuated. Indeed, a maturational decrement in ventilatory slope in response to hypercarbia/hypoxia is observed in CCHS [8], suggesting that early pharmacologic noradrenergic stimulation might assuage disease progression.
LEAN AND SIX SIGMA METHODOLOGIES IN NHS SCOTLAND: AN EMPIRICAL STUDY AND DIRECTIONS FOR FUTURE RESEARCH
The quality and efficiency of manufacturing and services have been greatly improved through continuous improvement methodologies such as Lean and Six Sigma over the last 25 years or so (Antony et al., 2012; McAdam et al., 2011; Lindsay and Kumar, 2012). However, the applications of such methodologies in the healthcare industry are in their infancy in many countries, including Scotland (Lindsay and Kumar, 2012). Moreover, while there is substantial evidence for the application of both Lean and Six Sigma in the manufacturing sector, there is limited empirical evidence in the current literature demonstrating the application of these methodologies within the NHS, UK.
INTRODUCTION
The quality and efficiency of manufacturing and services have been greatly improved through continuous improvement methodologies such as Lean and Six Sigma over the last 25 years or so (Antony et al., 2012; McAdam et al., 2011; Lindsay and Kumar, 2012). However, the applications of such methodologies in the healthcare industry are in their infancy in many countries, including Scotland (Lindsay and Kumar, 2012). Moreover, while there is substantial evidence for the application of both Lean and Six Sigma in the manufacturing sector, there is limited empirical evidence in the current literature demonstrating the application of these methodologies within the NHS, UK.
A recent study carried out by the American Society for Quality (ASQ) has shown that the correlation between the deployment of Lean and Six Sigma within 77 hospitals and improved clinical outcomes and financial performance appeared equivocal (ASQ, Lean Six Sigma Hospital Study Advisory Committee, 2009). The study also revealed that a high percentage of hospitals, especially those without Lean or Six Sigma deployments, do not track common operational metrics (length of stay and patient complaints, for example) or financial metrics (cost per patient, for example).
Although the authors have identified over 200 papers relevant to Lean and Six Sigma in healthcare, the current focus on evidence-based management to improve quality in healthcare rests largely on conceptual arguments, and there are very few empirical studies that examine the benefits of Lean and Six Sigma methodologies for clinical outcomes, patient safety, efficiency and financial performance. The purpose of this research was to examine the role that Lean and Six Sigma have within NHS Scotland in improving the efficiency and performance of the organization and the care provided to its patients. The study involved collecting data through a survey questionnaire distributed to various hospitals in Scotland, followed by a number of semi-structured interviews with people who were involved in the use of Lean and Six Sigma methodologies in NHS Scotland.
The remainder of the paper is structured as follows: first we provide a literature review on Lean and Six Sigma, with a focus on empirical studies carried out by other authors in the context of healthcare; this is followed by the research methodology used, a report on the key findings, and finally our conclusions and an agenda for future research.
LITERATURE REVIEW
Lean and Six Sigma are two powerful methodologies for improving the efficiency and effectiveness of healthcare processes. Lean is based on long-held practices advanced by the Toyota Motor Corp., with an emphasis on removing waste from organisations while focusing on and delivering more value to customers. Six Sigma, coined by Motorola Corporation, focuses on the application of powerful statistical methods to understand, quantify and reduce process variation (Kumar et al., 2011). The purpose of Lean Thinking in healthcare is to create an environment for improving flow and eliminating waste. Six Sigma, on the other hand, helps to identify and quantify problems that are related to variation in processes. Both are powerful strategies to focus efforts in the areas that offer the most potential improvement. Despite their disparate roots, it is quite clear that Lean and Six Sigma encompass many common features, such as an emphasis on customer satisfaction, a culture of continuous improvement, comprehensive employee involvement and a search for root causes. Lean always asks the question, "Why does this process exist at all? What is the value and the value stream?". Six Sigma starts with "How can we improve this process?" It does not ask "Why does it exist at all?" (Antony and Banuelas, 2001).
The following are some of the commonalities and fundamental differences between the Lean and Six Sigma methodologies (Kumar et al., 2006; Dahlgaard and Dahlgaard-Park, 2006; Snee, 2010).
Commonalities include:
• Both are continuous business process improvement methodologies
• Both focus on business needs defined by the customer
• Both are practical methods, proven to work in many organisations
• Both involve a comprehensive toolkit for tackling process-related problems

Fundamental and critical differences include:
• Lean is primarily good for a quick, initial round of improvements, whereas Six Sigma is suitable for long-term and complex problems where the solutions are either unknown or only vaguely known.
• Lean requires low investment, due to the nature of the training and the skills to be developed as a result of this training, whereas Six Sigma demands high investment and is not suitable for fixing common-sense problems in the business.
• Lean has less emphasis on statistical tools and techniques, whereas Six Sigma requires the use of applied statistical methods for understanding and reducing variation in processes.
• Lean has no formal organizational infrastructure for implementation and deployment, whereas Six Sigma has a well-defined organizational infrastructure (yellow belts, green belts, black belts, master black belts, deployment champions and, in some cases, sponsors).
• Lean looks into mapping the end-to-end process and uses value stream exercises to understand the interactions between processes, whereas system interaction between processes is not considered in a typical Six Sigma problem-solving scenario, which would possibly sub-optimize the overall process performance.
According to George (2002), Six Sigma does not directly address process speed, and so the lack of improvement in lead time in companies applying Six Sigma methods alone is understandable. These companies also generally achieve only modest improvement in Work in Process (WIP) and finished goods inventory turns. In a similar manner, companies engaged in the Lean methodology alone show limited improvements across the organization due to the absence of a Six Sigma organizational infrastructure. In essence, an integrated approach utilizing the best of Six Sigma and Lean strategies will maximize shareholder value by accomplishing dramatic improvements in customer satisfaction, cost, quality, speed and invested capital. Companies practicing the integrated approach will gain four major benefits (George, 2002): they become faster and more responsive to customers; strive for Six Sigma capability levels; operate at the lowest cost of poor quality; and achieve greater flexibility throughout the business.
In the case of a patient visiting a medical facility, it is important that the patient receives due attention at the earliest opportunity in a predetermined flow. If one were implementing Lean Thinking alone, the solution could lead to a very fast process in a flow, but a dissatisfied patient due to lack of attention from a physician. If one were implementing Six Sigma alone, the patient would have a great visit, but the medical facility may not be able to keep up with the number of patients required to be a financially viable organisation.
According to Bisgaard and De Mast (2006), an integrated framework for Lean Six Sigma consists of the following elements:
• A structured approach - the deployment infrastructure is based on a task force consisting of champions, Black Belts and Green Belts.
The authors also argued that if hospitals wish to deliver world-class healthcare in the face of constrained resources and greater demand, they need to develop a long-term vision and world-class leadership to sustain the initiative and get Lean embedded into the DNA of healthcare organisations.
De Souza and Pidd (2011) explored the barriers to Lean healthcare based on experience of applying Lean thinking in the UK's NHS. The authors concluded that many of the barriers are people-based or organizational, apart from inappropriate jargon and a worry that people will be treated like widgets. The perception that Lean is primarily meant for manufacturing can also be a major barrier to Lean implementation. The authors also observed that functional and professional silos are seen as a major barrier to Lean implementation.
RESEARCH METHODOLOGY
The fundamental purpose of this study was to "examine the extent to which Lean and Six Sigma methodologies are being implemented within NHS Scotland". In order to do this effectively, the general objectives were further divided into a number of specific research questions. In order to achieve the research objectives, a survey questionnaire was initially constructed drawing upon prior literature. Given that the majority of the research questions are 'What' type questions, an exploratory survey research strategy was adopted for data collection (Saunders et al., 2010; Yin, 2009). The survey is perhaps the dominant form of data collection in the social sciences, providing for efficient collection of data over broad populations and being amenable to administration in person, by telephone, or over the Internet (Easterby-Smith et al., 2008; Saunders et al., 2010; Fowler, 2002). A survey questionnaire allows for the largest and most thorough amount of data that can be collected within the boundaries of this study (Fowler, 2002).
The survey questionnaire was first pilot-tested with ten participants from the NHS and academia. Based on their comments, five questions were dropped and another eight questions were reworded. A Likert scale of 1-5 was used for the critical success factors and the tools & techniques sections of the questionnaire. The final questionnaire was mailed out to 800 people in various regions of the National Health Service in Scotland. Of the 800 questionnaires mailed, 90 completed questionnaires across 18 Health Board Regions (HBRs) were returned. This represented a response rate of over 11%, which was regarded as satisfactory (Saunders et al., 2010). A total of 12 responses were not useable due to incomplete data. This resulted in only 78 completed questionnaires being used in the final analysis of this paper. Table 1 presents the breakdown of respondents who completed the questionnaire.
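The response-rate figures quoted above follow directly from the counts reported; the short Python sketch below simply reproduces that arithmetic and is included only as an illustration of how the percentages are derived.

```python
mailed, returned, unusable = 800, 90, 12
usable = returned - unusable  # 78 questionnaires used in the final analysis

print(f"Response rate: {returned / mailed:.1%}")                  # just over 11 %
print(f"Usable questionnaires: {usable} ({usable / mailed:.1%} of those mailed)")
```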
Current Status of Lean and Six Sigma
The first part of our investigation was to understand the current quality and process improvement initiatives utilized by NHS Scotland. It was found that a number of hospitals are using Kaizen-related activities as part of process improvement. We also found that, although Lean has been embraced by a number of NHS Trusts in Scotland, there was a clear lack of evidence of Lean thinking being used to change the culture of a particular NHS Trust. Six Sigma appears to be still new to many hospitals, which are currently tackling several quick-win projects. As expected, ISO 9001 is the main quality improvement initiative being implemented by the Health Service Executives in many hospitals. Lean has been used for tackling the following types of projects in various hospitals: waiting time reduction in A&E; length of stay in A&E; throughput of operating theatres; turnaround times at operating theatres; improved patient flow across the hospital (i.e., streamlining the cycle from referral to admission); improved discharge management; and improved patient safety and patient satisfaction.
The question of the key indicators that come into play when prioritising Lean/Six Sigma projects within the hospitals generated many more responses. In terms of key indicators, patient requirements were reported to be important or very important by many of the respondents. At the same time, poorly performing areas in the organisation and multi-disciplinary projects were also indicated as important indicators for Lean efforts.
It was also noted that many of the participating hospitals using Lean (approximately 70%) have between 2 and 5 years of experience with it. About 20% of hospitals have been using Lean for between 5 and 7 years, and 10% have been practicing Lean for over 8 years. Moreover, it was interesting to observe that fewer than 5% of participating hospitals are using the Six Sigma methodology for tackling process variability problems in the hospitals.
Tools and Techniques of Lean and Six Sigma
One of the success factors of both Lean and Six Sigma is the ability to use their toolboxes in a systematic and disciplined manner. Table 2 illustrates the most commonly used tools and techniques of Lean and Six Sigma within NHS Scotland. Respondents were asked to rate the application of Lean and Six Sigma tools and techniques (i.e., usage) on a Likert scale of 1 to 5, where '1' indicates 'never been used' and '5' indicates 'used continuously'. Similarly, the degree of perceived usefulness was rated on a scale of 1 to 5, where '1' implies 'not useful' and '5' implies 'extremely useful'.
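As a rough, hypothetical sketch of how such Likert data might be summarised (this is not the authors' analysis), the snippet below computes mean usage and usefulness scores per tool; the tool names echo techniques mentioned elsewhere in this paper, but every rating value is an invented placeholder.

```python
# Illustrative sketch only: one plausible way to summarise the 1-5 Likert
# ratings of tool usage and perceived usefulness described above. The tool
# names and scores below are placeholders, not the study's actual data.
from statistics import mean

# Each tool maps to the list of 1-5 ratings given by respondents.
usage = {
    "Value-stream mapping": [4, 5, 3, 4],
    "5S": [3, 4, 4, 2],
    "Statistical Process Control": [1, 2, 2, 1],
}
usefulness = {
    "Value-stream mapping": [5, 4, 4, 5],
    "5S": [4, 3, 4, 3],
    "Statistical Process Control": [2, 3, 2, 2],
}

# Rank tools by mean usage, reporting mean usefulness alongside.
for tool in sorted(usage, key=lambda t: mean(usage[t]), reverse=True):
    print(f"{tool}: usage {mean(usage[tool]):.1f}, "
          f"usefulness {mean(usefulness[tool]):.1f}")
```

Reporting mean usage and mean usefulness side by side makes any divergence between 'widely used' and 'seen as useful' immediately visible, which is the kind of contrast Table 2 is meant to support.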
As can be seen from the
Key Benefits of Lean and Six Sigma
As the success of Lean and Six Sigma initiatives hinges on project execution, it was important to understand the areas in which projects are carried out across the participating hospitals. Figure 1 shows the typical benefits of Lean projects carried out across the 18 Health Board Regions (HBRs). The areas that have experienced the greatest benefits include reduction in operational costs, reduction in patient waiting times, and waste reduction in processes. Across the 18 Health Board Regions, we found that over 40 projects were Lean related and about 8 were Six Sigma related. The Six Sigma related projects focused on reducing the number of medication errors, MRI examination times, pathology laboratory turnaround times, X-ray film defects, etc.
Figure 1 - Typical benefits of Lean projects from the participating hospitals
Critical Success Factors (CSFs) for the successful implementation of Lean and Six Sigma Strategies in NHS Scotland
The idea of identifying CSFs as a basis for determining the information needs of managers was popularized by Rockart (1979). In the context of Lean and Six Sigma methodologies, CSFs represent the essential ingredients without which the initiative stands little chance of success. Leaders in the health care industry should consider the application of Lean and Six Sigma from the perspective of improving the flow, quality and capability of current processes, as well as the ability of those processes to deliver patient care and safety tomorrow. The following CSFs were perceived to be essential for the successful development and deployment of both Lean and Six Sigma initiatives in NHS Scotland. Respondents were asked to rate the CSFs identified from the existing literature on a scale of 1 to 5 (1 = least important, 2 = less important, 3 = important, 4 = very important and 5 = crucial). The CSFs used in this study were derived from the existing TQM and Six Sigma literature (Adams et al., 2003; Antony and Banuelas, 2002; Antony et al., 2008; Antony et al., 2007; Hilton et al., 2008; Timans et al., 2011; Yusof and Aspinwall, 1999; Badri et al., 1995; Black and Porter, 1996). Table 3 lists the CSFs in terms of importance (the expected importance of a factor according to the survey participants) and practice (the experienced or perceived importance of a factor). The top five factors (from a list of 18 factors identified from the literature) perceived as important by the hospitals in this study were:
• Senior management commitment and involvement
• Focusing on critical processes for improvement
• Establishing a culture for continuous improvement
• Focusing on the needs of patients
• Establishing measurement and feedback systems
Among the factors perceived as least important, it was very interesting to observe that projects were not selected by critically examining their alignment with the strategic objectives of hospitals or government targets. Moreover, very little attention was paid to training programmes covering the methodology, tools and techniques of Lean and Six Sigma for solving process and quality problems. We also found that many hospitals did not have a Lean or Continuous Improvement champion to identify, monitor and review the progress of continuous improvement projects. The authors also noticed that the participating hospitals did not have a model or roadmap for sustaining the Lean initiative, which in our opinion is absolutely essential for embedding Lean/Six Sigma practices into the culture of NHS Scotland.
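To make the importance-versus-practice comparison in Table 3 concrete, here is a small hedged sketch (not the study's actual data or code): the factor names are taken from the text, while the numeric scores are hypothetical placeholders on the same 1-5 scale.

```python
# Illustrative sketch only: ranking CSFs by mean "importance" and comparing
# with mean "practice" scores, in the spirit of Table 3. The factor names come
# from the text above, but the numeric scores are hypothetical placeholders.
csf_scores = {
    # factor: (mean importance, mean practice), both on the 1-5 scale
    "Senior management commitment and involvement": (4.8, 3.1),
    "Focusing on critical processes for improvement": (4.6, 3.4),
    "Establishing a culture for continuous improvement": (4.5, 2.9),
    "Focusing on the needs of patients": (4.4, 3.6),
    "Establishing measurement and feedback systems": (4.3, 3.0),
    "Linking Lean/Six Sigma to business strategy": (2.9, 2.2),
}

# Sort by importance (descending) and flag the gap between importance and practice.
for factor, (importance, practice) in sorted(
        csf_scores.items(), key=lambda kv: kv[1][0], reverse=True):
    gap = importance - practice
    print(f"{factor}: importance {importance:.1f}, "
          f"practice {practice:.1f}, gap {gap:.1f}")
```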
Common Barriers in the successful implementation of Lean and Six Sigma Strategies in NHS Scotland
Several barriers and challenges lurk below the surface for the health care industry and need to be considered before the implementation and deployment of a Lean or Six Sigma strategy. The respondents were asked to mark the five barriers that they thought were the most important in terms of implementing Lean and Six Sigma. The top five barriers identified for the Lean and Six Sigma initiatives are shown in Figures 2 and 3. For the Lean initiative, culture and resistance to change was considered the most important barrier, whereas availability of resources and time was deemed the most important barrier to the Six Sigma initiative.
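Since respondents each ticked their five most important barriers, the underlying analysis is essentially a frequency count over multi-select responses. The following is a minimal sketch of that tally under made-up responses; only the barrier labels are drawn from the surrounding text.

```python
# Illustrative sketch only: tallying "pick your top five barriers" responses to
# produce the kind of ranking shown in Figures 2 and 3. The barrier names come
# from the text; the individual responses here are made-up placeholders.
from collections import Counter

# Each respondent returns the set of barriers they ticked (up to five).
responses = [
    {"Culture and resistance to change", "Availability of resources and time"},
    {"Culture and resistance to change", "Poor training or coaching"},
    {"Availability of resources and time", "Poor training or coaching",
     "Lack of leadership and strategic vision"},
]

counts = Counter(barrier for picked in responses for barrier in picked)
for barrier, n in counts.most_common(5):
    print(f"{barrier}: selected by {n} respondent(s)")
```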
The cultural issues of the NHS are very difficult to change overnight. However, the authors believe the starting point for initiatives such as Lean Six Sigma is to run a one-day workshop covering the management aspects of Six Sigma and some of the key challenges of implementing such an initiative in a healthcare context. Availability of resources and time is a perennial excuse for many public sector organizations. To minimize the demand on budget and resources, it is best to train about 5 to 10 of the most talented people in the organization in the first wave of training. The focus must be on the execution of projects and on the selection of projects that are aligned with strategic business objectives or government targets. In the NHS, there appears to be a clear lack of leadership and strategic vision with regard to continuous business improvement methodologies.
The respondents also ranked poor training or coaching highly among the common barriers to the execution of a Lean and Six Sigma strategy in the NHS. To gain a greater understanding of the status of the Lean Six Sigma methodology within NHS Scotland, a number of semi-structured interviews were performed. The participants so far have included 2 nurses, 2 Clinical Governance Managers, 1 Clinical Governance Head, 3 Medical Directors and 2 Consultants. Most of the interviews lasted between 30 and 45 minutes. What was interesting from the interviews is that the responses about the use of Lean or Six Sigma within NHS Scotland can be grouped into two categories: either the principles are known but not used fully and effectively, or, when attempts are made to use these principles, the work goes unnoticed by others in the organization. It is also notable that the majority of the interviewees reported knowledge of quality management tools and techniques; however, the findings of the survey suggest that the majority of the tools and techniques being used are not seen as useful by staff.
CONCLUSION AND AGENDA FOR FUTURE RESEARCH
This paper presents the results of a pilot study on the status of both Lean and Six Sigma initiatives within NHS Scotland. It appears that there is a lack of management commitment within the NHS to institute a culture of using Lean and Six Sigma methodologies and to encourage employees to sustain those efforts. The research on the topic is quite clear that implementing such changes in an organization requires motivation and communication with employees and others who will be directly affected by, and responsible for, the changes. The results of this study suggest that upper management within the organization is not getting directly involved in the kind of implementation that would encourage widespread use of Lean and Six Sigma across NHS Scotland. The findings of this study serve an important purpose not only for those within NHS Scotland but also within the NHS across the UK. The ability to successfully implement Lean and Six Sigma cannot be left to only a few people; instead, it must be something that the entire organization takes seriously. This requires that senior management provide the resources and training necessary to make it happen. At the same time, there must be encouragement, which can range from financial incentives to simply providing constructive feedback to employees. In the end, it appears that NHS Scotland has quite a long way to go before it can embed Lean or Six Sigma into the fabric of the organisation, or even make Lean or Six Sigma the way to work. In fact, from the data obtained in this study and the attitudes of staff members, it would seem that major changes in the culture of the organization will be required for any implementation to succeed within the next 5 to 10 years. As part of future research, the authors will increase the sample size of the survey and turn it into a longitudinal study to assess the status of NHS Scotland. We also intend to pursue a number of further semi-structured interviews in the forthcoming months to obtain greater insight into the implementation of these initiatives. The authors would also be keen to develop a bespoke roadmap for the development of Lean and Six Sigma in the NHS, along with a toolkit to support that roadmap.
Figure 2 - Barriers to implementation of Lean
Figure 3 - Barriers to implementation of Six Sigma

• Organisational anchoring of solutions - to secure the implementation of solutions and guard against backsliding, tasks and responsibilities are clearly defined, procedures are standardised, etc.
• Linking project selection with business strategy - it is important to ensure that projects are aligned with the overall strategic objectives of the business.

The types of projects performed within the healthcare organisations were focused on three categories: cycle time reduction, process flow improvement, and medical-error reduction. Feng and Manuel (2008) assessed the evidence on Six Sigma and Lean in the healthcare industry. They conducted a structured systematic review of articles on the use of Lean and Six Sigma in healthcare settings published between 1999 and 2009. Of the 177 studies published during the 10-year period, it was interesting to note that 70% were related to Six Sigma, 23% were related to Lean and 7% were related to both Lean and Six Sigma. The authors found that the level of evidence supporting a positive relationship between the use of Lean/Six Sigma and performance improvement was weak. They also found that most studies focused on Lean/Six Sigma to improve processes of care, while few studies focused on Six Sigma/Lean to improve clinical outcomes. The authors also found limited literature on the failures of Lean/Six Sigma. The study carried out by the ASQ Lean Six Sigma Hospital Study Advisory Committee (ASQ, 2009) showed the level of adoption of Lean and Six Sigma practices at US hospitals. The inability to sustain improvement was cited as one of the greatest challenges to the successful deployment of both Lean and Six Sigma. Other challenges included competition from other initiatives, leadership commitment, availability of resources, motivating employees, and expertise and skills. The study also demonstrated that a majority of the hospitals participating in the study have applied the following specific tools and techniques of Lean and Six Sigma: value-stream mapping, 5S, Pareto analysis, Failure Mode and Effect Analysis, Statistical Process Control, five whys, seven or eight forms of waste, and visual management. Feng and Manuel (2008) also present the results of a national survey of Six Sigma programs in US healthcare organisations. A total of 56 hospitals responded to this survey, of which 15 were practicing Six Sigma while 41 were non-Six Sigma organisations. Most of the Six Sigma organisations had implemented the program for less than four years, which suggests that the Six Sigma program is still in its infancy in healthcare organisations. The authors also found that lack of commitment from leadership is the major source of resistance or barrier to successful implementation.
Table 1 - Breakdown of respondents to the survey questionnaire
Table 2 - Most commonly and least commonly used tools & techniques of Lean and Six Sigma
Table 3 - CSFs for successful introduction of Lean and Six Sigma
• Linking Lean/Six Sigma to business strategy, Government targets, etc.
When Diarrhea Can Become Deadly: Legionnaires' Disease Complicated by Bowel Obstruction
Legionnaires' disease may present with a broad spectrum of illnesses and nonspecific extra-pulmonary symptoms including diarrhea. To our knowledge, bowel obstruction has not been reported as a manifestation of Legionella. We present a unique case of Legionnaires' disease contributing to a small bowel obstruction.
Introduction
Legionella pneumophila is a gram-negative bacillus which can cause atypical pneumonia in human hosts. The pathogen was first discovered in 1976 during an epidemic in Philadelphia among American Legion members, hence the name "Legionnaires' disease" [1][2][3]. There are many species of Legionella, but L. pneumophila is thought to cause more severe disease than the more common bacteria contributing to community-acquired pneumonia [2]. A facultative intracellular pathogen, this bacterium lives within amoebae as a parasite and replicates freely within host macrophages, which allows it to survive harsh environments and disseminate throughout the host [1][2][3]. The incubation time is 2-10 days for development of the pneumonic form. An estimated 25,000-100,000 persons are affected annually in the United States by this dangerous bacterium [4].
Infection with Legionella can be acquired by inhalation or aspiration of a contaminated water source including freshwater ponds and creeks, cooling towers, air conditioning systems, water fountains, respiratory-therapy equipment, and humidifiers -explaining both the community-acquired and nosocomial routes of transmission [2,3]. Patient risk factors for contraction include smoking, alcohol use, chronic lung disease, end-stage renal disease, diabetes, malignancy, and immunosuppression (i.e. HIV/AIDS) [2,3,5]. Community-acquired Legionella patients typically have extrapulmonary manifestations initially and seek medical care much later following initial infection [6]. With delayed or inappropriate antibiotic use against Legionella, mortality has been reported to be around 60-70%, but appropriate and rapid initiation of treatment decreases mortality to 10-20% [7]. A urinary antigen test for L. pneumophila serogroup 1, which accounts for 90% of Legionella infections, has a sensitivity of 70% and a specificity of 100% [1]. Treatment with macrolides and fluoroquinolones is considered the first-line therapy for this bacterium [5].
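As a back-of-the-envelope illustration (not from the original report), the quoted 70% sensitivity and 100% specificity of the urinary antigen test can be translated into predictive values for an assumed pretest probability; the 20% prevalence used below is purely a placeholder.

```python
# Back-of-the-envelope sketch (not from the original report): what the stated
# 70% sensitivity and 100% specificity of the urinary antigen test imply for
# predictive values. The 20% pretest probability is an assumed placeholder.
sensitivity = 0.70
specificity = 1.00
pretest = 0.20  # assumed probability of Legionella in a tested patient

tp = pretest * sensitivity                 # true positives
fn = pretest * (1 - sensitivity)           # false negatives (missed cases)
fp = (1 - pretest) * (1 - specificity)     # false positives (zero at 100% spec.)
tn = (1 - pretest) * specificity           # true negatives

ppv = tp / (tp + fp) if (tp + fp) else float("nan")
npv = tn / (tn + fn)

print(f"PPV: {ppv:.0%}")   # 100%: a positive test effectively rules Legionella in
print(f"NPV: {npv:.0%}")   # ~93%: a negative test does not fully rule it out
```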
Legionnaires' disease may begin with a broad spectrum of illnesses and nonspecific symptoms: fever, nonproductive cough, malaise, myalgia, anorexia, and headache. This constellation of symptoms can ultimately progress into respiratory and multi-organ failure [1,8]. The extrapulmonary symptoms include neurologic changes, rhabdomyolysis, acute renal failure, electrolyte abnormalities, and various gastrointestinal manifestations [1,2,8,9]. Multiple gastrointestinal symptoms from Legionella have been reported, including nausea, vomiting, elevated transaminases, secretory diarrhea (20-40%), peritonitis, and even hemorrhage secondary to stress ulcers [2,3,9]. To our knowledge, this is the first reported case of bowel obstruction attributed to Legionella infection.
Case Presentation
A 67-year-old female with osteoporosis, hypothyroidism, and a remote abdominal hysterectomy presented to the emergency department with a chief complaint of 2 days of vomiting, watery diarrhea, confusion, lethargy, and high fever. Her review of systems was negative for respiratory or other abdominal complaints. Her medications consisted of levothyroxine and vitamin D with calcium supplements. Her social history was benign: she was a nonsmoker, nondrinker, and lived independently with her husband. On initial evaluation in the emergency department, she was hemodynamically stable (heart rate 97, blood pressure 163/76) but febrile at 103°F and mildly tachypneic. Her physical examination demonstrated dry mucous membranes and diminished breath sounds at the left base of her lungs but was otherwise nonfocal, including a benign abdominal examination (soft, nontender, nondistended, normal bowel sounds). Her laboratory examinations demonstrated a moderate leukocytosis (white blood cell count 14,000/cu mm) with 9% bands, elevated creatinine (1.28 mg/dL, baseline 0.8 mg/dL) and BUN (22 mg/dL), lactate 3.74 mmol/L, creatine phosphokinase 15,445 U/L, hyperglycemia of 249 mg/dL, and elevated transaminases (ALT 91 U/L, AST 170 U/L, alkaline phosphatase 137 U/L, total bilirubin 2.0 mg/dL with direct bilirubin 1.2 mg/dL), with a negative lipase (9 U/L) and albumin 3.0 gm/dL. Urinalysis was significant for occult blood, protein of 100 mg/dL, and urine ketones.
She was admitted for sepsis, dehydration, and rhabdomyolysis, all from presumed viral gastroenteritis. A chest X-ray (Fig. 1) revealed a questionable left lower lobe pulmonary opacity. After a right upper quadrant ultrasound demonstrated no significant hepatobiliary pathology, she underwent a computerized tomography (CT) of the abdomen and pelvis. The CT failed to pinpoint an etiology for her abdominal symptoms but confirmed a dense left lower lobe pulmonary infiltrate. Community-acquired pneumonia antibiotic coverage was initiated with azithromycin and piperacillin/tazobactam (the latter for possible aspiration with her confusion), and she was aggressively hydrated with normal saline. Hepatitis panel, TSH (1.77 UIU/mL), and hemoglobin A1c (5.7%) were unremarkable. After 2 days on antibiotics with persistent fever and gastrointestinal symptoms, urine Legionella antigen was ordered and subsequently positive on day 3 of admission. Her antibiotics were changed to levofloxacin upon detection, but her nausea and vomiting acutely worsened along with severe abdominal distension. An abdominal X-ray (Fig. 2) demonstrated multiple dilated small bowel loops consistent with intestinal obstruction. A repeat CT of the abdomen and pelvis (Fig. 3) for further characterization confirmed a high-grade partial small bowel obstruction with a transition point in the distal ileum. She clinically improved with conservative management of her intestinal obstruction with nasogastric suction and intravenous fluid resuscitation while completing a course of antibiotics for L. pneumophila infection.
Discussion
The most common risk factors for the development of intestinal obstruction include previous abdominal surgery, neoplasms, hernias, electrolyte abnormalities, and medications [10]. While our patient had a remote history of hysterectomy, other identifiable risk factors for her distension (including opioid medications) were not present in her care. A normal abdominal CT 3 days before the obstruction makes it unlikely that an anatomical defect was contributory. These findings led us to believe that her legionellosis was primarily responsible for the development of this potentially fatal complication. As this complication has not previously been reported, the exact mechanism is unknown; however, we hypothesize that it may be directly attributable to specific systemic responses observed in Legionella infection. The host's cell-mediated immune response becomes activated when the bacterium invades macrophages for replication, initiating an overwhelming immune cascade triggered by the lipid A component of the lipopolysaccharide on its outer membrane [11]. Lipid A directly stimulates tumor necrosis factor alpha (TNFα) production, which causes massive cytokine release and is primarily responsible for chloride secretion and the subsequent development of diarrhea and intestinal inflammation [1, 11-14]. We hypothesize that our patient's TNFα response directly caused massive gut inflammation and edema leading to hypoperfusion and subsequent intestinal obstruction. It is important that clinicians be aware of this complication of Legionella infection, especially in patients with a history of previous abdominal surgery.
Statement of Ethics
Approval for the use of this case was obtained from the patient and the St. Vincent Institutional Review Board.
Times of power, knowledge and critique in the work of Foucault
While Michel Foucault is commonly considered as a thinker with a primary interest in space and spatiality, his use of temporal categories, tropes and metaphors has until recently been only partially reconstructed. Working through different phases of his writings and lectures, this paper argues that Foucault opened a complex and interesting – yet to be acknowledged – analytical perspective on historically dominant, but fundamentally contested forms of social time-regimes, which accounts especially for contingent ruptures, silent continuities and the power-structured contexts of their emergence. Elucidating conceptual tools designed towards the analysis of rationalities and practises of temporal government and approaching social time regimes along the axes of power, knowledge and subjectivity, the aim of this paper is twofold: on the one side, it tries to further contribute to a ‘temporal turn’ in Foucault studies; on the other, it attempts to develop a Foucauldian vocabulary of temporal analysis as an alternative or supplement to established approaches in the field of critical social time studies.
Recent debates in the inter-disciplinary field of social time studies have shown an increased reflexivity with regard to the recognition of a plurality of social times (Adam, 1990; Nowotny, 1992). Emphasizing the impossibility of subsuming the heterogeneity of times under a unitary notion of modern clock-time or History as a collective singular noun (see also Koselleck, 2006), these approaches try to make visible the variety of forms and meanings of social times. These insights were accompanied by a second important trend of situating temporal difference in a broader context of social relations of power (see Adam, 1990; Bastian, 2013; Hassan, 2012; Huebener, 2015; Hutchings, 2008; Nowotny, 1992; Sharma, 2014). Here, working with concepts of influence, power and oppression as analytical tools is of utmost importance, but these are often used in a rather elusive and/or metaphorical sense, abstaining from systematic concept work in order to lay the primary focus on empirical description.
Far from playing out one against the other, this paper tries to investigate how Foucault's work can be helpful and inspiring in finding and further developing concepts employed as analytical tools to investigate the historical emergence and transformation of powerful social time-regimes. Here, power is not so much understood as being anchored in a particular form, design or object of time, but rather is fundamentally connected to practices of temporalization and timing, conceived in the immanence of social relations of knowledge-power (Foucault, 1995, see also Elias, 1992;Hom, 2018). Therefore, not only could Foucault scholars profit from a dialogue with pluri-temporal perspectives emergent from the field of critical time studies, but these also could gain central insights by making use of Foucauldian vocabularies of power (see for example Odih, 1999). In this regard, Foucault's research on dominant temporal mechanisms of knowledge-power and his emphasis of the plural, heterogeneous and relational nature of social times are only some aspects of a yet to be acknowledged critical archive of temporal concepts, which have the potential to enrich theoretical reflection and empirical analysis in social time studies. By putting complex dynamics of time, power, knowledge and subjectivity centre stage, social scientists become aware not only of the discursive and relational nature of times in their manifold manifestations, but also of their embeddedness into forms of oppression, marginalisation and inequality. In the same way, making visible the contested nature of temporal forms, the focus shifts from the reification of one dominant time-logic towards bringing the diversity of social and political struggles constitutive for the social organization of time to the centre of attention. Furthermore, a 'temporal turn' 1 in Foucault studies could lead to an increased reflective awareness with regard to the use of concepts of time and temporality not only in Foucault's work, but as an extension of his archaeological and genealogical approach, in the field of the humanities more generally. Bringing time theoretical differentiations of Foucault's work into discussion may also help to open up new and important perspectives on his writings. In general, the focus of earlier investigations has often been limited to the reconstruction of changing concepts of history, while the variety of aspects of time and temporality were subsumed under a general trend towards process-thought in post-structuralist theory. Also, it is not enough to pay attention to the transformation of historical regimes of knowledge-power, but we must analyse the ways and means by which power intervenes into the social organization of time to secure an always fragile trade-off between forms of persistence and change. Here, not only has the exercise of power to be conceived as temporally structured in specific ways, but the social organization of time is pervaded by networks of knowledge-power through and through. Therefore, Foucault can help us finding conceptual tools to reflect and analyse issues of time, power and change in a philosophically compelling and empirically grounded way. Last but not least, a time-theoretical investigation of Foucault could also function as a hinge to connect a diversity of time-theoretical approaches inspired through post-structuralist thought emerging in the field of critical social and political theory, cultural studies, feminist-, queer-and post-colonial studies.
Nevertheless, what makes a time-theoretical investigation of Foucault's work difficult is that in order to avoid being consumed by a particular philosophy of time and history, he tried to blur all traces that would connect him to one single time-philosophical paradigm. 2 On a metatheoretical level, he largely tried to abstain from making specific time-theoretical assumptions, which go beyond a radical historization of knowledge and an emphasis of the plural, stratified, heterogeneous, eventful and relational nature of social times. In a nutshell, time for Foucault is both understood as a principle of social order and change, and he investigated historical variations of temporal forms and manifestations in different contexts and according to different research aims. Here, his task was not to integrate the totality of temporal forms and relations into one grand speculative narrative, but to multiply temporal concepts as analytical devices and to complicate our theoretical understanding in order to deal with time and change in a complex and non-reductive manner. Another main reason why time-theoretical aspects of Foucault's work have remained largely unexplored is related to his reception in the context of a 'spatial turn' in the humanities. In a series of interviews, Foucault diagnosed a historical dominance of concepts of time over space (2003c: 46, 725), which taken in isolation paved the way for many misunderstandings, so that time and space were often played out against one another in a very reductionist way. Therefore, and contrary to many commentators who tried to celebrate Foucault's work for a final revenge of space against time, his critical remarks aimed at making visible the dominance of historically specific concepts of time, like a generalized and continuous time of history and consciousness, or a standardized calendar of human memory and the state (see Foucault, 1994c: 571ff, 2002b). Foucault also did not think that concepts of time and space should be integrated into a notion of four-dimensional space-time; on the contrary, he opted for a pluralization and differentiation of both concepts. 3 Most importantly with regard to relations of time and space, Foucault strictly positioned himself against Bergson's demand for a pure intuition of time in a non-spatialized form (see Bergson, 2001). On the contrary, the spatialization of time is to a certain extent necessary if the realm of time, temporality and history is to be made accessible to an analysis of power and knowledge (Foucault, 1994c: 33). To assume that we could stick to a 'neutral', 'non-spatialized', or, as Foucault also called it, a 'sterile' conception of time (1994c: 29) is an illusion, because this would be to neglect its political nature and its embeddedness in forces of social relations.
Therefore, this paper aims towards a non-reductive and context-sensitive reading of concepts, tropes and metaphors of time and temporality in Foucault's work, hereby putting a special emphasis on aspects of power, knowledge and subjectivity. The multiplicity of temporal references in Foucault's writings can neither be reduced to the influence of one particular philosophical thinker or tradition of time-philosophy, nor can it be subsumed under a generalized notion of process or history. We also should not remain content with emphasizing the multiplicity and diversity of temporal concepts on an abstract level, but instead follow the multiple traces of their linguistic unfolding. Therefore, we should take the opportunity of the last and final publications and try to gain new perspectives on his work, in this case, by focussing on issues of time and temporality. Of course, this present paper can only be an attempt to sketch out some important nodes in a wide net of time-concepts, which have to be further explored in depth. There is still so much time to be found in Foucault's work.
Foucault's temporal manifolds
On a metatheoretical level, what I want to show through a detailed investigation of different phases of his work is that we find in Foucault a double strategy of approaching issues of time, temporality and history: first and foremost, he assumes that in time, we have to 'uncover a principle of manifold relations' (Foucault, 1994d: 222, my translation), not only connected to an order of sequence and succession, but also of simultaneity and copresence. There exist no temporal atoms making up the building blocks of history and becoming. Time is neither discrete nor continuous, but plural and relational. These multiple time-relations involve different rhythms, durations, speeds, strata, developments and periodizations, which are bound to discursive and bodily practises. As his concept of the 'event' makes clear (see Foucault, 1994c: 581), no time is just one time, because each form of time always gives expression to an intersection of multiple times. Second, and also of fundamental importance, time is assumed to be fundamentally related to the exercise of power. Here, from his early 'archaeological' writings, Foucault was interested in the historically specific ways temporal regimes emerged, were stabilized, integrated and totalized through historically specific orders of science and knowledge. These formations of knowledge stretch from the micro-level of everyday conduct to specialized scientific and philosophical discourse, and are intrinsically connected to social relations of power. This intersection of knowledge and power makes possible the government of conduct through rationalities and technologies of time, which materialize in the context of powerful 'dispositives', like 'discipline' or 'security'. In this context, social formations of knowledge-power are fundamentally related to processes of temporal subjectivation, which, by way of ritualization, habitualization and internalization of temporal norms, make the individual the 'principle of his own subjection' (Foucault, 1995: 203). In the later works, time becomes an important element in his investigation of ancient literature on techniques of the self, involving specialized meditations and practises directed towards establishing a certain ethos towards time and temporality. Therefore, time in the work of Foucault is not only pluralized but understood in its fundamental connection to knowledge, power and processes of subjectivation. In sum, and against those early commentators who assumed that Foucault wanted to negate history and dissolve time into a set of spatial arrangements, his work can be described as fundamentally directed to the task of an alternative way of 'writing history' and of 'living time differently' (Foucault, 2003c: 729, 984). Possibilities of resistance against dominant articulations of time, power and knowledge find inspiration in his definition of critique as the refusal to 'be governed like that' 4 (also, but not only) by and through time (Foucault, 1997a: 26). His historical investigations are not done in a temper of nostalgia or in an attempt to do justice to the past, but rather in order to interpret and explain 'who we are today' (Foucault, 1997a: 147f). All his work, beginning from the archaeological, through the genealogical to the ethical period, is related to this one question, fundamentally inspired through Nietzsche and Kant: 'What are we? In a very precise moment of history' (Foucault, 1982: 785), and how can we extend or even transgress the historical limits which fundamentally structure our present temporal being (Foucault, 1997a: 127f), thereby also integrating a perspective on time and change.
What Foucault presents us with are historical investigations of dominant regimes of time, temporality and historicity, which are contextualized in a field of social relations of knowledge-power. Over the course of his work, he had to re-evaluate core questions, aims and methods of his philosophical and historical investigations. Building on Foucault's own retrospective interpretation, I will work with the periodization of his work into a phase of 'archaeology' (from the 60s to early 70s), 'genealogy' (until the late 70s) and 'ethics' (80s), because each of these transitions also went in hand with important shifts of his time-theoretical core assumptions. 5
Archaeology: The temporal inside and outside of historical regimes of knowledge
The 'archaeological' writings are fundamentally concerned with time and history, both on a philosophical and on an empirically-descriptive level (Foucault, 1988, 2002a, 2002b). Foucault first finds in structuralism an ally in his attempt of writing history differently, while later he reacts to criticism - mainly coming from French Existentialists and Marxists - accusing him of a neglect of history and time with the attempt of temporalizing structuralism from within (Gutting, 1989; Kusch, 1991; Michon, 2002). Hereby, he not only tries to think time and history as plural and heterogeneous along multiple rhythms, durations and systems of reference, but also to move beyond a unitary and reductive notion of change and transformation (Foucault, 2002b). In this regard, Foucault uses spatial metaphors not only to investigate the plurality, relationality and complexity of social times, but also to make visible forms of temporal selectivity and exclusion.
Inspired by structuralism, The Order of Things (2002a, published in 1966) on a methodological level attempts to give priority to a set of copresent structural relations over historical becoming, hereby bracketing aspects of causality in favour of logical and relational thinking (Foucault, 1994a: 821). Approaching history of science through his 'archaeological method', Foucault wanted to know, which fundamental epistemological shifts were necessary to make this kind of empirical knowledge at a given moment in time possible (see Foucault, 2002a: 35). Therefore, in order to reconstruct the radical transformations of the foundational rules of historical regimes of empirical knowledge (so called epistemes), Foucault combined a historical investigation of scientific knowledge in the fields of 'life', 'work' and 'language' since the 16th century with a reflection on philosophical positions, which accompanied these discourses and made them intelligible. According to Foucault, these radical shifts in the 'positive unconscious of knowledge' ( 2002a: xi) went not only in hand with fundamental changes in the structuration and organization of empirical science and philosophy, but also with important transformations with regard to the dominant epistemic schemes of time and history. Beginning with 16th century Renaissance, Foucault describes a historical order of knowledge structured and articulated through categories of resemblance. It is a world that, although already including forms of logic and scientific rationality, is still charged with magical spirit, where everything exists in a state of reciprocal resonance, the whole being implicated in its parts and where is no division between words and things. It gives expression to a relation of 'anteriority', so that what can be discovered was already there, referring to a timeless order and eternal truth that needs to be deciphered and interpreted through 'divination' (Foucault, 2002a: 65). This changes radically during the 17th and 18th century, when western societies enter the 'classic age', fundamentally influenced by 'rationalist' thinkers like Descartes and Newton. Thinking in terms of resemblance will now be devalued as unscientific and a new logic of representation is established to give non-formal sciences a strict and robust fundament conceived after the ideal of formalized mathesis, aiming towards the establishment of taxonomies based on a universal method of comparison to structure empirical facts according to a logical division between identity and difference. No longer anchored on the surface of nature, systems of signs become analytical instruments related to an 'act of knowing' (Foucault, 2002a: 65), hereby also introducing elements of chance and probability. Although Foucault characterized the classical episteme as a form of spatial thinking, time and chronology were also fundamental, but only in a 'tamed' form of causalities, successions and sequences, which could be ordered into uniform and strictly simultaneous tableaus. 6 Foucault also called it a 'classified time' that imagined progress as a form of 'squared and spatialized development' ( 2002a: 144) based on an a priori spatial continuum and a fixed hierarchy of sequences. The classical episteme changed radically with the emergence of 19th century modern episteme, opening up the spatial grid of representational thought for radical becoming by introducing notions of time and history as enabling condition of empirical statements prior to any fixed continuum. 
History therefore does not enter the realm of empirical knowledge in 'a probable form of succession', but rather as their 'fundamental mode of being' (Foucault, 2002a: 300). Foucault's treatment of time and history in the modern episteme, besides being fundamentally influenced by Heidegger, shares certain characteristics with Koselleck's (2006) investigations of the emergence of History as a collective singular noun, different to the prior meaning of histories in plural. But on a closer inspection, Foucault's argument is different, pointing towards contradictory tendencies of conceptual unification and pluralisation constitutive for the emergence of modern notions of time and history. Furthermore, what makes Foucault's argumentation unique is that he situates the 19th century turn towards radical becoming in the context of the historical emergence of the figure of the human, together constituting what he calls a thought of 'anthropological finitude' (2002a: 283). 7 Therefore, while 19th century thought was so engaged in making an end to metaphysics, it reintroduced metaphysical notions of the human and history. The transformation of the classical to the modern episteme is quite paradoxical: on the one hand, time is freed from the hierarchical and classificatory logic of representations characteristic of 17th and 18th century rationalism, while on the other hand it introduces a new form of 'temporal immobility' 8 which becomes conceivable only through the powerful articulation of history and the human. This also brings with it a new understanding of the 'origin': while for the classical episteme (and also for Renaissance), the 'origin' was situated outside time; in modern thought, beginnings will always be already mediated through History. The reason for this lies in a double movement related to the modern emergence of the human as a 'strange empirical-transcendental doublet' (Foucault, 2002a: 347), which brings into view the uncontrollable manifold of non-human times that prefigure and enable empirical human existence, while at the same time situating the former in a transcendental horizon, which unfolds from human vision. Foucault therefore identifies a tendency towards both identity and difference at the root of modern-time thinking, which in the end are nonetheless both placed under the umbrella of the 'same', grounded in a metaphysical notion of history and the human.
While the order of things attempted to investigate the transformation of discursive knowledge structures from the positive side of 'order' and the 'same', Madness and Civilisation 9 (published 1961) was concerned with 'otherness', understood as a historical-philosophical investigation of a series of foundational exclusions, enabling the existence of modern socio-temporal order in its positivity (see Foucault 1994a: 498). Against Hegel's understanding of history as Reason's development in the consciousness of freedom and Descartes' attempt to found a rational order of truth in the cogito, Foucault identified a fundamental split between reason and madness at the origins of western philosophy of time and progress, leading to an exclusion of madness, conceived as the 'other' of time. Madness is defined as the 'absence of an oeuvre' (Foucault, 2006c: xi) and Foucault investigates how it was historically associated with an idea of 'unproductivity', a 'merely fallen time': 'the poor presumption of a passage refused by the future, a thing in becoming which is irreparably less than history' (Foucault, 1988: xxxi). In opposition to this, Foucault introduces Nietzsche's figure of the 'tragic', which -not unsimilar to madnessbrings to expression elements of the 'forgotten' and 'expelled' from Reason's march towards historical progress (1988: xii). It becomes a form of 'counter-memory' that is no longer associated with temporal movement, but leads to a radical 'immobilisation of history'. Situated at 'the point at which history freezes' (Foucault, 1988: xxxiv), and therefore logically prior to the original split between madness and reason, the figure of the tragic makes visible the constitution of the 'other' of time: 'That will allow that lightning flash decision to appear once more, heterogeneous with the time of history, but ungraspable outside it, which separates the murmur of dark insects from the language of reason and the promises of time (Foucault, 1988: xxxiii). Not unsimilar to Johannes Fabian's writings on Time and the Other, who refers to Foucault as a major influence on his thinking (Fabian, 1983: xiii), modern conceptions of time and history, through their intrinsic connection to a reasoning subject, become a powerful dispositive of social division, where historical becoming is bound to conscious human action, leaving the mad in a realm less than time and history, because they are unable to produce social values and to contribute to cultural and economic progress.
The Archaeology of Knowledge (2002b, first published in 1969) has often been interpreted as a 'Discours de la méthode' to Foucault's previous works. 10 But to assume that this book was just the attempt of a retrospective reconstruction of a method already operative in The Order of Things would be to neglect his fundamental time-theoretical innovations. Rather, it has to be understood as a work that stands in between two phases, where Foucault first was flirting with structuralism and later tried to overcome its methodological flaws by extending his approach to the realm of knowledge-power and by introducing plural scales of temporal series, enabling him to theorize simultaneous and non-simultaneous series of events forming the structure of discursive formations. Here, Foucault argued against a privileging of long periods of continuity, linear sequences and irreversible processes (especially in the field of 'history of ideas') and emphasized the relevance of thresholds, breaks, cuts and ruptures, which make possible an analysis in terms of discontinuities. According to him, the persistence of categories of continuity like tradition, Zeitgeist, worldview and collective memory makes the appearance as 'if we were afraid to conceive of the Other in the time of our own thought' (Foucault, 2002b: 13). Although Foucault mobilized an army of concepts associated with 'discontinuity' and 'rupture', on a closer inspection, he is far away from playing out the discontinuous against the continuous (see Kusch, 1991: 83ff). Rather, he tries to introduce concepts able to re-evaluate the persistent and immobile in light of series of discursive events. This is also the reason why he - at least for this moment - prefers the notion of 'transformation' over 'becoming' (Foucault, 2001b: 864), since he wants to remove the continuous from the level of the transcendental. But hereby, he does not eliminate aspects of persistence, order and regularity from view, the opposite is the case. Even his concept of 'transformation' is conceived on the basis of regularities of series of events, and it is pluralised, so that we have to investigate these shifts on multiple levels and in terms of their own time(s).
Inspired by Nietzsche and Heidegger, the concept of the event is important throughout Foucault's whole work, but this should not mislead us to assume that its meaning did not change. During the 'archaeological' phase, the event is introduced to describe discursive formations, which are defined as 'scheme of correspondence between several temporal series' (Foucault, 2002b: 74). Each temporal series is built of multiple events, which refer to linguistic statements. These events are not 'immediately given' (see Kusch, 1991: 59): on a temporal level, they do not refer to a basal temporal building block, discrete in its time-being, or to any kind of measurable 'timespan', but to an 'intersection between two different forms of persistence, two speeds, two developments, two historical lines' (Foucault, 1994c: 581, my translation). Therefore, there is no event that is just one time, because each event always gives expression to an intersection of multiple times. This idea has similarities with 'relational' theories of time that we find in Leibniz or Elias, who defines time as a 'symbol' for the synchronisation of different flows of events (Elias, 1992). But time for Elias is fundamentally connected to the human, enabling the operation of 'temporal syntheses' or 'timing', whereas for Foucault, discursive practises cultivate their own rhythms that follow no pre-established temporal continuum: the time of discourse 'is not your time' (2002b: 232); it is 'not the translation, in a visible chronology, of the obscure time of thought' (2002b: 138).
Genealogy: 'Counter-memory' in becoming
Since the mid 70s, there have been numerous attempts to reconstruct Foucault's methodological turn from 'archaeology' to 'genealogy' (and later to ethics). 11 In its most general form, 'genealogy' represents the methodological extension of discourse analysis to an investigation of dispositives of knowledge-power, which structure behaviour and self-understanding of social subjects according to a functional logic, investing into bodily forces for the sake of economic profit, while repressing political agency (see Foucault, 1995). Power is here not conceived as a possession, or substance, but as a relation, a technology and a strategy, not as centralized in the state but dispersed throughout the social body. It is tightly interwoven with forms of knowledge, both standing in a relation of mutual support while not being reducible to another. Therefore, power is not only negative and constraining, but also positive, productive and enabling, it is not reduced to law and repression but is connected to forms of normalization (Foucault, 1978(Foucault, : 92f, 1995. History, on the other side, seen from the standpoint of genealogy, is a permanent state of war, a steady confrontation of social forces, and politics is only the 'continuation of war by other means' (Foucault, 2003a: 15). In this radical force-field, 'official' state-history is written from the standpoint of the victors, while the fragmented voices of the defeated are buried underneath liberal institutions of social justice. Genealogy therefore re-constructs 'a counter-memory -a transformation of history into a totally different form of time' (Foucault, 1998: 385) and reintroduces 'into the realm of becoming everything considered immortal in man' (Foucault, 1998: 379). It makes visible power-relations, reconstructs their historical emergence and shows how what is considered as necessary in the present is the contingent result of social struggles (see Foucault, 1997aFoucault, : 119ff, 1998. At the same time, genealogy is centred around the body, to make visible the past and present forces that cut through it. On a time-theoretical level, we are confronted with important shifts: the most significant change is (a) the re-introduction of a notion of becoming, which was formerly associated with apriori continuism, and (b) the reinterpretation of the event as a transformation of a force-relation. Therefore, what the 'series' was in relation to the 'archaeological event', 'becoming' is to the 'genealogical event'. This is a necessary move, because with his new key-focus on the transformation of force-relations, he puts a notion of radical contingency into the heart of his approach, which before was mediated through a notion of regularity, order and lawfulness associated with the operation of forming series of discursive events. Now, what seems regular and orderly in its appearance is deconstructed as a contingent result of social struggles, which leaves no room for stability and persistence beyond the permanence of war and its effects. Therefore, the notion of becoming is introduced in an attempt to 'immobilise' time and history anew. Indeed, the re-introduction of 'becoming', which was dismissed before in favour of the notion of transformation, neither has to be conceived along the lines of a Newtonian 'absolute time' nor as any other form of apriori continuism. In its strictly methodological focus, it is designed to put the emphasis on contingent continuities in the form of historical pathways of events connecting a past and a present. 
12 And indeed, this is exactly the core time-theoretical innovation of Foucault's attempt to rethink history, which connects 'archaeology' and 'genealogy': bringing a thinking of slow, almost immobile, historical processes typical for the Annales together with Nietzschean thinking of the event and therefore finding not only an innovative way to re-think relations of continuity and change, but also to focus on the mutual imbrication of micro-and macro-structural dynamics of social time-regimes.
Foucault will not remain satisfied with the Nietzschean 'force-ontological' foundation of his approach. The 'war-hypotheses' will be questioned, historicized and finally rejected for being reductionist (Foucault, 2003b; see also Lemke, 2019: 133f). 13 The most important shift of the late 70s and early 80s, besides his substitution of 'knowledge' by 'truth', is the introduction of the concept of 'government' (Foucault, 2007). In a first step, Foucault further develops his notion of power: by conceiving power no longer just in dynamic-structural terms, 14 but from the side of its exercise, the actuality of a dynamic force relation is emphasized, which 'exists only when it is put into action, even if, of course, it is integrated into a disparate field of possibilities brought to bear upon permanent structures' (Foucault, 1982: 788). Foucault defines power as 'a mode of action which does not act directly and immediately on others. Instead it acts upon their actions: an action upon an action, on existing actions or on those which may arise in the present or the future ' ( 1982: 789). To exercise power is hereby not reduced to direct influence, but also understood as indirectly structuring the setting of the social context, in which actions take place. Foucault therefore investigates power-relations not as isolated dyads, but rather he develops a 'field theory of power' (Wartenberg, 1990 : 8; see also Kusch, 1991: 108), which is structural, dynamic and temporally tensed. Building on these assumptions, Foucault introduces the notion of government, understood as 'conduct of conduct.' 'For "conduct" is at the same time to "lead" others . . . and a way of behaving within a more or less open field possibilities' (Foucault, 1982: 789).
In his 'governmentality lectures' (held from 1977 to 1979 at the Collège de France), Foucault was interested in the historical ways that temporal schemes have been inscribed into rationalities and technologies of government, being fundamental for correlative processes of western state- and subject-formation. Therefore, he sketches historical trajectories of different rationalities of government, beginning with pastoral power, proceeding with raison d'état and the police state, and finally describing the emergence of liberalism and neo-liberalism as specific forms of state-projects, which all go in hand with the employment of specific temporal schemes. Emerging with Christianity and being understood as a 'prelude' to governmentality, pastoral power is not yet oriented towards a fixed territory but characterized by a permanent concern of the shepherd for the guidance of the individual and the survival of the flock (Foucault, 2007: 125). It introduces the interiority of the soul by individualizing the promise of salvation and creating an inner truth, which also functions as a form of submission. Here, the arrow of time points to an eschatological future in which the end of time coincides with a form of eternal truth. According to Foucault, at the end of feudalism, a 'crisis' of pastoral power occurs through which the 'pastoral of the soul' is transformed into the 'political government of men' (Foucault, 2007: 227) and the question 'how to guide oneself in the best way' spreads to the entire realm of temporal life. Therefore, we see the advancement of 'raison d'état,' which establishes a new time-consciousness at the level of history, no longer directed towards the 'end of time' but opening up 'onto an indefinite time in which states have to struggle against each other to ensure their own survival' (Foucault, 2007: 365). Now it is the state that stands for an 'immobile condition' in that it is fundamentally conservative and protective, securing durée and continuity in the form of an 'indefinite governmentality with no foreseeable term or final aim' (Foucault, 2007: 260). Out of 'raison d'état' emerges the 'police state' as a new governmental rationality directed 'to the set of means by which the state's forces can be increased while preserving the state in good order' (Foucault, 2007: 313). With the help of statistics and police science, it tries to gain knowledge about the population for the sake of the development of state forces. Foucault illustrates the temporal logic of the 'police' as a circle that begins with a political intervention that leads through the lives of individuals to increase the state's forces (2007: 327). However, in contrast to law's concern for things that are permanent and definitive, the 'police state' watches out for the little 'things of each moment,' is concerned 'with the details' and always needs to act 'promptly and immediately' (Foucault, 2007: 340). Finally, through the transition from police to liberalism, the new focus becomes the question how state power can be constricted from within, not in order to abolish it, but rather as an 'internal refinement,' which, still in the tradition of 'raison d'état,' aims at 'maintaining [the state], developing it more fully, and perfecting it' (Foucault, 2008: 28). The rationality of this new liberal practise of government is tightly connected to the emergence of the 'market' as a new principle of truth determining the measure and value of things.
Unlike ancient societies, as we will see shortly, liberalism no longer looks out for kairos: the right moment and due measure of political decision. Instead, the 'effect[s] of time' themselves (Foucault, 2007: 22) become the basis for a new liberal technology of power that puts securing the conditions of individual freedom centre stage.
Foucault's genealogy of the modern state overlaps with his description of three historical types of power, which under (neo)liberal conditions intersect in complex ways: sovereignty, discipline and security, 15 the latter two 'dispositives' together constituting what Foucault called 'biopower.' The concept of 'dispositive' in Foucault's work is highly complex, but its purpose is to integrate and materialize rationalities and technologies of government as forms of knowledge-power into specific apparatuses. The notion of 'apparatus' is here used synonymously with the notion of 'dispositive,' referring to a relational ensemble of heterogeneous elements, involving discursive and non-discursive aspects, answering to an urgency in society, and - as I will show - being fundamentally concerned with the social organization of time and temporality. Like Barbara Adam's notion of 'timescapes,' dispositives of time make visible the relational nature of social time(s), but in a way that contextualises it in a broader field of social relations of knowledge-power. Indeed, Adam describes the emergence of the timescapes of modernity along powerful, sequential and additive processes of appropriation, commodification, control and colonialization, but in this way homogenizes on a diachronic plane what she pluralised synchronically, giving voice to an all-embracing force of rationalization which connects the different spheres of politics, science and economy (Adam, 2004). Contrary to this, Foucault not only tries to pluralize 'rationalities' of time-government both on a diachronic and a synchronic level, but to bind them intrinsically to a network of changing regimes of power, knowledge and subjectivation. Therefore, what we find in Foucault are not so much 'timescapes,' putting the focus on temporal difference, but intersecting 'dispositives of time,' which historically emerged in different contexts and were designed by social actors to exercise power over one another.
Associated with monarchy, law and repression, sovereign power is not only oriented towards the spatial matrix of a territory, but also enfolds and extends in and over time. It holds a permanent dialogue with the past, since it bears the characteristic traits of a 'founding precedence' (Foucault, 2006a: 43), that is, an event 'before time,' like a mythical origin from which it receives a sacral status of divine right. Sovereignty here obviously shares certain temporal characteristics with the 'Renaissance episteme.' Structured by an intersection of sacral and profane times, sovereignty refers not only to a single 'origin' but also to the temporal choreography of rites of reinstatement, where the past is actualized in the present so that the signs of power can be renewed. As an act of excessive expenditure, sovereign power on special occasions returns, in cyclically recurring form, some part of what it had first deducted from its subjects - their products, their harvest, their labour and their times. Against this subtractive and therefore negative access to the time of life of the subjects, which for Foucault implies a power over death, he goes on to differentiate the positive form of biopower. However, in temporal terms, Foucault did not limit sovereignty to its 'negative powers,' because it is also fundamentally concerned with the establishment of a unifying form of historiography in the name of the state: 'a historical narrative whose function was to recount the sovereign's past, to re-actualize the past of sovereignty in order to reinforce power' (Foucault, 2015: 239). For the sovereign, the interplay of these temporal mechanisms serves as an antidote to its inner fear of becoming caught in an eternal circle of emergence, ascent and decline, lending it instead the appearance of an endless duration. In its embeddedness in the realm of the juridical, the time of the sovereign appears to be somehow 'time-less' and 'natural,' which covers up the historically contested processes through which local forms of temporal organization were codified through law.
In the context of discipline, emerging from the 16th to the 19th century, while also building on the time-rituals of Christian monasteries, time is used primarily as an instrument of control and coordination of individual actions, bodily movements and social routines (Foucault, 1995, 2015). 16 The penal system, psychiatric institutions, the school, etc. take hold, at least for a certain amount of time, of the totality of individual times, including their aspirations, hopes and dreams. The schedule and the calendar (see also Zerubavel, 1985) here represent the primary instruments for the structuration of the rhythms of particular activities, the organization of frequencies of repetition and the alignment of actions to a hierarchy of goals. The central focus of discipline lies on the imperative of 'efficient time use,' which is put into action through the analytical division of time into small and smaller units and through the transmission of temporal schemes onto bodily activities. Discipline, through a form of temporal processing of individual actions, attempts to reach the point 'at which one maintained maximum speed and maximum efficiency' (Foucault, 1995: 154). In this context, discipline develops a focus on the smallest incidents and events by trying to anticipate and prevent their actualization in advance. Therefore, discipline must work not only on actual but also on 'potential behaviour' (Foucault, 2006: 51). Because discipline wants to pre-form the very potential of the individual, it projects a 'soul' behind the body, which, as a bearer of future potentials, stands open towards endless processes of transformation. Therefore, and unlike sovereignty, which recurs to the image of an unthinkable past and the eternity of law, discipline, especially in the context of penal institutions, refers to the future to prevent the repetition of a particular offence or crime and initiates and constantly evaluates a process of reform on the side of the individual. Along the utopian scheme of 'panoptism,' discipline 'looks forward to the future, towards the moment when it will keep going by itself and only a virtual supervision will be required, when discipline, consequently, will have become habit' (Foucault, 2006: 47). Nonetheless, the actual object of the exercise of power in discipline is always the physical presence of a subjected body on which it leaves traces to structure an endless series of rule-following. In analogy to his time-theoretical descriptions in The Order of Things, discipline works within a timeframe characteristic of the classic age, based on a highly rationalized, rigid grid, serving as a standard system of reference for the control of bodily movements on a demarcated spatial territory. Furthermore, we see the establishment of 'time as measure, and not only as economic measure in the capitalist system, but also as moral measure' (Foucault, 2015: 83; see also Harcourt, 2015), aiming towards a normalisation of particular social groups and individuals. Foucault illustrates this with industrial societies' urge to fight forms of 'temporal irregularity' on the side of the working class - i.e. absenteeism, belatedness, nomadism, debauchery - through time-discipline, by framing them as hostile acts against society as a whole.
Against this, punishment fulfils several functions: on the one hand, it constitutes a form of compensation on behalf of society (Foucault, 2015: 7f); on the other, it works as a preventive deterrent so that new enemies of society do not emerge in the first place (Foucault, 2015: 177); finally, it installs a procedure of 'recovery' for guilty criminals, which for Foucault gives expression to a silent infiltration of legal practices by Christian morals from 'below' (Foucault, 2015: 106). Therefore, punishment aims at a process of inner transformation, which constitutes a complex causality, involving both linear and cyclical dynamics that extend the present act of 'punishment' through a permanent process of 'surveillance': 'Punishment is not just an act that is carried out, it is an unfolding process whose effects on the person who is its object must be monitored' (Foucault, 2015: 91). What constitutes the outline of 'an abstract, monotonous, rigid punitive system' (Foucault, 2015: 70) is focused in 'the last instance' on exactly one variable: time. 'Just as the wage rewards the time for which labour-power has been purchased from someone, the penalty corresponds to the infraction, not in terms of reparation or exact adjustment, but in terms of quantity of time of liberty' (Foucault, 2015: 70). Therefore, the sovereign's deduction of goods and resources no longer stands in the foreground; instead, it is the life and times of individual bodies that move to the centre of attention, making visible 'the relationship of the time of life to political power: that repression of time and repression through time, that kind of continuity between workshop clock, production line stopwatch, and prison calendar' (Foucault, 2015: 72). Because disciplinary time-norms have to be inscribed into the body, the primary objective becomes the shaping of habits. Contrary to the juridical contract, which binds individuals to their property, it is habit that makes it possible to fix those individuals who own no property to the apparatus of production (Foucault, 2015: 239).
While in 'disciplinary society' power tries to take hold of the totality of individual times, the temporal technologies of security apparatuses - in line with governmental rationalities of liberalism - focus on letting times pass. Making use of statistics and probability calculus, they register the frequency of events and project possible future scenarios (see Foucault, 2007: 37ff). Their focus lies on the aleatory, the eventful and the contingent, not as a radical cut, but again in the context of the processual, so that their main objective is the administration and regulation of an open series of events, which are part of a wider reality that is in flux. Security does not care about the total mastery of the life and times of individuals; instead, it attempts to influence the milieu of population development. Contrary to discipline, security does not integrate the events centripetally into the direction of a panoptic centre but takes a step back to get an overall view on social tendencies from a distance, lets things develop according to their own movement (Foucault, 2007: 44), or attempts to put different dynamics in relation so that they mutually amplify one another, lower one another or move in a totally different direction (see Foucault, 2007: 37). The series of events hereby involved do not follow a universal principle of sequence but create temporal order in the medium of time. Liberalism installs an imperative to take seriously all things and events in their own temporal movement, to recalibrate along its flow and to pull down all forms of rigid barriers. Temporal order, it seems, is freed from any external measure and is determined only according to the inner dynamic of movement itself, making only punctual forms of intervention necessary to channel flows and regulate peaks and troughs. However, even in this vision of a more flexible regulation, liberalism has not fully left behind all forms of 'temporal immobility.' Here, it is especially a form of temporal fixation that raison d'état had so firmly established in putting the focus on 'keeping balance,' which liberalism interprets as a compensation between applying a range of regulatory forms and letting things move their way. Security therefore gives up the hope of one day being able to control the totality of events in a comprehensive way. Rather, the future now appears as radically open. This was already the case in the context of discipline, but the latter processed time under artificial laboratory conditions and imagined a future point of absolute control without any controlling subject. Security, in contrast, turns the utopian time-image of modernity into its norm: we have to deal with the new each and every moment (Habermas, 1987; Koselleck, 2006). Therefore, comparable to the transition from the classic age to modernity, we are witnessing the transformation from an 'ideal-model' of time-government to a quasi-naturalistic 'real-time model', which is oriented towards a reality that is in permanent transition. The dominant timeframe of security could be termed a 'serial-aleatoric evolutionism' that is oriented primarily to present- and near-future-related processes and understands regulation as a flexible form of time- and space-political context-shaping. 17
Finally, in his lectures on 'The birth of biopolitics' from 1978-1979, Foucault examines the transformation of a liberal art of governing into a neo-liberal form that structures social relations according to the principle of the enterprise, hereby involving both aspects of discipline and security (see also Bargetz et al., 2015). The neo-liberal subject must be understood as 'eminently governable' (Foucault, 2008: 270), also because she/he learns to cultivate specific attitudes towards time that make her/him invest, calculate and plan for an uncertain future. In this context, the dismantling of the welfare state and social security systems supports a radical displacement of risk and responsibility into the realm of individual self-management, which creates the imperative to invest in human capital. Therefore, the subjects are constantly urged to care for the future, making their limited life-time into an object fraught with risk and constant concern, which had better be managed with precaution. In this regard, the subjects still project themselves into the future as an integrated unity at the end of time, but they no longer move upwards on an imagined universal ladder of life. Instead, they are confronted with a modular structure of interconnected tasks that secure continual development without any prospect of an end before death (see Bröckling, 2013, 2017). Therefore, in the context of imperatives of economic increase, the time of one's life has to be interpreted - in case of doubt, with the assistance of professional guidance - as a coherent, reflexive project that has to be continuously optimized (see Binkley, 2009).
Ethics: Time and temporality in ancient technologies of the self
While 'ethics' is often understood as a sub-field of 'practical philosophy' directed to moral reasoning and acting, what interests Foucault in his later writings is how ethical codes in western history have shaped the subject's self-understanding in such a way that he or she is provided with the means to govern him- or herself as the precondition to govern others (Foucault, 2005b). Ethics is therefore not uncoupled from 'genealogy,' but rather has to be understood as its extension to the field of 'technologies of self' in order to write a 'history of the modern subject', 18 beginning with an investigation of ancient Greek, Roman and early Christian self-culture 19 (see Foucault, 1985, 1986a, 2019). His investigations show surprising commonalities with regard to the structures and objectives of ethical concern; yet while the Greek and Hellenistic culture of the self was characterised by its aim of bringing the individual into a state of autonomy, Christianity tendentially shifted the focus towards a culture of submission and self-renunciation (Foucault, 1993).
Foucault's historical investigations into ancient 'self-cultures' are again fundamentally concerned with aspects of time and temporality. As a reader of Heidegger and Husserl, Foucault is aware that the historical forms of 'self-relations' he wants to investigate are fundamentally temporally structured. But against the assumption that these could be understood as generalized conditions of being qua individual Dasein, Foucault searches for the concrete ethical codes, guidelines, spiritual exercises and practices which made particular temporal self-relations, including the experience of being present, as well as forms of retro- and prospection, possible. When one investigates these ethical codes, it becomes clear that there is no time to lose for the fundamental task of constantly examining one's conscience and transforming oneself according to ethical standards (Foucault, 1985, 1986a, 2005b, 2019). This also takes time, which must therefore be structured well and in line with the right moment and due measure. Constituting the ethical basis for the government of others, technologies of the self fundamentally build on the cultivation of an attitude of attentiveness and self-presence. This, in turn, requires guidance and rules, which can only be learned on the basis of the constant exercise of memory. Against Plato's anamnesis doctrine, the Stoics will focus extensively on memory training exercises, no longer directed towards a spiritual realm of eternal ideas, but on practical moral lessons given from teacher to student, aiming at the strengthening of the individual's ability to cope with unforeseen events (Foucault, 2005b: 460). Even expectation, for example in the form of a meditative anticipation of possible dangers which can happen during the day (praemeditatio malorum) or the working through of the ever-present possibility of one's death (melete thanatou), has to be trained through continuous exercising, not so much to actively shape the future 20 but rather to tame a surplus of contingencies in the context of present action (Foucault, 2005b: 477). In addition to the description of ways of exercising and cultivating quasi-existential temporal horizons, on a more instrumental level, the core temporal technology is located at the intersection of kairos and phronesis: establishing a form of practical wisdom with regard to the determination of the 'right moment' on different but interrelated temporal scales (the hours of the day, the seasons of the year and the phases of a lifetime). In every regard, it is imperative to observe and comply with differences of times for each practice, and to keep due measure, always finding a balance between too little and too much. Therefore, developing an ethical attitude in the context of constituting oneself as a subject of desire requires calendars and schedules, which have to be handled not with rigid accuracy, but rather need a 'flexible interpretation,' building on reflexion and practical wisdom, always situating a rule for right action in the context of changing circumstances. 'So it was not a question of determining the "working days" of sexual pleasures, uniformly and for everyone, but of how best to calculate the opportune times and the appropriate frequencies' (Foucault, 1985: 116).
Through his investigation of early Christianity, Foucault continues his previous work on the function of historical practices of confession in the context of the emergence of a modern subject of truth and desire. 21 He defines the confession as 'a verbal act through which the subject affirms who he is, binds himself to this truth, places himself in a relationship of dependence with regard to another, and modifies at the same time his relationship to himself' (Foucault, 2014b: 17). Already his investigation of pastoral power as a 'technique for the government of souls' (Foucault, 2003a: 177) had shown that Christianity deployed several temporal techniques for the control of individual life-times, including the guidance of everyday conduct, practices of self-mastery and rituals of maintaining a good and a clear conscience through forms of truth-speaking. Foucault shows how Christian pastoral power tried to get a grip on the totality of the individual's lifetime, by integrating him/her into recurring practices of confessing the truth about his/her self. Since the end of the world and final redemption did not take place, the church had to deal with the possibility of suffering relapse into sin as a permanent feature of this world (Foucault, 2014a: 93ff). Baptism and penance provided only limited success in gaining continuous control over the individual; therefore, the church institutionalized permanent rituals of shrift in order to make confession a ritualized practice, which integrated the will to constantly tell the truth about oneself into a fixed relation of subordination and control. Because the danger of relapsing into sin was always given, the confession involved not only the search for traces of sin in the past and the present, but also in the future. Nonetheless, the ritualized examination of one's conscience is fundamentally directed towards 'a present that is experienced as a "state" [fr. état, dt. Zustand]' (Foucault, 2018, 2019). What emerged with early Christianity was the 'flesh' as a 'form of experience,' involving both 'self-presence' and a 'mode of self-transformation' directed towards 'saying the truth' and 'abolishing evil' (Foucault, 2019: 76f, my translation). Even if these descriptions are not conclusive, it seems that Foucault wanted to tell us that the self-practices he described were fundamental for our way of historically constituting ourselves as temporal subjects, without turning these temporal technologies into timeless categories, but making them understandable in a historical context of social relations of knowledge-power.
Conclusion
In this paper, I tried to show that Foucault, over different phases of his work, was deeply engaged with issues of time, temporality and history, while putting a special focus on aspects of knowledge, power and forms of subjectivation. Apart from a restless search for strategies to deal with notions of time and history in a complex and non-reductive way, he described historically dominant regimes of time-knowledge, temporal schemes of government, powerful time-dispositives and time-norms, which shape bodies, affects and identities, produce habits and therefore fundamentally direct the conduct of conduct. In his investigation of historical ways to conceptualize and rationalize time, temporality and history, he constantly looked out for strategies to live time differently (see Foucault, 2003c: 984). 22 As a self-titled 'philosophical journalist,' who was concerned with writing a 'history of the present' and with asking the 'diagnostic question' of what kind of difference a present introduces in distinction to a past, Foucault's primary concern was always the present (1994a: 665; 2001b: 848). He understood the latter not as a discrete point in time, but rather as an extended force-field, which structures and therefore both limits and enables our thinking and acting. Therefore, the present, as a 'system of actuality' (Foucault, 1994b: 259), is not understood as a stable ground; rather, it is exactly this present - 'our present' - which stands in question. Neither 'we' nor the 'present' are fixed terms, and we should not be led astray trying to turn Foucault's approach into a kind of 'presentism.' Foucault is not trying to write a history in terms of the present, but a history of the present (see Foucault, 1995: 31). This is also reflected in Foucault's genealogy of a critical ethos, understood as the art of not letting oneself be governed like that. 23 Inspired by ancient self-relations, Foucault seems to suggest that one part of this critical ethos, which aims towards a 'desubjugation of the subject' (Foucault, 1997a: 32), is turning one's life and times into a work of art. Leading us back to our initial investigation of 'madness' as an 'original exclusion' from history and time by defining it as the 'absence of an oeuvre,' Foucault now seems to understand turning the self-relation into a constant piece of work as an elementary part of an 'art of voluntary insubordination' (Foucault, 1997a: 32). Are these the same kinds of practices related to the production of an oeuvre that Foucault criticized earlier, to which he now suggests we should turn ourselves, our lives and our times? Of course not, because neither are 'we' nor are 'our times' the same. Turning one's time into an oeuvre is not thought along a teleonomic history of reason connected to a foundationalist rational subject, but builds on a multiplication and diversification of lives, times and histories. This may even enable us to think 'the Other in the time of our own times' (Foucault, 2002b: 13), but only under the proviso that this time would no longer be one, and therefore also no longer our own, but rather a manifold relation.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: funding received from the Austrian Academy of Sciences (ÖAW).
A study on information sharing behaviour on various categories of agricultural information received by the ICT tool user farmers
ICTs play a crucial role in disseminating timely farm information and aid in farm-management decisions. The study was conducted in 2017-18 in Shivamogga and Chikkamagaluru districts of Karnataka state. A simple random sampling technique was employed to select the 120 respondents, and the data were collected with the help of a structured interview schedule. It was found that more than half (60.00%) of the ICT tool user farmers reported that ICT use saved time in obtaining agricultural information. Further, 73.30 percent of WhatsApp users used fewer than two ICT sources of information, while 76.70 percent of the KMAS user farmers used 2-3 ICT sources to obtain agricultural information. An insight into the information sharing behaviour of ICT tool user farmers reveals that about 84.17 percent of the farmers regularly shared information with friends, while 70.80 percent of the respondents occasionally shared information with their family and peer groups.
Introduction
Information technology has witnessed major changes in recent years and is emerging as a powerful tool to accelerate agricultural growth in developing countries such as India. The IT sector has grown rapidly since the late 1980s, and the use of ICT has increased sharply since the 1990s. Smartphone penetration has risen over the last two decades, with 1.2 billion mobile phone users and 600 million smartphone users in India in 2022. The world population is expected to exceed 9 billion by 2050, and agricultural production will need to increase by 60 percent from its current levels to meet this additional food demand (Bansal, 2022) [1]. ICT applications in husbandry can make a significant contribution to meeting these future needs. With the increasing population, the available cultivable land is decreasing rapidly, which restricts the horizontal expansion of agricultural land (Ghosh, 2022) [4]. Hence, vertical expansion through the adoption of improved cultivation practices is the only way to meet demand. Thus, it is essential to reach farmers with authenticated and timely information on various aspects of farming. Traditional methods of reaching farmers delay the diffusion and adoption of technology. In this context, ICT tools play an important role by providing timely information on various aspects of agriculture. The time gap between information development and dissemination has significantly decreased because of technological advancements. Farmers are now better able to handle risks related to weather, technology, prices and many other factors. The appropriate use of ICT aids in overcoming hurdles related to time, place, language and illiteracy. As a result, ICT has become a key driver of the contemporary knowledge-based economy, supporting the nation's socioeconomic development. ICTs help farmers by keeping information up to date; moreover, useful, innovative, cost-effective and drudgery-reducing technologies spread quickly and keep the farming community well informed. Young farming groups increasingly use mobile phones, as they are an easy, fast and convenient means of communicating and obtaining appropriate solutions to their problems. Nowadays, the mobile phone has created an opportunity for farmers, especially to obtain information about marketing and weather.
With improvements in technology, farmers are directly in touch with market personnel and can trade at reasonable prices. The use of mobile phones also keeps them aware of weather forecasts and of agricultural input applications, such as fertilizers and pesticides, which might be affected by unforeseen disasters, as communicated by the meteorological department. Increased mobile usage, improved network connectivity and various platforms have given farmers new ways to communicate directly and share recent advances with each other (Fawole and Murty, 2012) [3]. Considering the use of mobile phones, various private organizations, the Government of India and ICAR have developed mobile applications for farmers to access information about agriculture. Hence, the current study aims to understand the information sharing behaviour of ICT tool user farmers and the various categories of agricultural information they receive.
Methodology
The study was conducted in Shivamogga and Chikkamagaluru districts of Karnataka State. In Shivamogga district, the WhatsApp group of the KSDA and the Kissan Call Centre (KCC) were selected. Similarly, the e-Krushika app and the KVK Kissan Mobile Agro-Advisory Services (KMAS) in Chikkamagaluru district were selected purposively. In each district, two taluks were selected, and in each taluk, two villages were selected within a minimum radius of 5 km and a maximum radius of 15 km from the taluk headquarters, with 15 farmers randomly selected from each village.
Thus, the total sample constituted 120 respondents. The data were collected using a pretested interview schedule. The responses were scored, classified, analyzed and tabulated with the help of frequency and percentage techniques (a brief illustrative sketch of this tabulation is given at the end of this section).
Selection of the population: The farmers using ICT tools in the Shivamogga and Chikkamagaluru districts constituted the population of the study.
Selection of respondents:
From each village, fifteen farmers were selected using a simple random sampling technique. Thus, 120 ICT tool user farmers were selected for the study.
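To make the sampling arithmetic and the frequency-and-percentage tabulation concrete, the following is a minimal sketch in Python. It is illustrative only: the response categories and counts are hypothetical placeholders, not the study's actual data, and the function name is an assumption introduced here.

```python
# Illustrative sketch only: hypothetical coded responses, not the study's data.
# It mirrors the frequency-and-percentage tabulation described in the methodology.
from collections import Counter

# Hypothetical coded responses from the 120 respondents
# (e.g., number of ICT sources used: "<2", "2-3", ">3").
responses = ["<2"] * 44 + ["2-3"] * 46 + [">3"] * 30

def frequency_percentage_table(values):
    """Return (category, frequency, percentage) rows for the given responses."""
    counts = Counter(values)
    total = len(values)
    return [(cat, n, round(100.0 * n / total, 2)) for cat, n in sorted(counts.items())]

if __name__ == "__main__":
    # Sampling design: 2 districts x 2 taluks x 2 villages x 15 farmers = 120
    sample_size = 2 * 2 * 2 * 15
    assert sample_size == len(responses) == 120

    for category, freq, pct in frequency_percentage_table(responses):
        print(f"{category:>4}  n={freq:<3}  {pct:.2f}%")
```

The same pattern, applied per ICT tool and per response category, yields the percentages reported in Tables 1-4.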
Results and Discussion
Time saving in obtaining agricultural information by the ICT tool user farmers
Information regarding the time saved in obtaining agricultural information is presented in Table 1 and Fig. 1. A majority (60.00 percent) of the farmers indicated that the time saved depends on the technology or agricultural information concerned. The possible reason may be that the latest technology takes more time to understand: technology already familiar to farmers does not take much time to understand, whereas unknown or new information takes more time to understand and adapt to their situations. Thus, farmers opined that the time saved depends on their previous exposure to the technology (Pavithra, 2018; Patel, 2023) [8,7].
Different ICT sources of information used by the farmers
The data in Table 2 and Fig. 2 present the different ICT sources used by the farmers. A majority (73.30 percent) of the WhatsApp users used fewer than two ICT sources. The reason might be that the WhatsApp user farmers were growing field crops that had been cultivated for generations and were satisfied with the information obtained on these crops; thus, they might not have felt the need to use more than two ICT tools. A majority (60.00%) of the e-Krushika app users and 88.33 percent of the KCC user farmers used more than three ICT sources.
The probable reason may be that the majority of these farmers were coffee and areca planters who also cultivated other crops such as banana, pepper and cocoa; because information on all these aspects was not available in a single tool, the farmers might have used more than three ICT sources. In contrast, 76.70 percent of the KMAS user farmers reported that they used 2-3 ICT sources; this might be because, although KMAS provides credible information, farmers might have cross-checked the same information with other sources (Panda, 2019; Dechamma, 2020) [6,2].
Information sharing behaviour of ICT tool user farmers
The findings in Table 3 and Fig. 3 revealed that about 84.17 percent of the farmers regularly shared the information with friends, while 70.80 percent of the respondents occasionally shared the information with their family members. The probable reason for sharing information with friends is that friends play an important role.
Categories of agricultural information received by the ICT tool user farmer
The results in Table 4 showed that, out of the four ICT tools, KMAS user respondents reported receiving information on fruit crops (44.17%), flower crops (33.33%), medicinal and aromatic crops (43.33%), organic farming (58.33%) and dairy (88.33%). Out of the seven categories of information, farmers received the maximum information on the aforesaid categories. The probable reason for this is that KVKs are meant to provide pertinent information on the crops grown in the region, covering all categories of crops (Dechamma, 2020) [2]. In one way or another, farmers received information on these categories; hence, the maximum number of information categories was received from the KMAS tool (Pujar, 2021) [10]. In contrast, the majority of e-Krushika app user farmers received information in the fruit crop and commercial crop categories, as the e-Krushika app operated in a hilly region where the majority of farmers were plantation and fruit crop growers.
Conclusion
Information and Communications Technology is a key driver of transformation in the agricultural sector. It enables efficient data collection and analysis, remote sensing, weather forecasting, optimized farming techniques and improved decision-making processes. It is also a key enabler for the increased use of precision agriculture techniques, such as precision seeding, nutrient application and harvesting. ICT can be used to increase yields, reduce risks and improve sustainability. It is fundamental in improving crop production and production efficiency, from soil fertility monitoring to harvest management. Moreover, it helps farmers become more efficient and productive by facilitating timely decisions, enabling better monitoring of the production process, and optimizing the use of resources (Vivek, 2021) [11]. ICT can also help farmers take advantage of automation in the form of farm probes, sensors and drones, which can be used to monitor crop growth, soil nutrients and climate conditions. Using ICT in agriculture will help farmers reduce their dependence on traditional paper-based tools and move to more advanced, automated technology solutions.
Fig 1: Time saving in obtaining agricultural information by the ICT tool user farmers
Fig 2: Different ICT sources of information used by the farmers
Fig 3: Information sharing behaviour of ICT tool user farmers
Table 1: Time saving in obtaining agricultural information by the ICT tool user farmers
Table 2: Different ICT sources of information used by the farmers
Table 3: Information sharing behaviour of ICT tool user farmers
Table 4: Categories of agricultural information received by the ICT tool user farmers
Management of the ‘wicked’ combination of heart failure and chronic kidney disease in the patient with diabetes
Patients with type 2 diabetes are at an increased risk of developing heart failure and chronic kidney disease. The presence of these co‐morbidities substantially increases the risk of morbidity as well as mortality in patients with diabetes. The clinical focus has historically centred around reducing the risk of cardiovascular disease by targeting hyperglycaemia, hyperlipidaemia and hypertension. Nonetheless, patients with type 2 diabetes who have well‐controlled blood glucose, blood pressure and lipid levels may still go on to develop heart failure, kidney disease or both. Major diabetes and cardiovascular societies are now recommending the use of treatments such as sodium‐glucose co‐transporter‐2 inhibitors and non‐steroidal mineralocorticoid receptor antagonists, in addition to currently recommended therapies, to promote cardiorenal protection through alternative pathways as early as possible in individuals with diabetes and cardiorenal manifestations. This review examines the most recent recommendations for managing the risk of cardiorenal progression in patients with type 2 diabetes.
| INTRODUCTION
The increasing prevalence of type 2 diabetes (T2D) is a growing clinical burden worldwide. 1 Heart failure (HF) and chronic kidney disease (CKD) are common complications that often occur in individuals with T2D, and the presence of both HF and CKD significantly increases morbidity and mortality in this group. 2 HF often presents as the first cardiovascular (CV) event in patients with T2D 3 and affects more than 30% of such patients, making it a major cause of mortality in this population. 4 Patients with established T2D have a 33% higher risk of hospitalization for heart failure (hHF) than individuals without diabetes. 5 Patients with HF and prediabetes are also at a greater risk of all-cause mortality and cardiac events compared with those with normoglycaemia. 6 Diabetic kidney disease (DKD) is observed in 40% of patients with T2D, and is the leading cause of end-stage kidney disease (ESKD). 7,8 In 2021, the global prevalence of diabetes was more than 500 million individuals worldwide, and this number is expected to rise to more than 750 million in 2045. 9 It is therefore essential to optimize treatment strategies for managing both HF and CKD to decrease global morbidity and mortality. 10 Management of T2D has notably improved over the last two decades, largely by focusing on the optimization of blood glucose, blood pressure and lipid levels. Despite reaching treatment goals with current strategies, a residual risk for developing HF and CKD remains. 11 Cardiac and renal complications associated with T2D can arise together and co-exist as cardiorenal syndrome (CRS). 12 CRS describes the interplay between the heart and kidney, and classifications of such syndromes arose based on the initial organ damage or insult (heart or kidney), and on whether the disorders were acute or chronic. 12 The CRS categories acknowledge that HF, whether acute or chronic, can lead to kidney damage with a reduced estimated glomerular filtration rate (eGFR). The opposite is also true, in that progressive kidney damage can lead to HF with congestion. 13 The bidirectional interaction of HF and kidney disease can be seen from the placebo groups of major CV outcomes trials such as DAPA-HF, 14 EMPEROR-Reduced 15 and EMPEROR-Preserved, 16 which show the presence of CKD in participants with HF. In addition, patients with CKD typically also present with HF, such as in DAPA-CKD 17-19 and CREDENCE. 20 In these trials, HF was present in participants both with and without T2D. 21 Notably, the major recent clinical trials did not use CRS categories as entry criteria, but did capture onset or progression of CKD. As such, future meta-analyses may add insights into this combined disorder.
The emergence of effective treatments such as sodium-glucose co-transporter-2 (SGLT2) inhibitors and non-steroidal mineralocorticoid receptor antagonists (MRAs) highlights the importance of targeting alternative pathways to improve cardiorenal outcomes in patients with T2D. Major diabetes and cardiology societies 22-25 are updating their guidelines by recommending these targeted approaches. 26 Here, we review the clinical rationale for initiating cardiorenal-protective therapy beyond the traditional risk factor-reduction strategies of hyperglycaemia, hypertension and dyslipidaemia in individuals with T2D and summarize the evidence for its benefit.
| LITERATURE SEARCH STRATEGY
In preparation for this review, we searched the PubMed database for articles published until July 2022. We identified articles on the management of CKD and HF using the terms 'chronic kidney disease' and 'HF'. Articles on clinical studies related to the management of CKD associated with T2D were identified using the terms 'diabetes', 'kidney', 'management', 'treatment options', 'SGLT2 inhibitors' and 'MRAs'. We also reviewed the reference lists of articles identified in these searches for other relevant papers illustrating pathophysiology underlying HF and CKD in T2D.
| THE PATHOPHYSIOLOGY UNDERLYING HF AND CKD IN T2D
The pathogenesis of HF and CKD in patients with T2D can be attributed to the disrupted metabolic pathways associated with hyperglycaemia, hypertension, inflammation and fibrosis. 27 As previously mentioned, a bidirectional interaction between the heart and the kidneys exists (Figure 1). In addition, the sympathetic nervous system (SNS) and the renin-angiotensin-aldosterone system (RAAS) contribute to the stimulation of pathways associated with HF and CKD. 28
Figure 1: Pathophysiology underlying HF and CKD in patients with type 2 diabetes. CKD, chronic kidney disease; HF, heart failure; LV, left ventricle; RAAS, renin-angiotensin-aldosterone system; SNS, sympathetic nervous system.
The aetiology of HF in patients with T2D can be explained through the cardiotoxic triad of diabetic cardiomyopathy, hypertension and coronary artery disease (CAD). 1 The term diabetic cardiomyopathy describes ventricular dysfunction in individuals without hypertension and CAD. 29 The term can also be used to describe myocardium dysfunction that is prevalent in patients with diabetes. 30 The presence of myocardial ischaemia is thought to induce changes in cardiac biochemistry. Impaired cardiac cells and tissues contribute to reduced cardiac function and are linked to abnormalities in electrophysiology. 1,31 Cardiac ischaemia, whether attributable to large or small vessel disease, is responsible for pathophysiological changes in the myocardium. 1,32 This myocardial dysfunction, combined with hypertension, leads to fibrosis and dysregulated systolic function; the process is further aggravated by activation of the renin-angiotensin system (RAS) and the SNS. The result is a loss of cardiac myocytes and development of HF. 1,32 RAS and SNS activation causes disorganized compensatory cellular hypertrophy that is also known as 'cardiac remodelling'. 1,32 A state of dysregulated gene expression lowers both diastolic and systolic ventricular function, and is thought to influence HF progression. 1,33 Reduced ventricular function may arise as a means to lessen the energy expenditure of the dysfunctional/weakened myocardium. 1,33 Macrovascular cardiac impairments, such as myocardial infarction, are common in patients with T2D. 34 Vascular dysfunction may result from oxidative stress, which occurs when there is an imbalance of endogenous oxidants and antioxidants. 35 Oxidative stress may accumulate from various factors. This loss of redox homeostasis in reactive oxygen species (ROS) and reactive nitrogen species amounts to activation of the immune system and a proinflammatory and profibrotic environment. Although physiological levels of ROS are essential for proper cell function, overproduction of these molecules is known to stimulate both cardiac and renal dysfunction. 35-37 It is important to note that, in the kidney and vascular tissues, oxidative stress leads to hypertension, while hypertension also promotes oxidative stress. 37 Together, oxidative stress and inflammation are critical in CKD-related pathologies. 36 Furthermore, inflammation and oxidative stress contribute to the structural and functional diastolic dysfunction observed in HF with preserved ejection fraction (HFpEF). 38 Inflammation promotes fibrotic tissue production, impairing optimal myocyte contraction and resulting in suboptimal cardiac function. 28,39 Fibrosis is a crucial aspect of tissue repair and is regarded as a pathological phenomenon that is prevalent in chronic inflammatory diseases.
40 Overactive fibrosis can lead to the development of HF and CKD ( Figure 1). 41 In endothelial cells, mineralocorticoid receptor (MR) activation leads to higher levels of ROS, resulting in oxidative stress, which is associated with vascular inflammation. 42 Based on evidence from animal models and from studies on primary hyperaldosteronism, aldosterone has been reported to cause left ventricular (LV) remodelling by inducing cardiomyocyte hypertrophy, chronic inflammation and extracellular matrix dysregulation. 43, 44 The underlying mechanism involves: activation of extracellular signal-regulated kinases, c-Jun N-terminal protein kinases and protein kinase c-alpha 44 ; phosphorylation of both light-and heavy-chain myosin; and production of cardiotrophin-1, which can cause cardiomyocyte hypertrophy and increase the expression of myosin light chains. 45 Aldosterone also promotes inflammatory cytokine formation, 46 macrophage activation and macrophage proinflammatory factor production, while also increasing the expression of intercellular adhesion molecules on endothelial cells, which facilitate macrophage attachment to the endothelium. The ensuing fibrosis of the myocardium occurs when degradation of the extracellular matrix by metalloproteinases is exceeded by matrix production. 43 This leads to decreased contractility, non-compliant ventricles, as well as increased myocardial ischaemia. Fibrosis, in addition to LV hypertrophy, results in both systolic and diastolic dysfunction. The damage to the myocardium may also be exacerbated by a high-salt diet. 47 Hyperglycaemia disrupts intraglomerular pressure control and leads to intraglomerular hypertension in the kidneys, which has been shown to activate metabolic pathways that contribute to the accumulation of ROS 48 ; this, in turn, leads to mitochondrial dysfunction, as well as upregulation of pro-oxidant enzymes. 48 Abnormal glucose metabolism and dysregulated intracellular signalling also contribute to inflammation, fibrosis, and endothelial and epithelial injury, resulting in CKD. 27 Evidence suggests that MR overactivation promotes inflammation and fibrosis, as well as influencing the progression of CKD and cardiovascular disease (CVD). 49
| THE IMPORTANCE OF MONITORING CARDIORENAL RISK IN PATIENTS WITH T2D, HF AND CKD
The co-existence of cardiorenal complications in patients with T2D is common; thus, it is important to conduct routine monitoring of T2D patients to assess their risk of developing HF and CKD.
For CKD screening, the American Diabetes Association (ADA) guidelines recommend an annual assessment of urinary albumin levels and eGFR in all patients with T2D, regardless of treatment. 22,50 Guidance from the 2023 ADA Standards of Care for CKD and risk management also advises that patients with established DKD should be monitored multiple times a year to guide therapy. 50 Monitoring serum potassium in patients taking diuretics is important to prevent cardiac arrhythmias caused by hypokalaemia. Individuals receiving angiotensin-converting enzyme (ACE) inhibitors, angiotensin receptor blockers (ARBs) or MRAs should also have their medication dosages adjusted to diminish additional CKD-related risks. 22 The presence of LV hypertrophy is common in patients with T2D and is also a major CVD risk. 51 Somaratne et al. 51 reported that even echocardiograms are insufficient to detect LV hypertrophy, although they are superior to N-terminal pro-B-type natriuretic peptide levels and electrocardiograms, thus highlighting the need for alternative tools to detect LV hypertrophy in patients with T2D. Meanwhile, the prevalence of LV hypertrophy, as measured by echocardiography, in asymptomatic patients with T2D is high, 51 and routine screening for patients with T2D who are asymptomatic for CVD is not recommended at this time, provided atherosclerotic CVD risk factors are treated as per the 2023 ADA guidelines. 52 Hadjkacem et al. 53 reviewed the value of assessing masked arterial hypertension (HTN). Masked HTN (MHTN) is associated with CVD risk, a risk that is similar to that of permanent HTN and is common in patients with T2D. The results revealed that systematic screening for MHTN through 24-hour blood pressure monitoring in patients with T2D provided an insightful indication of CVD risk. This investigation emphasizes the need for screening tools to ensure optimal monitoring of cardiorenal risk to facilitate timely clinical intervention for patients with T2D.
The TOPCAT trial noted that more than one-third of participants with T2D and HFpEF had microvascular complications and a greater number of adverse outcomes than those without microvascular disease. 38, 54 The report from the trial recommends that during routine screening of patients with T2D, physicians should also take note of structural and functional changes of the heart, eyes, kidneys and peripheral nerves to prevent further adverse outcomes. Therefore, routine monitoring of patients with T2D for early renal damage is not only a key component for delaying CKD progression through testing for eGFR and albuminuria, but also contributes to improved CVD risk stratification. 55
| TREATMENTS SHOWN TO IMPROVE HF AND CKD IN PATIENTS WITH T2D
5.1 | The role of SGLT2 inhibitors in the management of HF and for improving renal outcomes
SGLT2 inhibitors are able to improve cardiorenal outcomes through various mechanisms of action, including via reductions in blood pressure, arterial stiffness and endothelial dysfunction. 14 A shift in bioenergetics may also explain the beneficial CVD outcomes from SGLT2 inhibitors. 56 Substituting ketone metabolism for fat consumption and glucose oxidation improves energy efficiency and reduces the workload placed on the myocardium. 56 Preclinical research has shown that the cardioprotective characteristics of dapagliflozin observed in diabetic cardiomyopathy may involve modulation of ion homeostasis as a way to reduce fibrosis and inflammation, and improve systolic function. 57 It is thought that SGLT2 inhibitors decrease inflammation and oxidative stress through activation of the nitric oxide-soluble guanylyl cyclase-protein kinase G pathway, contributing to attenuated diastolic stiffness of the left ventricle in HFpEF. 58 There are numerous renal benefits of SGLT2 inhibition, including the positive effect on glomerular haemodynamics, which leads to long-term preservation of kidney function. SGLT2 inhibitors target mechanistic pathways that reduce intraglomerular pressure and the glomerular filtration rate. 59 Other modes of action that have been proposed are hypoxia reduction and the activation of transcription factors. 59 However, to better understand the mechanisms underlying the cardiorenal-protective properties of SGLT2 inhibitors, further investigation in humans is required.
CV outcomes trials such as DAPA-HF, 14 CANVAS, 60,61 and EMPEROR-Reduced 15 have demonstrated these cardiorenal benefits of SGLT2 inhibition in clinical populations. For patients with T2D, HF and CKD, treatment selection will need to be patient-specific and may depend on the CKD stage of the patient, as well as the presence of co-morbidities. 28 Adverse effects should be considered when administering treatment options. The use of SGLT2 inhibition has been associated with volume contraction because of osmotic diuresis. Although these effects may be mild and infrequent, they need to be monitored, particularly in elderly patients and patients utilizing diuretics. 58 Other clinical risks of SGLT2 inhibitors, such as euglycaemic diabetic ketoacidosis, have been noted, but were not observed in the CREDENCE 20,66 and DAPA-CKD trials. 17,18,58 The 2022 ADA and Kidney Disease: Improving Global Outcomes (KDIGO) consensus statement recommends monitoring blood or urine ketones for managing diabetic ketoacidosis, as well as maintaining low-dose insulin in insulin-requiring patients. 67 SGLT2 inhibitor administration is also associated with an increased risk of hypoglycaemia. 58 This risk increases with higher doses in patients with T2D, CKD and HF who are also taking insulin or sulphonylureas, suggesting that careful dosage adjustments of antidiabetes therapy should be implemented when managing these patients, 58,67,68 to avoid hypoglycaemia. 37
5.2 | The role of MRAs in the reduction of CV and kidney outcomes
Fibrosis and inflammation are caused by overactivation of the MR. 69 Selectivity of MR antagonism varies between MRAs. Although it shows lower selectivity, the first-generation steroidal MRA spironolactone is more potent than the second-generation MRA eplerenone. 69 The non-steroidal MRA finerenone has been shown to inhibit detrimental gene activation independent of aldosterone inhibition, 70-72 whereas steroidal MRAs such as spironolactone and eplerenone show partial agonism on co-factor recruitment. 71 Patients with T2D and HF exhibited clinical improvements following MRA treatment compared with non-MRA therapy, with lower all-cause mortality, CV mortality and hHF. 73 Historically, MRAs have been linked with an increased risk of hyperkalaemia. 22
Abbreviations: ADA, American Diabetes Association; HF, heart failure; HFpEF, heart failure with preserved ejection fraction; HFrEF, heart failure with reduced ejection fraction; SGLT2, sodium-glucose co-transporter-2; T2D, type 2 diabetes.
| UPDATED GUIDELINE RECOMMENDATIONS FOR THE MANAGEMENT OF PATIENTS WITH T2D
Optimal management of T2D involves collaboration between multidisciplinary teams of clinicians, including primary care providers, endocrinologists, cardiologists and nephrologists. 25 The management of patients with T2D has historically focused on controlling risk factors such as elevated blood pressure and optimizing blood glucose and lipid levels to prevent CV disease and reduce the risk of DKD. 7 Management recommendations for patients with T2D with HF and CKD are summarized in Tables 1 and 2.
Guidelines from the American Heart Association 82 and ADA 83 recommend treating patients with diabetes using ACE inhibitors or ARBs, 84 MRAs and SGLT2 inhibitors. 83 The KDIGO guidelines advise reducing the risk of CV complications and CKD progression in patients with T2D by implementing a multifactorial approach. 25 However, despite guideline-recommended treatments such as metformin, many patients with well-controlled blood pressure and blood glucose progress to kidney disease and/or develop CV co-morbidities, 85,86 highlighting the complex nature of cardiorenal protection. The US Food and Drug Administration (FDA) has approved SGLT2 inhibitors not only for their antihyperglycaemic properties in the treatment of T2D, but also for the reduction of CV events. 87
Table 2: Management recommendations for patients with T2D and CKD
The KDIGO guidelines suggest the use of ACE inhibitors or ARBs to reduce blood pressure in patients with T2D and CKD. 25 The ADA guidance recommends these agents as the preferred first-line therapy for blood pressure control in patients with diabetes, hypertension, an eGFR of less than 60 mL/min/1.73 m² and a urine albumin:creatinine ratio of 300 mg/g or higher, because of their ability to prevent CKD progression. 22,50,89-92 However, the recommendations do not support combining these treatments because of the risk of acute kidney injury or hyperkalaemia. 25,50,93,94 The 2022 ADA and KDIGO consensus statement recommends an ACE inhibitor or ARB for patients with T2D with hypertension and albuminuria, titrated to the maximum tolerated dose, 67 and this is supported by the 2023 ADA Standards of Care (Table 1). 50 To slow the progression of CKD in patients with T2D and CKD, the ADA recommends reducing urinary albumin of 300 mg/g or higher by 30% or more for patients with macroalbuminuria. 22,50 However, clinicians should be aware that some patients with T2D may experience ESKD in the absence of albuminuria. 95 Management recommendations for patients with T2D and CKD (Table 2) also include treatment with an SGLT2 inhibitor, with an eGFR of 20 mL/min per 1.73 m² cited as the relevant threshold for its use. 67 The consensus statement is also in favour of administering an SGLT2 inhibitor or glucagon-like peptide-1 receptor agonist to patients with T2D with either established atherosclerotic CVD or kidney disease, as part of a CV risk reduction protocol and glucose-lowering management. 67 Guidance from the 2022 American Association of Clinical Endocrinologists 96 recommends finerenone to reduce CKD progression and CV events in patients with CKD who are at an increased risk of CV events or CKD progression, and who are treated with maximum tolerated doses of an ACE inhibitor or ARB. 96
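To illustrate how the monitoring thresholds cited above could be operationalized in, for example, a registry or audit script, the following is a minimal sketch. It is illustrative only, not clinical guidance or decision software; the threshold values (annual eGFR/urinary albumin assessment, and an eGFR below 60 mL/min/1.73 m² with a urine albumin:creatinine ratio of 300 mg/g or higher as the setting in which ACE inhibitor/ARB therapy is preferred first line) are taken from the recommendations summarized above, while the data structure and function names are hypothetical.

```python
# Minimal illustrative sketch: flags follow the guideline thresholds cited in the
# surrounding text; field and function names are assumptions for this example only.
from dataclasses import dataclass

@dataclass
class T2DPatientRecord:
    egfr_ml_min_1_73m2: float      # estimated glomerular filtration rate
    uacr_mg_g: float               # urine albumin:creatinine ratio
    has_hypertension: bool
    months_since_last_ckd_screen: int

def screening_flags(p: T2DPatientRecord) -> list[str]:
    flags = []
    # ADA guidance cited above: annual assessment of urinary albumin and eGFR.
    if p.months_since_last_ckd_screen >= 12:
        flags.append("CKD screening (eGFR and UACR) due")
    # ACE inhibitor/ARB preferred first line when hypertension is present with
    # eGFR < 60 mL/min/1.73 m2 and UACR >= 300 mg/g, per the ADA guidance cited above.
    if p.has_hypertension and p.egfr_ml_min_1_73m2 < 60 and p.uacr_mg_g >= 300:
        flags.append("Consider ACE inhibitor or ARB as preferred first-line therapy")
    return flags

if __name__ == "__main__":
    example = T2DPatientRecord(egfr_ml_min_1_73m2=48.0, uacr_mg_g=420.0,
                               has_hypertension=True, months_since_last_ckd_screen=14)
    for flag in screening_flags(example):
        print("-", flag)
```

Any such rule set would, of course, need to be validated against the full guideline text and local protocols before use.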
FUNDING INFORMATION
Medical writing support and article processing charges were funded
Clinical significance of hepatocyte growth factor/c-Met expression in the assessment of gastric cancer progression
Among the mechanisms that control cancer progression, cell mobility is a significant factor required for cellular liberation from the primary focus and infiltration. Hepatocyte growth factor (HGF) has been shown to facilitate cell mobility. In the present study, the clinical significance of the HGF/c-Met pathway in the assessment of gastric cancer progression was evaluated. From a cohort of patients with gastric cancer who underwent surgical resection between April 1999 and March 2003, 110 subjects were randomly selected. Preoperative serum HGF levels were measured and various pathological factors were analyzed. Furthermore, 50 subjects were randomly selected from within this group and immunohistochemical staining of tissue preparations for HGF and its receptor c-Met were performed. In the infiltrative growth pattern [(INF)α,β vs. INFγ], advanced progression was associated with elevated preoperative serum HGF levels (P<0.001). No correlation was identified between serum HGF levels and immunostaining for HGF or c-Met in the tissue preparations. Immunostaining revealed a significant correlation between c-Met expression and lymphatic vessel invasion (ly0.1 vs. 2.3; P=0.0416), lymph node metastasis (n0.1 vs. 2; P=0.0184) and maximum tumor diameter (≤50 mm vs. >50 mm; P=0.0469). Furthermore, c-Met-positivity was associated with a significant difference in overall survival (P=0.0342), despite stage I and II cases accounting for 82% of the total cohort (41 of 50 cases). These results suggested that the expression of the HGF/c-Met pathway in gastric cancer may be a potential predictive factor for disease progression.
Introduction
Among the mechanisms that mediate cancer progression, cell mobility is a significant factor necessary for liberation from the primary focus and infiltration. Various cell growth factors (1)(2)(3)(4)(5), including epidermal growth factor, transforming growth factor β (6,7) and hepatocyte growth factor (HGF) (8), are known to facilitate cell mobility.
HGF, which was first isolated and cloned by Nakamura et al (9)(10)(11)(12), performs various biological activities in cells, including stimulation of cell growth, promotion of migration, induction of morphogenesis and anti-apoptotic activities, via the c-Met receptor, which is a transmembrane protein containing a tyrosine kinase domain (13)(14)(15). The involvement of HGF in the infiltration/metastasis of cancer cells was first suggested in 1991, in a study in which the scatter factor, isolated as a fibroblast-derived bioactive factor with cell stimulatory activities in various cultured epithelial and cancer cells, was found to share an identical structure to that of the HGF molecule (16,17). The functions of HGF were further elucidated by in vitro and in vivo analyses using various types of cancer cell (18,19). Activation of the HGF/c-Met pathway leads to simultaneous activation of multiple signal transduction pathways that promote the infiltration of cancer cells and is considered to underlie the potent infiltrative/stimulatory effect of HGF (20)(21)(22)(23)(24)(25). Genetic mutations of the c-Met receptor have been reported in various cancer types, including papillary renal (20)(21), hepatic (22), gastric (23) and pulmonary cancer (24,25), and the overexpression of c-Met has also been reported in numerous cancer tissues (26). Therefore, if the c-Met receptor is present in cancer cells, HGF antagonists should be able to inhibit multiple signal transduction pathways that lead to cancer cell infiltration, thereby exerting potential anti-cancer effects (27).
In a previous study by our group, an association between elevated pre-operative serum HGF levels and advanced disease stages in colon cancer was identified, mainly regarding the depth of tumor invasion into the wall and liver metastasis, which suggested the expression of the HGF/c-Met pathway as a potential predictive factor of colon cancer progression (8). In the present study, serological and immunohistological analyses were conducted in order to evaluate the clinical significance of the expression of the HGF/c-Met pathway in assessing the stage of gastric cancer progression.

Materials and methods

Patients. From a cohort of patients with gastric cancer who underwent surgical resection between April 1999 and March 2003, 110 subjects were randomly selected (Table I). Data obtained from 200 healthy individuals were used as the control. Healthy individuals comprised patients undergoing surgery for benign diseases, including inguinal hernia or hemorrhoid, and healthy volunteers. Classification of infiltrative growth pattern (INF) was performed according to the General Rules for the Gastric Cancer Society, by the Japanese Research Society for Gastric Cancer, which is based on the Union for International Cancer Control criteria (28).
The 50 subjects that were subjected to immunostaining comprised 38 males and 12 females, with a mean age of 61.8±10.6 years (range, 29-81 years). The tissue samples were histologically classified as follows: One as papillary adenocarcinoma, 23 as tubular adenocarcinoma (12 well-differentiated and 11 moderately differentiated), 20 as poorly differentiated adenocarcinoma, five as signet-ring cell carcinoma and one as mucinous adenocarcinoma. The histological classification of invasion depth was as follows: m in 13 patients, sm in 15 patients, mp in five patients, ss in eight patients and se in nine patients. The stage classification was IA in 25 patients, IB in seven patients, II in nine patients, IIIA in four patients, IIIB in two patients and IV in three patients (Table II).
Serological analysis. Serum was obtained by centrifugation of venous blood collected prior to surgery at 1,000-2,000 x g for 10 min, which was stored frozen at -80˚C and thawed at the time of measurement. HGF levels were measured using a two-step sandwich HGF ELISA kit (Otsuka, Tokyo, Japan), which included the antibodies and o-Phenylenediamine substrate solution, according to the manufacturer's instructions. In the first reaction, 50 µl phosphate-buffered saline (PBS; Wako Pure Chemical Industries, Ltd, Osaka, Japan) and 50 µl sample were added to each well of a microtiter plate, which was sealed and incubated at room temperature for 1 h with agitation. Following removal of the reaction mixture, the plate was washed five times with wash buffer (Wako Pure Chemical Industries, Ltd). Subsequently, 100 µl/well rabbit polyclonal anti-HGF primary antibody was added for the second reaction and incubated for 1 h at room temperature. Following aspiration and washing five times, 100 µl/well of the horseradish peroxidase-conjugated goat anti-rabbit immunoglobulin G secondary antibody was added for the third reaction and incubated for 1 h at room temperature. Following aspiration and washing five times, 100 µl/well o-Phenylenediamine substrate solution was added. Following incubation at room temperature for 10 min, the reaction was stopped by adding 100 µl of stop solution. Absorbance was measured at 420 nm using a microplate reader (SpectraMax Plus 384; Molecular Devices, Sunnyvale, CA, USA), and HGF levels were determined using a standard curve.
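To illustrate the final step of the assay, the sketch below shows one way HGF concentrations can be read off a standard curve from the measured absorbances. It is a minimal, hypothetical example: the standard concentrations, the absorbance values and the use of simple piecewise-linear interpolation are assumptions for illustration and are not taken from the study.

```python
import numpy as np

# Hypothetical standard curve: known HGF concentrations (pg/ml) and the
# absorbances measured for them at 420 nm (values are illustrative only).
std_conc = np.array([0.0, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_abs = np.array([0.05, 0.12, 0.21, 0.40, 0.78, 1.52])

def hgf_concentration(sample_abs, std_abs=std_abs, std_conc=std_conc):
    """Interpolate a sample absorbance on the standard curve.

    Assumes absorbance increases monotonically with concentration, which
    allows simple piecewise-linear interpolation between the standards.
    """
    return np.interp(sample_abs, std_abs, std_conc)

# Example: absorbances of three patient sera (hypothetical values).
sample_abs = np.array([0.33, 0.65, 1.10])
print(hgf_concentration(sample_abs))  # estimated HGF levels in pg/ml
```

In practice a four-parameter logistic fit is often preferred for ELISA data; linear interpolation is used here only to keep the example short.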
Immunohistological analysis. HGF: Following deparaffinization with petroleum benzene (Kanto Chemical Co., Inc., Tokyo, Japan) of the 20% formalin-fixed (Wako Pure Chemical Industries, Ltd) paraffin-embedded (Junsei Chemical Co., Ltd, Tokyo, Japan) sections (4 µm), which included the innermost tumor portion of each gastric cancer primary focus, the sections were immersed in PBS and exposed to microwaves at 95˚C for 15 min to activate the antigens. Subsequently, the tissue sections were treated with 3% H 2 O 2 (Sankyo Kagaku Yakuhin Co., Ltd, Kanagawa, Japan) for 20 min to remove the intrinsic peroxidase activity and immunohistochemical staining was performed using the avidin-biotin-peroxidase complex (ABC) method. Following dilution of the reaction with normal horse serum at room temperature for 10 min, rabbit polyclonal anti-human HGF antibody (dilution, 1:20; IBL Co., Ltd, Gunma, Japan) was used as the primary antibody and incubation was continued at room temperature for 60 min. This was followed by reaction with a biotin-conjugated anti-mouse immunoglobulin G secondary antibody (DAKO Japan, Kyoto, Japan) at room temperature for 30 min and reaction with the ABC reagent (DAKO, Glostrup, Denmark) at room temperature for 30 min. The color was developed by addition of 20% 3,3'-diaminobenzidine tetrahydrochloride (Dojindo Laboratories, Kumamoto, Japan), the nuclei were stained with hematoxylin (Merck Millipore KGaA, Darmstadt, Germany) and the sections were dehydrated. c-Met: c-Met was assayed in a similar manner to HGF, except that the antigen was activated by autoclaving at 95˚C for 15 min and a rabbit polyclonal anti-human c-Met primary antibody (dilution, 1:20; IBL Co., Ltd.) was allowed to react at room temperature for 1 h.
Microscopic examination of HGF and c-Met was performed on the tip of the tumor, particularly the innermost section. Three fields of each section were observed at 200x magnification using a BHS/System Living microscope (Olympus Corp., Tokyo, Japan) and the results were classified as positive when the ratio of stained cancer cells was >25%, according to previous studies that were analyzed for comparison ( Fig. 1) (8,(29)(30)(31).
Statistical analysis. JMP version 9.0.2 statistical software (SAS Institute, Inc., Cary, NC, USA) was used for statistical analyses. Values are presented as the mean ± SD. The Mann-Whitney U test was used to compare differences between two independent groups. Cumulative survival rates were also calculated and compared between groups. The terminology used in this report is in accordance with the General Rules of the Gastric Cancer Society by the Japanese Research Society for Gastric Cancer (28).
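As an illustration of the kinds of comparisons described above, the hedged sketch below computes a Mann-Whitney U test for serum HGF levels and a survival comparison between c-Met-positive and c-Met-negative groups. It uses SciPy and the third-party lifelines package on entirely hypothetical data; it is not the analysis pipeline used in the study, which was run in JMP.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical serum HGF levels (pg/ml) for patients and healthy controls.
hgf_patients = rng.normal(390, 70, size=110)
hgf_controls = rng.normal(190, 50, size=200)
u_stat, p_value = mannwhitneyu(hgf_patients, hgf_controls, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_value:.4g}")

# Hypothetical follow-up times (months) and event indicators (1 = death)
# for c-Met-positive and c-Met-negative cases.
t_pos, e_pos = rng.exponential(40, 25), rng.integers(0, 2, 25)
t_neg, e_neg = rng.exponential(70, 25), rng.integers(0, 2, 25)

km = KaplanMeierFitter()
km.fit(t_pos, event_observed=e_pos, label="c-Met positive")
print(km.median_survival_time_)

result = logrank_test(t_pos, t_neg, event_observed_A=e_pos, event_observed_B=e_neg)
print(f"Log-rank P = {result.p_value:.4g}")
```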
Results
Serological analysis of HGF. Significant differences were detected in preoperative HGF levels between the gastric cancer and control groups (391.0±68.4 vs. 193.3±52.0 pg/ml, respectively; P<0.0001). There was no correlation between preoperative serum HGF levels and patient age or gender. The results of analyses to identify correlations between serum HGF levels and clinicopathological factors are shown in Table I. Advanced progression of the infiltrative growth pattern (INFα/β vs. INFγ) was correlated with elevated preoperative serum HGF levels (P<0.001). Although there was no significant difference in tumor diameter, invasion depth or lymphatic vessel invasion (ly), preoperative serum HGF levels increased as the disease progressed. In patients with peritoneal dissemination, serum HGF levels were frequently increased.

Immunohistological analysis of HGF and c-Met. No correlation was identified between immunostaining for HGF and the clinicopathological factors examined, whereas c-Met expression was significantly correlated with lymphatic vessel invasion, lymph node metastasis and maximum tumor diameter (Table II). The overall survival (OS) was significantly lower in c-Met-positive cases than that in c-Met-negative cases (P=0.0342; Fig. 2 and Table III).
Discussion
Cell growth factors, including HGF, constitute a significant group of molecules that regulate cell proliferation, migration and apoptosis in the dynamic organization of cell populations during embryogenesis, organogenesis and regeneration. Numerous factors amongst these additionally promote cell migration. It has been previously reported that HGF has the most potent effect on the promotion of cancer cell infiltration, the cell migration associated with the degradation of extracellular matrix components, including the basement membrane and collagen (9)(10)(11)(12)(13)(14)(15)(16). Therefore, activation of the HGF/c-Met pathway results in the simultaneous activation of multiple signal transduction pathways that promote cancer cell infiltration. Antagonists of the HGF/c-Met pathway represent potential anti-cancer agents to inhibit cancer infiltration and metastasis, and therefore, the development of such antagonists is currently underway (27).
In the present study, serological and immunohistological analyses of the expression of the HGF/c-Met pathway in gastric cancer were performed in order to establish its clinical significance in the assessment of disease progression. To the best of our knowledge, no previous studies analyzing serum HGF levels and immunostaining for HGF and c-Met simultaneously with pathological factors were available in the literature.
Although elevated serum HGF levels in patients with gastric cancer had been previously reported (32)(33)(34)(35), the present study aimed to determine whether this factor may be used in the assessment of disease progression. The results indicated that pre-operative serum HGF levels were significantly higher in patients with gastric cancer than those in the control group (P<0.0001), and that high HGF levels above the cut-off value (297.3 pg/ml; mean in the control+2 SD) were observed in 93.75% of patients, similar to that reported previously. However, the correlation between HGF levels and disease stage previously reported by Wu et al (32) and Han et al (33) was not observed in the present study, the results of which were similar to those reported by Taniguchi et al (34).
Conversely, advanced progression in the infiltrating growth pattern (INFα/β vs. INFγ) was significantly correlated with high preoperative serum HGF levels (P<0.001). Although this effect may be associated with the involvement of HGF in the infiltrating growth of cancer cells, this factor could not be evaluated because, to the best of our knowledge, no other study on infiltrating growth patterns was available in the literature. HGF levels were not significantly correlated with certain parameters, including tumor diameter, invasion depth and ly factors; however, preoperative serum HGF levels were elevated as the disease progressed. Regarding the association between HGF levels and invasion depth (pT factor), Niki et al (35) identified a significant difference between pT1 and pT2-4 tumors.
Although a significant difference in HGF levels was not detected in patients with peritoneal dissemination, there was a tendency towards high HGF levels among these patients.
Subjects for the present study were selected randomly; therefore no patient with liver metastasis was included. Niki et al (35) reported a significant elevation in serum HGF levels in patients diagnosed with liver metastasis, whereas Taniguchi et al (34) reported that there was no significant difference in serum HGF levels in patients with relapse independent of liver metastasis. Therefore, the preoperative serum HGF levels in patients with gastric cancer represent a potential predictive factor for disease progression, as observed in colon cancer (6).
In the present study, no correlation was identified between serum HGF levels and immunostaining for HGF or c-Met in tissue preparations; this was potentially due to the complex paracrine and autocrine mechanisms of HGF in cancer cells (36,37). Therefore, the significance of HGF expression in the microenvironment surrounding tumors requires further investigation.
Although there was no correlation between pathological factors and immunostaining for HGF, a significant correlation was identified between c-Met, which is a receptor of HGF, and lymphatic vessel invasion (ly0.1 vs. 2.3, P=0.0416), lymph node metastasis (n0.1 vs. 2, P=0.0184) and maximum tumor diameter (≤50 mm vs. >50 mm, P=0.0469). Correlations between immunostaining for c-Met and various pathological factors, particularly invasion depth and disease stage, have been reported in previous studies (38)(39)(40)(41)(42)(43)(44)(45)(46). In the present study, cases were selected randomly for immunostaining analysis, as for serological analysis. It was demonstrated that 41 (82%) of the 50 cases analyzed were stage I or II, and 28 (56%) had an invasion depth of m or sm, indicating that the majority of the cohort comprised relatively early stage cancer cases. Only three (6%) cases, which were positive for peritoneal dissemination, were stage IV. These results likely explain the absence of statistically significant differences between immunostaining and invasion depth or disease stage.
However, in the present study, which included numerous relatively early cancer cases, the OS of c-Met immunostaining-positive cases was significantly lower than that of negative cases (P= 0.0342), indicating that c-Met positivity may be a prognostic factor for gastric cancer.
In chemotherapy for unresectable recurrent gastric cancer, the efficacy of trastuzumab was demonstrated in HER2-positive cases, which subsequently led to the use of personalized drug treatments with molecularly targeted drugs (47). Rilotumumab, which is a fully human monoclonal antibody against HGF and a ligand of the c-Met receptor, suppresses c-Met downstream signaling (47). In pre-clinical models, rilotumumab was shown to inhibit tumor progression in a HGF/c-Met-dependent manner, and its tolerability was verified in early clinical trials (48,49). If future phase II/III trials are implemented under clinical trial designs that allow sufficient verification of the potential of c-Met expression as a biomarker to aid the identification of cases in which rilotumumab is effective, a field of c-Met-positive gastric cancer may be established, similarly to that of HER2-positive gastric cancer. Therefore, further basic studies regarding c-Met expression are required, particularly to improve quality control in immunostaining.
In conclusion, the results of the present study revealed that elevated pre-operative serum HGF levels were indicative of invasive growth of tumor foci, categorized as INFγ, and characterized by high-grade tumors with an unclear border between the tumor and the surrounding tissue. c-Met-positive immunostaining indicated a tumor with a large diameter, advanced lymphatic vessel invasion and a high degree of lymph node metastasis, and may therefore be a factor indicating poor prognosis. Based on the results described above, the expression of the HGF/c-Met pathway in gastric cancer is a potential predictive factor for disease progression, as previously established for colon cancer.
Foundation of classical dynamical density functional theory: uniqueness of time-dependent density-potential mappings
When can we uniquely map the dynamic evolution of a classical density to a time-dependent potential? In equilibrium, without time dependence, the one-body density uniquely specifies the external potential that is applied to the system. This mapping from a density to the potential is the cornerstone of classical density functional theory (DFT). Here, we derive rigorous and explicit conditions for such a unique mapping between a nonequilibrium density profile and a time-dependent external potential. We thus prove the underlying assertion of dynamical density functional theory (DDFT) - with or without the so-called adiabatic approximation often used in applications. We also illustrate loopholes when our conditions are violated so that two distinct external potentials result in the same density profiles but different currents, as suggested by the framework of power functional theory (PFT).
Introduction
The foundation of classical density functional theory (DFT) [21,9,10] rests on the fact that the one-body density uniquely determines the external potential and hence the underlying Hamiltonian if the interaction potential is known. In essentially all relevant cases, there exists a unique mapping from the one-body density ρ(x) to an external potential V (x) for x ∈ R d in d dimensions and for a given interaction potential, temperature, and number of particles (or chemical potential). Remarkably, because of this unique mapping, the one-body density specifies a many-body system in equilibrium and hence all higher-body correlations. The existence of such a unique density-potential mapping was first proven in the context of quantum mechanics by Hohenberg and Kohn [15], Kohn and Sham [16], and Mermin [21]. Mermin's generalized arguments can be directly applied to classical many-body systems as elaborated by Evans [9] and later rigorously confirmed by Chayes, Chayes, and Lieb [3]. The unique mapping exists under mild and natural conditions on the density and interparticle interactions that essentially assume finite energies. Among others, this result implies a formal equivalence of Mermin-Evans DFT to the alternative framework [7] based on Levy constrained search [17] (which does not a priori restrict to density profiles that are realizable by an external potential).
Here, we are interested in the generalization of unique density-potential mappings to the time-dependent case, i.e., to classical dynamical density functional theory (DDFT) [20,1,29], as first derived by Marconi and Tarazona [20] from the stochastic Langevin equation and later by Archer and Evans [1] from the corresponding Smoluchowski equation or by Español and Löwen using the projection operator formalism [8]. More specifically, we study the fundamental relation between the time-dependent external potential V (x, t) and one-body density ρ(x, t), which will naturally also involve the one-body current j(x, t).
To this end, we use the exact nonequilibrium interaction force, i.e., we do not rely on an "adiabatic" approximation that equates equilibrium and nonequilibrium correlations (which is usually required for explicit calculations in DDFT). Therefore, our results also pertain to the recently developed superadiabatic extension of DDFT [30,31] as well as to the framework of power functional theory (PFT) [27] derived by Schmidt and Brader [28], where both approaches incorporate "superadiabatic" forces that are neglected in standard (adiabatic) DDFT approximations. The underlying variational principle of PFT, based on Levy constrained search [17], entails the existence of a unique mapping from both ρ(x, t) and j(x, t) to V (x, t). Since our pursuit of unique density-potential mappings neither requires an approximation nor a specific framework, our results shed light on the relation between DDFT and PFT on a formal level and help, in particular, to better understand the role of the current.
We explicitly address the question: under which conditions can we uniquely map a classical time-dependent density ρ(x, t) to an external potential V (x, t)? As in the case of equilibrium DFT, if such a unique mapping is established, we can assert that the density profile ρ(x, t) specifies the Hamiltonian and hence all relevant information about the system, including higher-order correlations. Hence, this question is of fundamental importance and practical relevance to the study of time-dependent many-body systems.
In quantum mechanics, an argument for the unique mapping from time-dependent densities ρ(x, t) to potentials V (x, t) was provided by Runge and Gross in 1984 [26], which became the foundation of time-dependent density functional theory (TDDFT). Assuming time-analytic potentials and smooth densities, they linked the question for uniqueness of the density-potential mapping to that of the solution for an elliptic partial differential equation (PDE). However, as pointed out later [36,6,12,13], this solution is unique only under certain conditions on both ρ(x, t) and V (x, t). These joint assumptions on the density and potential are more complex than in equilibrium, where the conditions only depend on ρ(x) [3]. Intuitively speaking, these more intricate assumptions arise in the time-dependent case because more states are allowed than in equilibrium.
For classical systems, Chan and Finken [2] asserted uniqueness following the idea of Runge and Gross [26]. However, because higher-body correlations due to interparticle interactions were omitted, the argument so far holds only under the adiabatic approximation. Moreover, no conditions have hitherto been stated for the uniqueness of classical density-potential mappings. In fact, this omission is more critical in the classical setting than it would be in the quantum case since for the latter, counterexamples to unique mappings are considered to be "largely unphysical" [13] and are hence often neglected. By contrast, diverging potentials are not only relevant but even common in classical statistical physics.
In this work, we close these two gaps by proving explicit conditions for the uniqueness of classical density-potential mappings based on an exact hierarchy for the n-body densities. Importantly, our conditions are independent of the adiabatic approximation. Thus, we provide a mathematically rigorous foundation of classical DDFT. At the same time, our conditions exemplify loopholes, where uniqueness cannot be assumed so that a more general framework, like PFT, is required that relies on both the density and current.
To this end, we first specify our setting in Sec. 2 and derive the hierarchy of reduced Smoluchowski equations for all time-dependent n-body densities ρ n (x 1 , . . . , x n , t) in Sec. 3; see Theorem 3.1. Since we are concerned with possibly diverging potentials, we accurately derive the boundary contributions and find that all corresponding terms vanish if and only if the Yvon-Born-Green (YBG)-hierarchy holds on average at the boundary.
Then, we prove our main results in Sec. 4, i.e., we rigorously derive generic conditions that guarantee a unique mapping from the time-dependent one-body density ρ(x, t) to the external potential V (x, t). As a mere technicality, we begin by noting that uniqueness can only hold up to physically irrelevant differences like a constant offset. We capture these subtleties by the definition of diffusion-equivalent potentials; see Definition 4.1.
Similar to the idea of Runge and Gross (or Chan and Finken) [26,2], we assume analytic potentials and can thus reduce the uniqueness of the mapping to the uniqueness of a solution to a (semi-)elliptic PDE. In contradistinction to the available proofs in quantum mechanics [26,12,13,24], we explicitly have to take the hierarchy of reduced Smoluchowski equations into account. By doing so, our proof requires no approximation of n-point correlations. Hence, the fundamental question of uniqueness does in no way depend on the adiabatic approximation.
Moreover, our rephrasing of the problem allows us to obtain a physically intuitive condition for uniqueness. Theorem 4.4 asserts that if the density does not vanish at the boundary, then a unique solution can be guaranteed for no-flux boundary conditions or, in fact, any specified flux in or out of the system. Even more generally, we prove that uniqueness holds for a suitable asymptotic behavior of ρ(x, t) and V (x, t); see Theorem 4.6.
In Sec. 5, we demonstrate that such a simultaneous condition on the density and potential is inevitable. More specifically, we present explicit counterexamples to uniqueness where for two different external potentials, the same ρ(x, t) is attained at all times. Obviously, these examples violate the conditions of our theorems. For an exponentially fast decaying density profile, a non-unique external potential must necessarily include an exponential divergence (in space). In contrast, if the density profile has heavy tails, already a polynomial divergence of V (x, t) can lead to non-unique mappings. Hence, the conditions on the asymptotic behavior have to depend on both ρ(x, t) and V (x, t).
To conclude the discussion of counterexamples in Sec. 5, we embed our findings in the framework of PFT. A unique mapping to an external potential implies a unique current j(x, t). In contrast, if a suitable external potential V (x, t) that violates our conditions is added, it causes a divergence-free current j (x, t) that does not change the density ρ(x, t). Such counterexamples have been simulated via a numerical procedure known as custom flow [4,5] that determines, in line with PFT, the unique external force field as a functional of ρ(x, t) and j(x, t). The hierarchy of Smoluchowski equations from Theorem 3.1 emphasizes the necessity of this approach for interacting systems. For the ideal gas (or under the adiabatic approximation), our analytic formula 5.2 can be applied to systematically construct counterexamples for effectively one-dimensional systems.
Finally, Section 6 provides an outlook. We discuss some open questions and possible generalizations.
Densities and Smoluchowski operators
We here consider an overdamped many-body system with a fixed number of particles N > 0 in an open domain Ω ⊆ R d , d ≥ 1. The interaction between the particles is given by a pair potential U (x, y) with x, y ∈ Ω. As usual, the pair potential is symmetric and only depends on the relative distance, i.e., U (x − y) := U (x − y, 0) = U (x, y). The inverse temperature β and diffusion constant D are fixed.
As a side-remark, in a slight abuse of notation that is common in physics, we denote a function together with its arguments, e.g., U (x, y) may represent the potential itself or the function evaluated at positions x, y. The meaning should always be clear from the context.
Since our motivation is applications in classical physics, we restrict our analysis to smooth functions (rather than aiming for the greatest possible generality). More precisely, we assume throughout the paper that all density profiles and potentials are twice continuously differentiable in space, i.e., on Ω, and continuously differentiable in time, i.e., for t ≥ 0 (where we assume differentiability from the right-hand side for t = 0). We characterize our system by its symmetric N-body probability density P_N(x^N, t), where x^N is a shorthand notation for a collection of positions x_1, . . . , x_N ∈ Ω. Note that P_N(x^N, t) is a simple function of time t ∈ R but a density in the spatial coordinates x^N. More precisely, it is the density of an intensity measure (which assigns to each Borel set the number of particles inside). Hence, the total mass of the measure is constant in time, ∫ P_N(x^N, t) dx^N = const. Here and in the following, each unspecified integral is over the full domain.
We obtain the (reduced) n-body densities ρ_n(x^n, t) with n ≤ N from the symmetric N-body probability density P_N(x^N, t) by applying the n-body density operator,

ρ_n(x^n, t) = ∫ dx'_1 · · · dx'_N P_N(x'^N, t) Σ_{i_1 ≠ · · · ≠ i_n} δ(x_1 − x'_{i_1}) · · · δ(x_n − x'_{i_n}),   (2.1)

where δ denotes a Dirac delta distribution and the sum runs over distinct indices i_1, . . . , i_n ∈ {1, . . . , N}. The evolution of P_N(x^N, t) under an external potential V(x, t) for time 0 ≤ t < ∞ obeys the following N-body Smoluchowski equation,

∂_t P_N(x^N, t) = D Σ_{i=1}^{N} ∇_{x_i} · [ ∇_{x_i} P_N(x^N, t) − β F_i(x^N, t) P_N(x^N, t) ],   (2.2)

where the force F_i(x^N, t) on particle i ∈ {1, 2, . . . , N} is defined as

F_i(x^N, t) = −∇_{x_i} V(x_i, t) − ∇_{x_i} Σ_{j=1, j≠i}^{N} U(x_i − x_j).   (2.3)

The same definition of F_i(x^n, t) holds for any number of particles, say n < N. Let us also point out here that the index i always refers to the ith argument, e.g., F_{n+1}(x^n, y, t) = −∇_y V(y, t) − ∇_y Σ_{j=1}^{n} U(y − x_j). In shorthand notation, we combine all forces into a single vector F(x^N, t) ∈ R^{dN} (and analogously define the gradient ∇_{x^N}). We, moreover, define the N-body current field

J(x^N, t) = −D [ ∇_{x^N} P_N(x^N, t) − β F(x^N, t) P_N(x^N, t) ],

which obeys the continuity equation ∂_t P_N(x^N, t) = −∇_{x^N} · J(x^N, t), which is then equivalent to the Smoluchowski equation. By defining the Smoluchowski operator

Ô_N := D Σ_{i=1}^{N} ∇_{x_i} · ( ∇_{x_i} − β F_i(x^N, t) ),

we can write the Smoluchowski equation (2.2) more succinctly as

∂_t P_N(x^N, t) = Ô_N P_N(x^N, t).   (2.4)

For our derivation of a reduced Smoluchowski equation for ρ_n(x^n, t) in the next section, it is useful to define partial Smoluchowski operators Ô_n and Ô^±_{n,N} that act only on subsets of the particle coordinates. The behavior of the system is determined by an initial value boundary problem that, in our case, is defined by the Smoluchowski equation (2.4), the initial condition P_N^{(0)}(x^N) := P_N(x^N, 0) at time t = 0 (for all spatial coordinates), and a boundary condition on ∂Ω (for all times). A quite general condition is defined by an oblique derivative boundary problem with variable coefficients; see [11, Section 6.7]. Among others, such a choice allows for a no-flux boundary condition in the physical sense, i.e., the vanishing of the normal component of the current field. The oblique derivative boundary condition also includes the classical Dirichlet or Neumann boundary conditions as special cases.
We say that a solution P_N(x^N, t) is well behaved if it has the following properties for all 0 ≤ t < ∞, x^N ∈ Ω^N, and n < N: (i) as a function of spatial coordinates x^N, P_N(x^N, t) is twice continuously differentiable on Ω; and as a function of time t, P_N(x^N, t) is continuously differentiable for t ≥ 0; (ii) moreover, ∂_t P_N, Ô_N P_N, Ô_n P_N, and Ô^±_{n,N} P_N are Lebesgue integrable; (iii) finally, the average n-body interaction force on particle i ∈ {1, . . . , n} exists and is continuously differentiable on Ω and for t ≥ 0,

E_i(x^n, t) := − (1 / ρ_n(x^n, t)) ∫ dy ∇_{x_i} U(x_i − y) ρ_{n+1}(x^n, y, t).   (2.5)

For convenience, we also define E_i(x^n, t) ≡ 0 for n ≥ N. The index is analogously defined to that of the force F_i(x^n, t).
In the following, we always assume the existence of a well-behaved solution. Even though a proof of existence is beyond the scope of this paper, we briefly discuss conditions that are to be expected and a strategy in the outlook. Moreover, we formally assume that Ω is bounded so that it has a well-defined (and sufficiently smooth) boundary ∂Ω. Nevertheless, our results immediately apply to unbounded Ω whenever the integrals converge appropriately.
Hierarchy of reduced Smoluchowski equations
The physics literature usually neglects all boundary terms in the derivation of a reduced Smoluchowski equation [18]. These boundary terms are essential, however, to derive necessary conditions for a unique density-potential mapping (since non-unique counterexamples involve diverging external potentials).
We, therefore, first derive a reduced Smoluchowski equation paying special attention to the boundary terms. Moreover, since we do not rely on the adiabatic approximation but instead consider the exact dependencies between n-body densities, we derive a complete set of reduced Smoluchowski equations for all orders.
Theorem 3.1. The reduced n-body density ρ_n(x^n, t) with 1 ≤ n < N obeys the following reduced Smoluchowski equation for a bounded domain Ω with a piecewise smooth boundary ∂Ω:

Proof. Under our assumptions, we can apply the n-body density operator to the N-body Smoluchowski differential equation (2.4). First, we use To simplify the first term on the right-hand side, we note that Using the average n-body interaction force E_i from (2.5), we can further simplify the remaining integral: Inserting this result in (3.3) and finally in (3.2), we have where we define the boundary term by The last equality holds by the divergence theorem. To prove 3.1, it remains to show that This assertion follows from the fact that where the last expression is, by definition, equal to E_{n+1}(x^n, y, t).

From now on, we only consider the case of vanishing boundary terms in the reduced Smoluchowski equations. Thus, we recover the well-known (reduced) Smoluchowski equation for the one-body density,

∂_t ρ(x, t) = D ∇_x · [ ∇_x ρ(x, t) + β ρ(x, t) ∇_x V(x, t) − β ρ(x, t) E_1(x, t) ],   (3.4)

where here and in the following we use ρ(x, t) := ρ_1(x, t). More generally, for the n-body densities, we obtain

∂_t ρ_n(x^n, t) = D Σ_{i=1}^{n} ∇_{x_i} · [ ∇_{x_i} ρ_n(x^n, t) − β ρ_n(x^n, t) F_i(x^n, t) − β ρ_n(x^n, t) E_i(x^n, t) ].   (3.5)

That the boundary terms vanish must, of course, be confirmed for each example. A violation of this condition can lead to spurious counterexamples to our uniqueness theorems 4.4 and 4.6 (as discussed below). Based on (3.4), we define the one-body current j(x, t) as

j(x, t) := −D [ ∇_x ρ(x, t) + β ρ(x, t) ∇_x V(x, t) − β ρ(x, t) E_1(x, t) ],   (3.6)

so that it obeys the following continuity equation,

∂_t ρ(x, t) + ∇_x · j(x, t) = 0.

The definition is equivalent to the ensemble average of a current operator [27].
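To make the structure of the one-body equation concrete, here is a small numerical sketch. It integrates the ideal-gas version of (3.4) (the interaction term E_1 is dropped, so this is the non-interacting limit rather than the exact equation) on a one-dimensional grid with no-flux boundaries; the grid, the harmonic potential and all parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper).
D, beta = 1.0, 1.0
L, M = 10.0, 200                 # box length and number of grid cells
dx = L / M
x = (np.arange(M) + 0.5) * dx    # cell centres
dt = 0.2 * dx**2 / D             # explicit Euler step, stable for diffusion

V = 0.5 * (x - L / 2)**2         # hypothetical harmonic external potential
rho = np.full(M, 1.0 / L)        # initially uniform density, normalised to 1

def step(rho, V):
    """One explicit Euler step of d_t rho = D d_x (d_x rho + beta * rho * d_x V).

    Fluxes are evaluated on cell faces; the outermost faces carry zero flux,
    i.e. no-flux boundary conditions.
    """
    drho = np.diff(rho) / dx                 # d_x rho on interior faces
    dV = np.diff(V) / dx                     # d_x V on interior faces
    rho_face = 0.5 * (rho[1:] + rho[:-1])    # density interpolated to faces
    j = -D * (drho + beta * rho_face * dV)   # one-body current on faces
    j = np.concatenate(([0.0], j, [0.0]))    # zero flux through the walls
    return rho - dt * np.diff(j) / dx        # continuity-equation update

for _ in range(5000):
    rho = step(rho, V)

print("mass conserved:", np.isclose(rho.sum() * dx, 1.0))
```

With no-flux boundaries the total mass is conserved exactly by construction, and the density relaxes towards the Boltzmann profile proportional to exp(−βV).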
Uniqueness theorems
We now turn to the central question of this paper. Given an initial condition P_N^{(0)}(x^N) and fixed boundary conditions, under which conditions does the time-dependent density ρ(x, t) uniquely determine the external potential V(x, t)? There are two obvious limitations to uniqueness. First, the mapping can only be unique for x ∈ Ω where and when ρ(x, t) > 0. Variations in V(x, t) outside the support of ρ(x, t), i.e., in the complement of the set supp(ρ) := {(x, t) ∈ Ω × R^+_0 : ρ(x, t) > 0}, do not change the time evolution of the system as determined by the Smoluchowski equation. Secondly, adding a time-dependent constant to the potential does not change the time evolution either. In fact, for disjoint subsets of the support, we can add different constants to each subset.
We can combine both limitations in a single statement; a difference of the potentials that is constant on the support of ρ(x, t) has no effect on the density. Note that, in general, the potentials can differ by more than just an offset. For a similar restriction of the uniqueness in equilibrium DFT for the canonical ensemble, see Example 9.1 in [3].

Definition 4.1 (Diffusion-equivalent potentials). Two external potentials V(x, t) and V′(x, t) are called diffusion equivalent, written V ∼ V′, if at every time t ≥ 0 their difference V(x, t) − V′(x, t) is constant on each connected subset of {x ∈ Ω : ρ(x, t) > 0} (and arbitrary outside the support of the density).

This definition allows us to formulate our strategy of proof more specifically. In the following, we consider two systems with densities ρ(x, t) and ρ′(x, t) and with external potentials V(x, t) and V′(x, t). Both systems start from the same initial condition P_N^{(0)}(x^N), and the same boundary conditions are applied. Our aim is to derive conditions for which an equivalence of ρ(x, t) and ρ′(x, t) implies diffusion equivalence of V(x, t) and V′(x, t), or equivalently that the difference is diffusion equivalent to a function that is constant zero. For convenience, we will actually show the contrapositive. If the two potentials are not diffusion equivalent, then the densities must differ, and thus ρ(x, t) uniquely determines V(x, t). An essential step in the proof is to reduce the uniqueness of the mapping to the uniqueness of a solution to a (semi-)elliptic PDE. As discussed in the introduction, this approach is, in parts, similar to the argument by Runge and Gross (or Chan and Finken) [26,2], but it differs in that we have to take the hierarchy of reduced Smoluchowski equations from Theorem 3.1 into account. We, of course, pay close attention to a rigorous treatment of the boundary terms. Additionally, we rearrange the argument to obtain generic boundary conditions. Thus, we prove that a no-flux boundary condition always implies uniqueness (if the density does not vanish).
The main advantage of our strategy is a physically intuitive proof that helps to clarify the essential physical questions. This intuition comes at the price of the following three additional assumptions that could possibly be avoided by alternative methods, like a fix-point scheme that has already been employed in the quantum case [25,24].
The first assumption (A1) for our proof is that the external potentials V (x, t) and V (x, t) are real analytic in time for t ≥ 0. By including the start time t = 0, we assume that the potentials are right differentiable and that the corresponding Taylor series at the origin converges in a neighborhood. Hence, the derivatives at the origin uniquely specify the potential at all times (according to the identity theorem for analytic functions).
Our second assumption (A2) is that the n-body densities ρ n (x n , t) and ρ n (x n , t) for all n = 1, 2, . . . N are infinitely often differentiable from the right at t = 0. Note that we do not require them to be time analytic.
Thirdly, we can only derive explicit conditions for uniqueness if the support of ρ(x, t) does not change with time, which is essentially equivalent to redefining the domain Ω. Hence, our third assumption (A3) is that ρ(x, t) > 0 for all x ∈ Ω and t ≥ 0. Without loss of generality, we also assume that Ω is connected.
Taking advantage of our analytic potentials, we will consider the time derivatives of their difference; see (4.1). Hence, we define for k ∈ N_0:

d_k(x) := ∂_t^k [ V(x, t) − V′(x, t) ] |_{t=0}.   (4.1)

Let V(x, t) and V′(x, t) be not diffusion equivalent; then there exists a smallest non-negative integer, say l, for which ∇_x d_l(x) ≢ 0. The proof of our theorems rests on the following lemma. It allows an exact treatment of the average n-body interaction forces for all orders of n (via the hierarchy of reduced Smoluchowski equations).
Lemma 4.2.
Given two many-body systems with identical initial and boundary conditions that satisfy assumptions (A1)-(A3) as described above. Let V (x, t) and V (x, t) be not diffusion equivalent and let l ∈ N 0 be the smallest integer for which ∇ x d l (x) ≡ 0. Then and Proof. We first prove (4.2) by an induction-like argument. This equation obviously holds for k = 0 by (2.1) because both many-body systems start from the same initial condition P (0) N (x N ). In the case l > 0, assume that (4.2) holds for all k = 0, 1, . . . m for some m < l. We need to show that it is also true for m + 1. Therefore, we subtract the reduced Smoluchowski equations (3.5) for the n-body densities of the two many-body systems, take m additional time derivatives and evaluate the derivatives at t = 0: The first, third, and last term on the right-hand side vanish directly since (4.2) holds for k = m by our induction argument. For the remaining derivative in the second term, we have where the last equality holds again because of our induction argument, i.e., we apply (4.2) for k ≤ m. Now, since ∇ x i d k (x i ) ≡ 0 for all k ≤ m < l, assertion (4.2) follows for all k = 0, 1, . . . l.
To prove (4.3), we subtract the reduced Smoluchowski equations (3.4) for the one-body densities of the two many-body systems, take l additional time derivatives and evaluate the result at t = 0: The first and last term on the right-hand side vanish by (4.2). Thus, we have where the last equality holds again by (4.2). Since ∇ x d l−k (x) ≡ 0 for all 0 < k ≤ l, we have proven (4.3) which concludes the proof.
The n-body densities ρ n (x n , t) are highly relevant for the correct dynamic evolution of the one-body density ρ(x, t). The preceding lemma provides control over these contributions in our proof. In fact, they no longer appear explicitly.
To prepare our first main theorem that guarantees uniqueness under suitable boundary conditions, we define the normal flux j_⊥(x, t) at the boundary via an extension of the current j(x, t) from (3.6) to ∂Ω,

j_⊥(x, t) := j(x, t) · n(x)  for x ∈ ∂Ω,

where n(x) denotes the outward unit normal on ∂Ω. As before, the product of vectors is consistently interpreted as a scalar product. We can utilize (in our proof of uniqueness) this common choice for a physical boundary condition via the following lemma.
where l again denotes the smallest integer for which ∇ x d l (x) ≡ 0.
Proof. Since the normal flux is equivalent for the two systems, subtracting (4.6) yields for all x ∈ ∂Ω. By applying l subsequent time derivatives and (4.2), we get and so the assertion follows by the same argument as in (4.5).
Next, we state our first main Theorem 4.4 that holds for a quite general and physically intuitive boundary condition that (i) specifies the flux in and out of the system and that (ii) requires a nonvanishing density at the boundary. The second condition is necessary because our first condition on the flux is quite generic. Below, we will drop (ii) at the expense of (i), i.e., a more precise specification of the behavior of ρ(x, t) and d(x, t) for x → ∂Ω allows for a more general Theorem 4.6.
For now, we require that ρ(x, t) does not vanish at ∂Ω, i.e., ρ(x, t) is allowed to diverge at the boundary but if a smooth extension of ρ(x, t) to ∂Ω exists, then it must be positive.
Theorem 4.4. For a many-body system satisfying (A1)-(A3) and with a given normal flux j ⊥ (x, t) at the boundary ∂Ω, the external potential V (x, t) is uniquely determined (up to diffusion equivalence) by ρ(x, t) if the initial density does not vanish at the boundary.
As a special case, no-flux boundary conditions imply a unique density-potential mapping if the density is strictly positive at the wall.
Proof. Consider two many-body systems as described above with identical initial and boundary conditions and the same normal flux j_⊥(x, t). Our aim is to prove that the density-potential mapping is unique, i.e., ρ(x, t) ≡ ρ′(x, t) implies V(x, t) ∼ V′(x, t). We do so by showing the contrapositive, i.e., if V(x, t) and V′(x, t) are not diffusion equivalent, then ρ(x, t) must differ from ρ′(x, t) for some x ∈ Ω and t > 0. By (A2), the densities are not equivalent if ∂_t^k ρ(x, t)|_{t=0} ≠ ∂_t^k ρ′(x, t)|_{t=0} for some k ∈ N. Let l denote the smallest integer for which ∇_x d_l(x) ≢ 0, as in Lemma 4.2, and consider the case k = l + 1.
Using (4.3), we can reduce the proof of uniqueness for the density-potential mapping to a proof that the following elliptic PDE has only trivial, i.e., constant, solutions:

∇_x · [ ρ(x, 0) ∇_x d_l(x) ] = 0  for x ∈ Ω.   (4.9)

In that case, ∇_x d_l(x) ≢ 0 together with (4.3) implies (4.8).
To show the uniqueness of the trivial solutions, we have to take the boundary conditions into account. We start with (4.7) from Lemma 4.3. If additionally ρ(x, 0) > 0 for all x ∈ ∂Ω (or if the density diverges), then (4.7) requires that ∇ x d l (x) = 0 for all x ∈ ∂Ω; in other words, d l (x) is bounded. Thus, we have obtained a stronger condition: for all x ∈ ∂Ω. Now, consider the following integral using partial integration. By (4.10), the surface term vanishes, and we obtain By our assumptions, the right-hand side is strictly negative. Therefore, the integral on the left-hand side cannot vanish for all x ∈ Ω, which in turn implies that (4.9) has only trivial solutions with constant d l (x).
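The sign argument at the end of this proof rests on a standard integration-by-parts (Green's) identity, written out below for completeness. The labels Ω, ρ, d_l and n follow the notation above; the displayed form is a generic identity used here as an illustration rather than a verbatim reproduction of (4.11).

```latex
% Green's identity used in the uniqueness argument:
% multiply \nabla\cdot(\rho\nabla d_l) by d_l and integrate over \Omega.
\begin{equation*}
\int_{\Omega} d_l(x)\, \nabla_x \!\cdot\! \bigl[\rho(x,0)\, \nabla_x d_l(x)\bigr] \,\mathrm{d}x
  = \oint_{\partial\Omega} \rho(x,0)\, d_l(x)\, \nabla_x d_l(x) \cdot n(x) \,\mathrm{d}S
  - \int_{\Omega} \rho(x,0)\, \bigl|\nabla_x d_l(x)\bigr|^{2} \,\mathrm{d}x .
\end{equation*}
```

When the surface term vanishes (as enforced by the boundary conditions considered here) and ρ(x, 0) > 0, the right-hand side is non-positive and can only be zero if ∇_x d_l vanishes identically, which is the triviality statement used above.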
Remark 4.5. Theorem 4.4 captures a common case where the density and the flux at the boundary together uniquely specify the external potential and hence all higher-order correlations. This assertion is consistent with the PFT framework, where the density and current together yield a complete statistical description of a time-dependent many-body system [28,27]. However, for non-vanishing densities, our result is less restrictive since, in that case, we only need to fix the normal flux at the boundary. The latter is often already defined by the set up of the system (e.g., as a no-flux boundary condition for bounded domains).
Notice that in the proof of Theorem 4.4 the conditions on the flux and density are only used to obtain (4.10), which in turn implies that the surface term vanishes in (4.11). Therefore, we can immediately formulate a physically less intuitive but mathematically more general theorem (that extends the uniqueness of the solutions to semi-elliptic PDEs with one-body densities that can vanish at the boundary).
Theorem 4.6. Given two many-body systems satisfying assumptions (A1)-(A3) with identical initial and boundary conditions. If the asymptotic behavior of the densities and potentials is such that the corresponding surface term in (4.11) vanishes (condition (4.12)), then we have a unique density-potential mapping, i.e., ρ(x, t) ≡ ρ′(x, t) implies V(x, t) ∼ V′(x, t).

From Theorem 4.6, we can distinguish different cases of uniqueness based on the behavior of ρ(x, t) close to the boundary. As before, condition (4.12) allows for diverging densities if the gradient of the potential vanishes fast enough. In distinction to Theorem 4.4, we can now assert uniqueness even if ρ(x, 0) ≡ 0 for all x ∈ ∂Ω as long as the potential remains bounded. Conversely, we learn in this case that if we want to vary the boundary flux, the potential must diverge.
Remark 4.7. This physically intuitive interpretation holds even for Ω = R d , where the density has to vanish at infinity (because ρ(x, 0) is normalized by the total number of particles N and hence integrable). A violation of criterion (4.12), therefore, requires that at least one of the potentials V (x, t) and V (x, t) diverges (rapidly) at infinity. As long as the external potentials do not diverge (faster than allowed by our criterion), they are uniquely specified by the density.
We expect that our results can be generalized to systems with nonintegrable density profiles, to include such a simple case as the homogeneous bulk or, more interestingly, periodic boundary conditions (which are common for simulations). Indeed, we find consistent results when we discuss explicit examples for all of these scenarios in the following section.
Remark 4.8. Theorem 4.6 can be generalized even further. Given two potentials V(x, t) ≁ V′(x, t), i.e., potentials that are not diffusion equivalent; then the corresponding densities will differ if suitable boundary conditions are imposed on d(x, t), so that (4.9) has only trivial solutions. While this assertion is less explicit than Theorem 4.6, it can be used to choose appropriate boundary conditions for special settings (e.g., Dirichlet boundary conditions).
Loopholes to uniqueness
In the previous section, we derived physically relevant conditions for which the density-potential mapping is unique. If we violate these conditions (e.g., by a vanishing density and diverging potentials), then identical one-body densities can be obtained for different external potentials; see Remark 4.7. The two systems will differ, however, in a divergence-free current field.
A generic procedure to construct such counterexamples is indicated by (4.9). The key idea is to add to the external potential a nontrivial solution of this elliptic PDE. For the ideal gas or, more generally, under the adiabatic approximation, such an additional potential leaves the one-body density ρ(x, t) unchanged.
Proposition 5.1. Assume that the two-body density ρ_2(x, y, t) is a functional of the one-body density ρ(x, t). Let d(x, t) be a (nontrivial) solution of

∇_x · [ ρ(x, t) ∇_x d(x, t) ] = 0

for all x ∈ Ω and t ≥ 0 (which has to violate the boundary condition of Theorem 4.6). Then adding d(x, t) to the external potential does not change the density ρ(x, t) but leads to an (additional) divergence-free one-body current j(x, t) = −Dβρ(x, t)∇_x d(x, t).
Figure 1: Schematics of non-unique density profiles. The same density profile is obtained for non-interacting particles if only the even external potential is applied or if the uneven external potential is added. The latter causes a constant current (as indicated by the arrows at the bottom).
Otherwise, one may obtain inconsistent results. If the example above were to be applied to a radially symmetric potential, it would result in a radially symmetric current that would not change the density profile, but particles would be missing or accumulating at the boundary.
Example 5.3 (Homogeneous bulk density). Even though a constant ρ(x, t) defined on Ω = R d is, strictly speaking, excluded from our setting, the construction principle of Proposition 5.1 and (5.2) still works. It provides an obvious counterexample to uniqueness, namely, a constant force applied to a constant density profile on Ω = R d (even though, strictly speaking, this example is excluded by our condition N < ∞). By adding the potential d(x, t) = v(t)x, we obtain a constant gradient that results in a constant current proportional to v(t). Note that a no-flux boundary condition again implies uniqueness.
Example 5.4 (Periodic density profiles). Our procedure can also be applied to systems with periodic boundary conditions. Similar to the homogeneous bulk density, we obtain a diverging potential on R d in the nonunique case, but the forces remain bounded if the density is strictly positive. Such an example with ρ(x, t) = cos 2 (x 1 ) + ρ 0 and ρ 0 > 0 has already been numerically studied in simulations of interacting particles, going beyond the adiabatic approximation, within the framework of PFT and custom flow [4,5].
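To illustrate how such a counterexample can be constructed in practice, the sketch below implements the one-dimensional construction suggested by Proposition 5.1 for the periodic profile ρ(x) = cos²(x) + ρ₀: in 1D, ∂_x(ρ ∂_x d) = 0 is solved by choosing ρ(x) ∂_x d(x) equal to a constant c, so the added potential gradient is ∂_x d = c/ρ and the induced current −Dβρ ∂_x d = −Dβc is constant and divergence-free. This is a numerical illustration under these assumptions, not a reproduction of the paper's formula (5.2).

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch).
D, beta, rho0, c = 1.0, 1.0, 0.5, 0.3
x = np.linspace(0.0, 2.0 * np.pi, 2001)      # one period of the density

rho = np.cos(x) ** 2 + rho0                  # strictly positive periodic density
grad_d = c / rho                             # choose rho * d'(x) = c  =>  d'(x) = c / rho(x)
d = np.concatenate(([0.0], np.cumsum(0.5 * (grad_d[1:] + grad_d[:-1]) * np.diff(x))))

# Induced extra one-body current j = -D * beta * rho * d'(x) is constant ...
j = -D * beta * rho * grad_d
print("current spread:", j.max() - j.min())  # ~0 up to rounding

# ... and therefore divergence-free: the added contribution leaves the
# density unchanged, since d_t rho = -d_x j = 0 for this term.
dj_dx = np.gradient(j, x)
print("max |d_x j|:", np.abs(dj_dx).max())   # ~0 up to discretisation error
```

Note that the force ∂_x d stays bounded because ρ is strictly positive, while the potential d itself grows without bound over many periods, consistent with the discussion of diverging potentials above.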
Outlook
So far, we have assumed the existence of a well-behaved solution P (x N , t), but a proof of existence can be constructed similarly to our proof of uniqueness. Analogously, van Leeuwen [32] generalized the argument by Runge and Gross [26] in quantum mechanics. We, therefore, expect that our proof can also be generalized, but an additional difficulty arises. The existence of a suitable potential requires the solution to an inhomogeneous PDE analogous to (4.9). The resulting conditions on the density and interaction potential should include, as a special case, the known conditions for systems in equilibrium [3]. Similar questions have recently been discussed in quantum mechanics [35]. Another open problem is to drop the condition of analytic potentials. As mentioned above, a fixed-point approach as in [23,24] could avoid this restriction. A useful generalization would also be to include unnormalizable densities to rigorously treat periodic boundary conditions. Finally, we can generalize the pairwise-interacting passive particles to (i) many-body interactions and marked particles, as well as to (ii) non-conservative forces, such as for active particles. (i) Higher-body interactions lead to more complex average interaction forces but do not change the structure of the hierarchy, so our method of proof should apply. Similarly, our proof should be generalizable to marked particles, where the marks may represent different particle shapes or orientations [22]. (ii) If a known non-conservative force field is added to (2.3), we expect that the corresponding terms drop out similar to (4.2) and (4.4). Thus, the uniqueness of the density-potential mapping equally holds for intrinsically nonequilibrium systems, such as active particles [34].
Proportion of T follicular helper cells in peripheral blood of rheumatoid arthritis patients: a systematic review and meta-analysis
ABSTRACT Introduction: Alterations in the levels and activity of Tfh cells may lead to impaired immune tolerance and autoimmune diseases. The aim of this study was to investigate the proportion and types of Tfh cells in the peripheral blood (PB) of RA patients. Areas covered: Comprehensive databases were searched for studies evaluating the proportion of Tfh cells in the PB of patients with RA compared to healthy controls (HCs). The proportion of Tfh cells in RA patients was significantly higher than in HCs (SMD 0.699, [0.513, 0.884], p < 0.0001). Furthermore, the proportion of Tfh cells in untreated-RA and early-RA patients was markedly greater than in HCs, both when comparisons were made without considering the definition markers and when Tfh cells were defined by the specified definition markers. While the proportion of Tfh cells by all definitions was higher in active-RA compared to HCs, analysis of two definitions, CD4+CXCR5+ and CD4+CXCR5+ICOS+, did not show significant differences. Furthermore, a higher proportion of Tfh cells defined by all definitions and by a specified definition (CD4+CXCR5+PD-1high) was observed when S+RA patients were compared to S−RA patients. Expert opinion: The finding that circulating Tfh cells are highly elevated in RA patients highlights their potential use as a biomarker and a target for RA therapy.
Introduction
Rheumatoid arthritis (RA) is a chronic autoimmune disease with a prevalence of 0.5% to 1% of the population, primarily affecting joint tissues [1]. Typical clinical manifestations include symmetrical inflammation and swelling, in particular of the small joints of the hands and feet, which can lead to joint destruction, deformity and functional disability [1,2]. RA imposes a substantial socio-economic burden on patients and societies [2,3]. Although the exact etiology and pathogenesis of RA are not completely understood, it is well accepted that the disease is multifactorial, caused by multiple genetic, environmental, and microbial factors. These risk factors together trigger uncontrolled immune responses, and development and progression of the disease [2]. Different types of innate and adaptive immune cells, including dendritic cells (DCs), monocytes, macrophages, and antigen-specific T and B lymphocytes, along with cytokines and autoantibodies, contribute to the pathogenesis of RA [3]. Among the immune cells, CD4 + T cells have critical roles in disease induction and progression. Once activated, CD4 + T cells can differentiate into different T helper (Th) subsets including Th1, Th2, Th17, T regulatory, or T follicular helper (Tfh) cells, each of which plays an important role in RA [4].
Tfh cells are a subpopulation of Th cells localized in the B cell follicles of lymph nodes. These cells are essential in the humoral immune response, providing help for B cells to develop germinal centers (GCs), to differentiate into memory B cells and long-lived antibody-secreting plasma cells, and to undergo immunoglobulin class-switching and antibody affinity maturation [5]. Tfh cells can be distinguished from other differentiated CD4 + Th subsets by the expression of the B-cell lymphoma 6 (BCL-6) transcription factor [6], the surface expression of the CXCR5 chemokine receptor [7][8][9], CD40 ligand, programmed cell death protein 1 (PD-1), and inducible costimulatory molecule (ICOS) [5,10], and the production and release of interleukin (IL)-21 [6].
Despite the prominent role of Tfh cells in humoral immunity for protection against infectious pathogens, accumulating evidence has demonstrated that deregulated frequency and activity of Tfh may lead to impaired immune tolerance, generation of high-affinity autoantibodies and hence development of antibody-mediated autoimmune diseases [11][12][13][14]. However, anatomical localization of Tfh cells in the secondary lymphoid tissues limits their routine studies in human patients [5,7]. Recently, studies have introduced a new circulating subpopulation of CD4 + T cells that show both phenotypical and functional characteristics of classical Tfh cells residing in the lymphoid tissues [15,16]. Based on these studies, circulating Tfh (cTfh) cells, similar to the classical Tfh, could promote differentiation of naïve B cell into plasma cells and immunoglobulin secretion through IL-21 production [15]. In addition studies have shown that cTfh cells increase in proportion to their GC counterparts in secondary lymphoid tissues [16]. Thus, they represent circulating compartment of Tfh cells and evaluation of the status of cTfh would reflect the status of their counterpart in GC of secondary lymphoid tissues [15,16]. Following the introduction of cTfh, several human studies have demonstrated the increased frequency of Tfh in the peripheral blood (PB) of patients with different autoimmune diseases including multiple sclerosis (MS), systemic lupus erythematosus (SLE), autoimmune thyroiditis, myasthenia gravis (MG), Sjogren's syndrome (SjS) and RA [17].
Several research groups have explored the potential role of cTfh cells in RA pathogenesis by determining their frequency in the PB of RA patients compared with healthy controls (HCs). While some of these studies have shown an increased frequency of Tfh cells in the PB of patients with RA and a positive correlation with disease scores, other studies have failed to show any difference in the frequency of these cells between RA patients and HCs. Hence, in the present work, we systematically searched and reviewed the available studies documenting the proportion of Tfh cells among CD4+ T cells in the PB of RA patients, and conducted a meta-analysis to elucidate the proportion of Tfh cells in the PB of RA patients compared with HCs.
Search strategy
This study was carried out in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and was registered at the international prospective register of systematic reviews (PROSPERO) (CRD42020163549). We searched the online databases Embase, PubMed, Scopus, and Web of Science (WOS) to identify original articles published until November 2019 reporting the frequency of Tfh cells among CD4+ T cells in the PB of patients with RA.
The search terms used in this meta-analysis were as follows: ('Arthritis, Rheumatoid'[Mesh] OR 'Rheumatoid arthritis' OR RA) AND ('T helper follicular' OR 'T follicular helper' OR 'Follicular T helper' OR 'Follicular helper T' OR 'Tfh'). To reduce the risk of missing any related study, we also manually searched the reference lists of review articles and of the included original articles. The searches and selection of articles were conducted independently by two investigators. Any disagreement was resolved through discussion with a third investigator.
Study selection
Inclusion criteria: 1) original human studies; 2) title or abstract containing the terms 'rheumatoid arthritis' or 'RA' and 'T helper follicular', 'T follicular helper', 'follicular T helper', 'follicular helper T', or 'Tfh'; 3) documenting the frequency/proportion of Tfh cells among CD4+ T cells in the PB of RA patients and HCs; 4) available as a full-text article on the internet (website or PDF); 5) available information regarding the number of patients and HCs. No restriction was applied for the subtype of RA, disease severity score, or the sex or race of the participants. Also, no time or language restrictions were imposed.
Exclusion criteria: 1) non-original studies, review articles, case reports, letters to the editor, or conference abstracts; 2) animal or in vitro studies; 3) studies without a healthy control group; 4) studies without raw data for which data extraction from the graphical results was not feasible. Redundancies among the Embase, PubMed, Scopus, and WOS searches were removed so that each study was counted only once in the meta-analysis.
Data extraction and quality assessment
Two independent researchers carefully reviewed the included studies and extracted data; disagreements were resolved through discussion with a third investigator. The following data were extracted from the eligible articles: author's name, publication year, country where the analysis was performed, number of RA patients and HCs, Tfh definition, and mean and standard deviation (SD) of the frequency of Tfh cells among CD4+ T cells. When Tfh cells were defined by different patterns of markers in a study, data were extracted for all of them. If studies reported mean and standard error (SE), median and range, or median and interquartile range instead of mean and SD, we extracted these data as reported. For articles presenting only graphical results from which it was not possible to extract the data, the raw data (mean and SD) were generously provided by the authors after contact by e-mail. The quality of all included studies was assessed with the Newcastle-Ottawa Quality Assessment Scale (NOS) for case-control studies [18].
Article highlights

• The proportion of Tfh cells defined by all definition markers was higher in the PB of RA patients than in HCs.
• The proportion of Tfh cells with each individual definition was higher in the PB of RA patients than in HCs.
• The Tfh cell proportion in untreated-RA and early-RA patients was greater than in HCs.
• The proportion of Tfh cells defined by all definitions was higher in active RA compared with HCs.
• Seropositive RA patients had a higher Tfh cell proportion compared with seronegative patients.
• These results highlight the potential pathogenic role of Tfh cells in RA and their potential use as a biomarker and a target for RA therapy.

Data analysis

All statistical analyses of the Tfh frequency differences between RA and HC subjects were conducted using STATA version 16.0 (Stata Corp., College Station, Texas). The sample size, mean, and SD of the frequency of Tfh cells were used to calculate the standardized mean difference (SMD) between the RA and HC groups. For studies reporting only graphical results, the graphs were converted to numerical data using GetData Graph Digitizer software version 2.24 (http://getdata-graph-digitizer.com/). When studies reported SE instead of SD, the SD was calculated from the sample size and SE using the formula SD = SE × √n. When studies presented their results as median and range, or median and interquartile range (IQR), the mean and SD were estimated using a standard method [19]. When data from two subgroups had to be combined (for example, the mean and SD of an 'active-RA' group with those of a 'remission-RA' group), the pooled mean and SD were calculated using the standard formula of the Cochrane Handbook for Systematic Reviews of Interventions [20].
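To make these data-preparation steps concrete, the sketch below (our own illustration; the function names and the simplified median/IQR approximation are not taken from the original analysis, which followed reference [19] and the Cochrane Handbook) shows the SE-to-SD conversion, a large-sample approximation of mean and SD from median and IQR, and the pooling of two subgroups.

import math

def sd_from_se(se: float, n: int) -> float:
    """SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def mean_sd_from_median_iqr(q1: float, median: float, q3: float):
    """Approximate mean and SD from median and interquartile range.

    Simple large-sample approximation (mean ~ (q1 + median + q3)/3,
    SD ~ IQR/1.35); the estimator cited in the paper may use slightly
    different, sample-size-dependent coefficients.
    """
    mean = (q1 + median + q3) / 3.0
    sd = (q3 - q1) / 1.35
    return mean, sd

def pool_two_groups(n1, m1, s1, n2, m2, s2):
    """Combine mean/SD of two subgroups (e.g. active-RA and remission-RA)
    following the Cochrane Handbook formula for combining groups."""
    n = n1 + n2
    mean = (n1 * m1 + n2 * m2) / n
    var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2
           + n1 * n2 / n * (m1 - m2)**2) / (n - 1)
    return mean, math.sqrt(var)

# Hypothetical example: pool an active-RA subgroup with a remission-RA subgroup
print(pool_two_groups(20, 3.1, 1.2, 15, 2.4, 0.9))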
The meta-analysis was performed to calculate the SMD of the proportion of Tfh cells among CD4+ T cells in the PB of 1) RA patients versus HCs, 2) untreated RA (u-RA) patients versus HCs, 3) early RA (e-RA) patients versus HCs, 4) active-RA (a-RA) patients versus remission-RA (r-RA) patients, 5) a-RA patients versus HCs, 6) seropositive (S+) RA patients versus seronegative (S−) RA patients, and 7) S+ RA patients versus HCs. Furthermore, since the studies used different patterns of markers to define Tfh cells, subgroup analyses were also performed to estimate the results for each Tfh definition.
To evaluate heterogeneity between the studies, the Q test and the I² statistic were used. For the Q test, a p-value < 0.1 was considered statistically significant, and I² values of 75%, 50%, and 25% were considered evidence of high, moderate, and low heterogeneity, respectively. The SMD with its 95% confidence interval (CI; Hedges' g) was calculated to assess the proportion of Tfh cells; a random-effect model (REM) was used when heterogeneity was high (I² > 50%), and a fixed-effect model (FEM) was chosen when heterogeneity was low or absent (I² ≤ 50%). In addition, publication bias was assessed using a funnel plot in which the x-axis and y-axis show the SMD and SE, respectively. Egger's test was used to evaluate funnel-plot asymmetry, and when asymmetry was present, the 'trim and fill' method was applied to assess the effect of publication bias on the calculated results. A p-value < 0.05 was considered statistically significant except where noted.
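For illustration, the effect-size and heterogeneity computations described above (Hedges' g with its sampling variance, Cochran's Q, I², and fixed- versus random-effect pooling with a DerSimonian-Laird estimate) can be sketched as follows; the actual analysis was run in STATA, so this Python version is only a simplified illustration with made-up study values.

import numpy as np

def hedges_g(n1, m1, s1, n2, m2, s2):
    """Standardized mean difference (Hedges' g) and its approximate variance."""
    df = n1 + n2 - 2
    s_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled
    j = 1.0 - 3.0 / (4.0 * df - 1.0)            # small-sample correction
    g = j * d
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2.0 * (n1 + n2))
    return g, j**2 * var_d

def pool(effects, variances):
    """Fixed-effect estimate, Cochran's Q, I2, and DerSimonian-Laird random-effect pooling."""
    e, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * e) / np.sum(w)
    q = np.sum(w * (e - fixed)**2)
    df = len(e) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    random = np.sum(w_re * e) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return {"fixed": fixed, "Q": q, "I2_percent": i2,
            "random": random, "CI95": (random - 1.96 * se_re, random + 1.96 * se_re)}

# Toy example with three hypothetical RA-vs-HC studies (n, mean, SD per arm)
gs, vs = zip(hedges_g(30, 2.9, 1.1, 25, 2.1, 0.9),
             hedges_g(40, 3.4, 1.5, 35, 2.6, 1.2),
             hedges_g(22, 2.2, 1.0, 20, 2.0, 1.1))
print(pool(gs, vs))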
Literature search
A total of 821 potentially eligible studies were retrieved from the database and manual searches. Of these, after primary screening, 405 and 365 studies were excluded due to duplication and to not satisfying the inclusion criteria, respectively. Fifty-one relevant full-text articles were selected and assessed for eligibility, of which 26 were excluded for the following reasons: lack of a healthy control group (n = 3), evaluation of Tfh cells in vitro (n = 4), no raw data and inability to extract data from the graphical results (n = 2), evaluation of changes in the proportion of Tfh cells in autoimmune diseases other than RA (n = 1), reporting only the absolute number of Tfh cells (n = 2), overlapping samples with other studies (n = 2), reporting only Tfh subsets (Tfh-1, -2, and -17) (n = 2), and a letter to the editor. An additional 10 articles were irrelevant and did not evaluate the proportion of Tfh cells; thus, a total of 25 articles were finally included in the meta-analysis (Figure 1).
Characteristics of eligible studies
Of the included articles, 14 were performed in China, 4 in Japan, 2 in Spain, 1 in Sweden, 1 in Italy, 2 in the USA, and 1 in Argentina. In total, the eligible articles included 1041 RA patients and 610 HCs. According to the NOS risk-of-bias assessment, all included studies had a score of 4-8; most were of high quality (NOS score 7 or 8) and fewer were of moderate quality (NOS score 4-6). The included studies and their basic characteristics are listed in Table 1.
Pooled results
In the initial step, we carried out a meta-analysis comparing the proportion of Tfh cells between the RA and HC groups regardless of the markers used to define Tfh cells. Some of these studies reported a significant increase in the proportion of Tfh cells in the PB of RA versus HC subjects [21-33], some showed no differences [34-38], some revealed both an increase in RA and no difference between the two groups depending on the definition markers [39-42], and others reported a non-significant decrease in the Tfh cell proportion in the RA group [43,44]. As most articles evaluated more than one Tfh definition in the same population, the numbers of patients and HCs were adjusted according to the number of definitions to prevent skewing of the results; the same adjustment was applied for the other analyses in the following sections. Based on the pooled result of the meta-analysis, RA patients had a significantly higher proportion of cTfh cells compared with HCs (SMD 0.699, [0.513, 0.884], p < 0.0001), although a high level of heterogeneity (I² = 64.57%, p < 0.0001) was observed between studies (Figure 2).
Subgroup analysis
In the next step, we performed subgroup meta-analyses based on the different definition markers used for Tfh cells. It should be noted that all Tfh cells express CD3 and, due to previous antigen exposure, all are CD45RA− and CD45RO+ memory T cells. Therefore, CD3 and CD45RA/RO were not taken into consideration for subgrouping and meta-analysis. The results of the subgroup analysis based on Tfh definitions are summarized in Table 2.
We also analyzed the studies in which the 'CD4+CXCR5+' Tfh cells co-expressed PD-1 and ICOS (CD4+CXCR5+PD-1+ICOS+ cells). From four studies, we removed two in which high expression of PD-1 and/or ICOS was used for Tfh cell evaluation. According to the pooled SMD result, a significant increase in the proportion of Tfh cells co-expressing ICOS and PD-1 was found in RA patients in comparison with HCs (SMD 0.67, [0.31, 1.04], p = 0.0003, I² = 0%, p = 0.518) (Supplementary Fig. 7).
Finally, we assessed the Tfh cell proportion in the studies in which 'CD4+CXCR5+CCR7low PD-1high' was used to identify Tfh cells. Of these studies, two reported a markedly higher proportion of Tfh cells in patients with RA [33,40], and one reported no difference between the RA and HC groups [35]. Our subgroup meta-analysis revealed a significant increase in the proportion of 'CD4+CXCR5+CCR7low PD-1high' cells in the PB of RA patients in comparison with HCs (SMD 0.68, [0.4, 0.96], p < 0.0001, I² = 4.93%, p = 0.34) (Supplementary Fig. 8).
The proportion of Tfh in PB of untreated-RA patients as compared with HC
Studies in which patients were taking steroids or immunosuppressive drugs were removed from this part of the meta-analysis, since treatments were not uniform across the studies. A total of 12 articles covering 21 studies evaluated the proportion of Tfh cells in patients who were not taking any medication at the time of sample collection. Of these, some studies included patients who had not received any drug for at least 1 month [21], 2 months [30,34], or 3 months [33,35,41] before blood sample collection, and 6 articles covering 12 studies included patients with no treatment history [22,25,26,28,37,42]. The proportion of Tfh cells was therefore compared between RA patients without immunosuppressive treatment and HCs. Pooled analysis, without considering the defined Tfh phenotypes, revealed a significantly higher Tfh cell proportion in u-RA compared with HC subjects (SMD 0.671, [0.461, 0.882], p < 0.0001, I² = 52.73%, p = 0.003) (Figure 3). Egger's test showed no publication bias (p = 0.32). Furthermore, we performed subgroup analyses on the studies that defined Tfh cells as 'CD4+CXCR5+', 'CD4+CXCR5+PD-1+', 'CD4+CXCR5+PD-1high', 'CD4+CXCR5+ICOShigh', 'CD4+CXCR5+PD-1+ICOS+', and 'CD4+CXCR5+CCR7low PD-1high' (Supplementary Fig. 9 A-F). As there was at most one study per group for the other definitions, no subgroup meta-analysis could be performed for them. In addition, based on Egger's test, no evidence of publication bias was found for these studies (p > 0.1, Table 3).
The proportion of Tfh cells in PB of early RA compared with HC
We then questioned whether the proportion of Tfh cells would differ between patients and HCs when the stage of the disease course was taken into consideration. We included 3 articles (Table 1) (Supplementary Fig. 10 A, B).
The proportion of Tfh in the PB of patients with active RA, patients in remission, and HCs
We next asked whether the proportion of Tfh cells among blood CD4+ T cells differs between patients with active RA (a-RA) and those in remission (r-RA). Eight studies compared the proportion of Tfh cells between a-RA and r-RA, of which 4 reported a higher proportion of Tfh cells in a-RA, 3 reported no difference between the two groups, and 1 reported a higher Tfh cell proportion in r-RA (Table 5). The pooled SMD of the Tfh proportion, regardless of its definition, revealed a trend toward a significantly higher Tfh cell proportion in the PB of a-RA in comparison with r-RA (SMD 0.275, [−0.012, 0.563], p = 0.06, I² = 0%, p = 0.432) (Figure 5A). No risk of publication bias was found based on Egger's test (p = 0.155). As evident from Table 5, different sets of markers were used by the authors to evaluate the Tfh cell proportion; thus, we also performed subgroup meta-analyses of these studies to compare a-RA with r-RA. Two studies used 'CD4+CXCR5+' [21,36] and two used 'CD4+CXCR5+ICOS+' [36,39]. The four remaining studies evaluated the proportion of 'CD4+CXCR5+PD-1+' [39], 'CD4+CXCR5+PD-1high' [21], 'CD4+CXCR5+CCR7low PD-1high' [33], and 'CD4+CXCR5+Foxp3−' [39] cells and were removed, because a meta-analysis cannot be performed on a single study. Study identifiers (α, β, γ) were used to distinguish each study (Table 2) (Supplementary Fig. 11 A, B). We also performed the same analysis to compare a-RA patients with the HC group. A total of 8 studies compared the proportion of Tfh cells between the a-RA and HC groups. Of these, 6 showed a significant increase in the Tfh cell proportion in a-RA, and the 2 remaining studies showed no differences (Table 5) (Table 6, Supplementary Fig. 12 A, B).
The proportion of Tfh in PB of seropositive RA patients versus seronegative RA patients
Seropositive patients were defined as anti-citrullinated protein antibody (ACPA)-positive and/or rheumatoid factor (RF)-positive. We evaluated whether the Tfh cell proportion among PB CD4+ T cells differs between S+ RA and S− RA patients. Five studies assessed the proportion of Tfh cells in S+ RA compared with S− RA patients, of which three showed a higher proportion of Tfh cells in S+ RA and two showed no difference (Table 7). Pooled meta-analysis of all 5 studies, regardless of the Tfh definition, revealed a significant increase in the proportion of Tfh cells in the PB of patients with S+ RA compared with S− RA (SMD 0.637, [0.271, 1.003], p = 0.0006, I² = 0%, p = 0.831) (Figure 6A). In addition, based on Egger's test, no publication bias was found (p = 0.932). It should be noted that 4 of the 5 studies evaluated the Tfh cell proportion in 'early RA' patients. In addition, since only two studies used the same pattern of Tfh definition (CD4+CXCR5+PD-1high) to compare S+ RA and S− RA patients, we performed a subgroup meta-analysis on these studies. The sub-analysis revealed a higher proportion of Tfh cells in the PB of S+ RA compared with S− RA patients when Tfh cells were defined as 'CD4+CXCR5+PD-1high' (SMD 0.67, [0.26, 1.07], p = 0.001, I² = 0%, p = 0.93) (Table 8, Supplementary Fig. 13). Table 7. Characteristics of the studies that compared the Tfh cell proportion between S+ RA and S− RA.
There was an insufficient number of articles to compare the Tfh cell proportion between S+ RA patients and HCs.
Geographic location of studies and Tfh cells proportion in PB of RA patients
Finally, we assessed the proportion of Tfh cells based on the geographical regions where the studies were performed. Regarding the geographic location of the included studies comparing RA with HC, 14 were performed in China, 4 in Japan, 2 in Spain, 1 in Sweden, 1 in Italy, 1 in the USA, and 1 in Argentina (Table 1). We divided the studies into two groups, Asian and non-Asian. There were 18 articles covering 30 studies on Asian populations (Japan and China) and 6 articles covering 9 studies on non-Asian populations (Table 1). The pooled SMD revealed a higher Tfh cell proportion in RA patients compared with HCs in the Asian populations (SMD 0.849, [0.649, 1.049], p < 0.0001, I² = 58.37%, p < 0.0001) (Supplementary Fig. 14). There was also a higher proportion of Tfh cells in patients compared with HCs in the non-Asian populations (SMD 0.225, [0.017, 0.432], p = 0.033, I² = 41.76%, p = 0.088) (Supplementary Fig. 15). There was no evidence of publication bias in either the Asian or the non-Asian analysis based on Egger's test (p = 0.24 and 0.75, respectively).
Discussion
The main function of Tfh cells is to help B lymphocytes proliferate and differentiate, to promote antibody production, and to regulate humoral immunity [5]. Changes in chemokine receptors enable Tfh cells to localize near and interact with B cells and to promote GC formation and class-switched, high-affinity antibody production [5,8]. However, Tfh deregulation can drive abnormal germinal centers, B-lymphocyte differentiation and survival, and autoantibody generation, and thus could be related to the development of autoimmune diseases [11,46]. Because of the difficulty of sampling human lymphoid tissue [5,7], and since PB Tfh cells are similar to GC Tfh cells in terms of phenotype and B-lymphocyte helper function [15], the analysis of cTfh cells in patients has become an important and clinically significant alternative strategy [17]. RA is a systemic autoimmune disease characterized by the production of a large number of autoantibodies, such as ACPAs, RF, and others [3], and this autoantibody production may be related to Tfh cell abnormalities. In this line, while several studies have evaluated the frequency of Tfh cells in the PB of patients with RA compared with HCs, conflicting conclusions have been reached in some cases. The purpose of this work was to systematically evaluate the proportion of Tfh cells in the PB of RA patients compared with HCs to clarify the proportion of cTfh cells in RA patients.
Meta-analysis of all included studies, neglecting the definition markers, treatment status, disease activity status, and serological status of the RA patients, revealed a significantly higher Tfh cell proportion in the PB of patients with RA in comparison with HCs. We therefore also analyzed the results of the previous studies based on these criteria. Our sub-analyses revealed that subgrouping the studies according to these criteria was associated with a reduction in heterogeneity, which decreased to moderate, low, or even absent in most analyses (Tables 2, 3, 4, 6, and 8).
In the subgroup analyses, we first performed the meta-analysis based only on the definition markers used to identify Tfh cells. According to the results, the proportion of Tfh cells was significantly higher in the PB of RA patients compared with HCs for all definition markers (Table 2). Among the different definitions, 'CD4+CXCR5+PD-1high' and then 'CD4+CXCR5+PD-1+' cells showed the largest SMDs, whereas 'CD4+CXCR5+' cells, although significantly increased in RA, showed the lowest SMD, suggesting the importance of considering the expression level of PD-1 in RA patients.
PD-1 can promote GC B cell survival and the formation of high-affinity, long-lived plasma cells through its interaction with programmed death-ligand 1 (PD-L1) and PD-L2 on the surface of GC B cells [47,48]. Studies have shown that the expression of PD-1 is elevated on Tfh cells in patients with autoimmune diseases including RA [39,49], and PD-1high Tfh cells have a stronger ability to activate B lymphocytes [49]. Consistently, a positive correlation has been found between the expression of PD-1 on Tfh cells and the disease activity of RA [39]. There is also a significant positive correlation between the serum level of soluble PD-1 (sPD-1) and the frequency of cTfh cells, the titer of autoantibodies, and the disease activity score (DAS) in RA patients [29,50]. Indeed, the inhibitory function of the increased membrane-bound PD-1 on Tfh cells is blocked in the presence of its soluble form, while its humoral assistance to antibody-producing cells remains intact or is even hyperactivated. The results of our meta-analyses revealed a significantly higher proportion of Tfh cells in all comparisons that included PD-1high in the Tfh definition (RA versus HC, u-RA versus HC, and S+ RA versus S− RA). Furthermore, based on the subgroup analyses, the definitions that included PD-1 ('CD4+CXCR5+PD-1+' and especially 'CD4+CXCR5+PD-1high') had the largest SMDs in comparison with the other definitions, suggesting a stronger association between the proportion of these phenotypes and RA. Thus, in RA patients, higher expression of PD-1 on Tfh cells could be associated with a higher proportion of these cells.
ICOS is another surface molecule that has a pivotal role in the development of Tfh cells and in the production of IL-21, the signature cytokine of Tfh cells [51]. In addition, ICOS signaling is essential for the maintenance of the anatomical localization of Tfh cells in B cell follicles by preserving the expression of the homing receptor pattern on Tfh cells [52,53]. ICOS also promotes the survival and functional maturation of GC B cells [54]. Furthermore, Tfh cells with the highest expression level of ICOS have the greatest capacity to induce IgG production [55]. Our study demonstrated that Tfh cells expressing ICOS (CD4+CXCR5+ICOS+ and CD4+CXCR5+ICOShigh) were more frequent in RA than in HCs (Table 2). In addition, the 'CD4+CXCR5+ICOShigh' proportion showed a stronger association with RA than the 'CD4+CXCR5+ICOS+' proportion, as was evident from the SMDs (Table 2), which could be explained by the above-mentioned roles of the ICOS molecule. Considering the importance of ICOS in antibody production, it is unfortunate that there were not at least two studies with CD4+CXCR5+ICOS+/high definitions available for the serostatus-based subgroup analyses of the Tfh cell proportion.
We also analyzed the studies that included only untreated patients to exclude the effect of immunosuppressive drugs. Higher proportions of Tfh cells were observed in u-RA compared with HCs, both regardless of the Tfh cell definition and when the different patterns of definition markers were considered. Among the different definitions, the strongest association was found for 'CD4+CXCR5+PD-1high', and 'CD4+CXCR5+' had the lowest SMD, again suggesting the importance of PD-1 in defining Tfh cells.
The proportion of Tfh cells was also evaluated based on the stage of the disease course. Studies have shown that the immunological aberrations during the first few months after disease onset differ from those during later phases [56]. Given the importance of the early diagnosis of RA, and to clarify the status of Tfh cells in the early phase of the disease, we also compared the Tfh cell proportion in e-RA patients with that of HCs. All e-RA patients included in the selected articles were treatment naïve. The pooled result across all Tfh definitions, as well as the subgroup analyses of Tfh cells defined as 'CD4+CXCR5+' and 'CD4+CXCR5+PD-1+', showed a greater Tfh cell proportion in untreated e-RA compared with HCs, suggesting a pathogenic role of Tfh cells in the initial stage of RA development.
S+ RA is defined as positive for RF or ACPAs, and seropositivity is associated with more severe disease [3]. The pooled meta-analysis results, regardless of the Tfh definition, demonstrated that S+ RA patients had a significantly higher proportion of Tfh cells in their PB compared with S− RA patients. Marker-based subgroup analysis also showed that the proportion of Tfh cells was significantly elevated in S+ RA in comparison with S− RA patients when Tfh cells were defined as CD4+CXCR5+PD-1high. This further emphasizes the previously mentioned importance of the PD-1 expression level for the frequency of Tfh cells and autoantibody production in RA patients. Altogether, these results demonstrate that the production of autoantibodies in RA patients is associated with the proportion of CD4+CXCR5+PD-1high Tfh cells, providing evidence for a connection between seropositivity and the proportion of these cTfh cells.
Looking at the different definitions used across all comparisons, the largest SMDs for the proportion of Tfh cells belonged to Tfh cells expressing PD-1 (PD-1+ and especially PD-1high), whereas the weakest association was found for 'CD4+CXCR5+' cells.
We also evaluated the quality and accuracy of the data conversion performed with the GetData Graph Digitizer software. The authors of some of the included articles reported numerical data in the text or in tables in addition to presenting them as graphs. To evaluate the accuracy and precision of the data conversion, we converted these graphs to numerical data and, to ensure that the software covers all graph types, we converted different types of graphical results. By comparing the digitizer-extracted data with the numerical data reported by the authors, we assessed the quality of our conversion. Based on this comparison, there was a high degree of confidence, because the extracted data were almost identical to the originally reported data. Furthermore, the GetData Graph Digitizer software has been used as a reliable tool for the conversion of graphical data to numerical data in other studies [57-61].
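As a simple illustration of such a check (the values below are hypothetical and not taken from any included study), the agreement between digitized and reported values can be quantified as a per-study relative error:

# Hypothetical check of digitized vs. reported means (% of CD4+ T cells)
reported  = [1.8, 3.2, 4.5, 2.1]
digitized = [1.79, 3.23, 4.48, 2.12]

rel_err = [abs(d - r) / r * 100 for d, r in zip(digitized, reported)]
print([f"{e:.1f}%" for e in rel_err])          # per-study relative error
print(f"max deviation: {max(rel_err):.1f}%")   # worst-case disagreement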
In the present study, a high level of heterogeneity, as well as a wide range in the percentage of Tfh cells in the PB of both HCs and RA patients, was observed. These could be due to heterogeneity in diagnostic criteria, disease severity and activity, the geographical regions of the study populations, the use of different sets of markers to define Tfh cells, and different flow cytometry gating strategies, all of which may affect the range of Tfh cell percentages for both patients and HCs.
The proliferation of autoreactive B cells capable of generating high-affinity autoantibodies contributes to the pathology of autoimmune diseases and has therefore led to the consideration of Tfh cells as possible players in their pathogenesis. It should be noted that different autoimmune diseases, such as RA, SLE, MS, SjS, and MG, are associated with different, disease-specific profiles of autoantibodies [17]. While the autoantibody profiles involved in the pathogenesis of these diseases differ, what is common to these disorders is the help provided by Tfh cells and their pathologic role in the generation of disease-specific autoreactive antibody-producing B cells. Thus, in different autoimmune diseases, disease-specific Tfh cells help disease-specific antibody-producing B cells. In the present study, however, the exact specificity of the Tfh cells is not essential; what matters is their altered frequency and their potential pathogenic role in providing help for the development of RA-specific antibody-producing B cells.
Studies have shown that Tfh cells have a specific TCR repertoire determined by the cognate antigen [62]. In autoimmune diseases, an altered T cell repertoire can contribute to disease development and can also be used as a novel diagnostic marker. For example, a recent study demonstrated that the T cell repertoires of SLE, RA, and HC groups differ and can be used as novel diagnostic markers [63,64]. Consistently, an animal study demonstrated that the Tfh repertoire differs in lupus-prone mice compared with control mice, and that this repertoire alteration is associated with the development of the disease [64]. Considering the importance of Tfh cells for the generation of such autoantibodies and the specificity of the Tfh cell repertoire, the evaluation of Tfh cell frequency or repertoire alteration could potentially be used as a predictive marker, a prognostic indicator, or a guide for selecting therapy.
The present work has some limitations. First, as disease duration was inconsistent across the studies, we could not consider the duration of the disease in the analysis. Second, the disease activity scores of the RA patients were not uniform across the studies. Third, due to the heterogeneity of the drugs used to treat the patients, we could not evaluate the effect of treatment on the proportion of Tfh cells in RA. Fourth, an insufficient number of studies with particular Tfh definitions prevented their evaluation in some of the comparisons. Fifth, among the patterns of Tfh cell definition, the CD4+CXCR5+ population may contain CD4+CXCR5+FoxP3+ T follicular regulatory (Tfr) cells. Unfortunately, the number of studies evaluating the proportion of these cells was not sufficient to perform a meta-analysis similar to that of Tfh cells. Thus, the results for this definition should be interpreted with caution, as the CD4+CXCR5+ population may contain Tfr cells.
Conclusion
In conclusion, this is the first systematic review and meta-analysis to assess the proportion of PB Tfh cells in RA patients. Our analyses demonstrate that, compared with HCs, the proportions of Tfh cells in general, and in particular of Tfh cells expressing PD-1 (PD-1high and then PD-1+), are markedly elevated in patients with RA. Therefore, this cell type likely plays a pathogenic role in RA and could potentially be used as a biomarker for diagnosis or prognosis, or as a target for treatment.
Declaration of interest
The authors have no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.
Reviewer disclosures
Peer reviewers on this manuscript have no relevant financial or other relationships to disclose.
Creation of an inflationary universe out of a black hole
We discuss a two-step mechanism to create a new inflationary domain beyond a wormhole throat which is created by a phase transition around an evaporating black hole. The first step is creation of a false vacuum bubble with a thin-wall boundary by the thermal effects of Hawking radiation. Then this wall induces a quantum tunneling to create a wormhole-like configuration. As the space beyond the wormhole throat can expand exponentially, being filled with false vacuum energy, this may be interpreted as creation of another inflationary universe in the final stage of the black hole evaporation.
Inflation in the early universe provides answers to a number of fundamental questions in cosmology such as why our Universe is big, old, full of structures, and devoid of unwanted relics predicted by particle physics models [1]. Furthermore, despite the great advancements in precision observations of cosmic microwave background (CMB) radiation, there is no observational result that is in contradiction with inflationary cosmology so far [2,3].
Inflationary cosmology has also revolutionized our view of the cosmos: our Universe may not be the one and only entity, but there may be many universes. Indeed, already in the context of the old inflation model [4,5], Sato and his collaborators found possible production of child (and grandchild...) universes [6-8].
Furthermore, if the observed dark energy consists of a cosmological constant Λ, our Universe will asymptotically approach de Sitter space, which may up-tunnel to another de Sitter universe with a larger vacuum energy density [9-11] and induce inflation again, repeating the entire evolution of another inflationary universe. In such a recycling universe scenario, the Universe we live in may not be of the first generation, and we may not need a real beginning of the cosmos from an initial singularity [12].
In this context, so far only the phase transition between two pure de Sitter spaces has been considered. However, phase transitions that we encounter in daily life or in laboratories are usually induced around impurities which act as catalysts or boiling stones. In cosmological phase transitions, black holes may play such a role. This issue was pioneered by Hiscock [13]. More recently, Gregory, Moss and Withers revisited the problem [14]. They observed that the black hole mass may change in the phase transition and calculated the Euclidean action taking conical deficits into account [14-16].
In this manuscript we report the effect of a black hole on up-tunneling, assuming that the high energy theory of elementary interactions accommodates a false vacuum with energy density $U = M_X^4 \equiv 3M_{Pl}^2 H^2$, where $M_{Pl}$ is the reduced Planck scale, and that a transition between this state and the current vacuum state is possible through thin-wall bubble nucleation with surface tension $\sigma$ (see Fig. 1). We assume the energy scale $M_X$ is somewhat smaller than the typical grand unification scale $M_{GUT} \sim 10^{16}$ GeV on the basis of the constraints imposed on the energy scale of inflation by the B-mode polarization of the CMB [17]. As a result we show that as the black hole mass decreases to $\sim M_{Pl}^3/M_X^2$ due to Hawking radiation [18,19], a false vacuum bubble may be spontaneously nucleated, creating a wormhole-like configuration. Beyond the throat is a false vacuum state which inflates to create another big universe. One may then regard the final fate of an evaporating black hole as actually being another universe.
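As a rough numerical orientation (our own illustration, not from the paper: an order-one prefactor is omitted and the values of $M_X$ are arbitrary choices below the GUT scale), the critical mass scale $\sim M_{Pl}^3/M_X^2$ can be converted to laboratory units:

# Order-of-magnitude estimate of M_c ~ M_Pl^3 / M_X^2 (order-one prefactor omitted)
M_PL_GEV = 2.4e18        # reduced Planck mass in GeV
GEV_TO_GRAM = 1.78e-24   # 1 GeV/c^2 in grams

for m_x in (1e15, 1e16):                  # assumed false-vacuum scales in GeV
    m_c_gev = M_PL_GEV**3 / m_x**2
    print(f"M_X = {m_x:.0e} GeV  ->  M_c ~ {m_c_gev:.1e} GeV ~ {m_c_gev * GEV_TO_GRAM:.1e} g")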
We study how such a configuration may be created from an initially Schwarzschild geometry with mass parameter $M_+$ by calculating the Euclidean actions of the initial and final configurations. Since the energy scale of the false vacuum is presumably much larger than that of the current dark energy, we neglect the latter. To be more specific, we consider the case where a false vacuum bubble is nucleated around the aforementioned Schwarzschild black hole and its radius $R$ expands to create a big inflationary domain, leaving a black hole with mass $M_-$ in the center, which may be different from $M_+$.
After the bubble nucleation, the inner geometry, labeled with a suffix $-$, is Schwarzschild–de Sitter space, which is connected with the outer Schwarzschild geometry, labeled by a suffix $+$, by a thin-wall bubble with surface tension $\sigma$. Since such a local process cannot change the outer geometry, it must remain Schwarzschild spacetime with mass $M_+$; the inner and outer metrics are the corresponding static, spherically symmetric line elements. We describe the wall trajectory in terms of the local coordinates $(t_\pm(\tau), r_\pm(\tau), \theta, \varphi)$ on each side, as functions of the proper time $\tau$ of an observer on the wall, normalized so that the wall four-velocity is a unit timelike vector, where a dot denotes a derivative with respect to $\tau$. We take the radial coordinates so that the radius of the bubble is given by $R = r_+ = r_-$ in both the inner and outer coordinates. The evolution of the bubble wall is described by Israel's junction condition [22] (see also [14,20,21]), from which we find that the wall radius satisfies an equation analogous to the energy conservation equation of a particle in a potential $V(z)$.
Here dimensionless coordinate variables are introduced, and, as seen in Fig. 2, the potential $V(z)$ has a concave shape with a maximum $V(z_m) \equiv V_{\max}$. In Eq. (8), one must take a positive (negative) sign for $s < 1$ ($s > 1$), respectively. In this system, obviously, a Euclidean solution is possible if and only if $E \leq V_{\max}$. Let us concentrate on the case $E = V_{\max}$, where there is a Euclidean solution of a static bubble, since $E$ decreases in accordance with the decrease of the original black hole mass $M_+$ due to Hawking radiation. We calculate the Euclidean action of the instanton. This bubble is unstable in Lorentzian spacetime and may start expanding or contracting with equal probability after nucleation. We are of course interested in the case where the bubble wall expands after nucleation.
There are four relevant parameters in this system, namely $\chi$, $\gamma$, $M_+$, and $s$. Among them, $\chi$ and $\gamma$ are determined by the underlying high energy field theory. For the static bubble configuration, we find that only the range $s < 1$ is relevant, and from $E = V(z_m)$ and $V'(z_m) = 0$ we can express $M_+$ and $s$ as monotonic functions of $v \equiv z_m^3$. From this analysis alone, one might think that $s$ could take arbitrarily small values down to $s = 0$. This is not the case, however, because the requirement that time must proceed in the same direction both inside and outside the wall, or in other words that $\beta_+$ and $\beta_-$ must have the same sign, imposes a nontrivial constraint on $s$ [14]. For $s < 1$, in which we are interested, we find $\beta_+ > 0$ for $z > 1$ and $\beta_+ < 0$ for $z < 1$. On the other hand, only for $z > (1 - \gamma^2/2)^{-1/3} \equiv z_c$ do we have $\beta_- < 0$. Thus, in order for $\beta_\pm$ to have the same sign in the region $z \geq z_m$, where the nucleated bubble can expand, we must satisfy $z_m > z_c$, or $v > (1-\gamma^2/2)^{-1} > 1$. We then obtain bounds on $M_+$ and $s$ from Eqs. (9) and (10).
We also find Thus physically relevant expanding bubble nucleation is possible only for β + < 0 and β − < 0 satisfying the above bounds. It has been shown in [20] that in this case the trajectory of the bubble wall exists in region IV on the Penrose diagram (Fig. 4), that is, a wormholelike configuration is created and the false vacuum bubble exists on the other side of the throat (Fig. 4-(c)).
Let us now calculate the Euclidean action corresponding to the static bubble configuration following Gregory, Moss, and Withers [14], according to whom the Euclidean action with a bubble, $I_\bullet$, may be divided into the following components.
Here $I_-$ and $I_+$ denote the contributions from the inner and outer bulk, respectively, and $I_W$ denotes that of the domain wall. Finally, $I_B$ represents the contribution of conical deficits, which was absent in Hiscock's analysis [13]. Explicit manipulation yields an expression in which $A_-$ and $A_+$ denote the horizon areas of a Schwarzschild–de Sitter black hole with mass $M_-$ and of a Schwarzschild black hole with mass $M_+$, respectively, and the suffix E indicates the Euclidean time.
The first and second terms, which are identical to the black hole entropy, are due to the conical deficits. It has recently been shown [23] that they are present even if we adopt the Fischler–Morgan–Polchinski approach [24,25] to calculate the transition rate using the WKB approximation, which justifies the Gregory–Moss–Withers type expression for the bubble nucleation rate [14], where $I_{\rm Sch}$ is the Euclidean action of the Schwarzschild black hole with mass $M_+$. For the case of the static bubble, we find that the last term on the right-hand side of (16) vanishes [14]. As for the first term, the gravitational radius $r_{g-}$ of the Schwarzschild–de Sitter black hole with mass $M_-$ is obtained by solving the corresponding horizon condition (18). The other positive solution of (18), $r_{c-}$, which would correspond to the cosmological event horizon in the case of genuine Schwarzschild–de Sitter space, is found to be larger than the bubble radius; therefore, there is no cosmological horizon at this stage. This is why we have only black hole entropy terms in (16), unlike [14]. It is interesting to note that in the limit $z_m = z_c$ ($M_+ = M_c$), we find $r_{g-} > r_{g+}$. Once we take $z_m > z_c$ ($M_+ < M_c$), the proper hierarchy $r_{g-} < R < r_{c-}$ is maintained, but the inequality $r_{g-} > r_{g+}$ still holds (solid lines in Fig. 3).
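As a numerical illustration of this horizon structure (our own sketch, assuming the standard Schwarzschild–de Sitter metric function $f(r) = 1 - 2GM/r - H^2 r^2$, since the paper's Eq. (18) is not reproduced here; parameter values are arbitrary and units are geometrized):

import numpy as np

def sds_horizons(M, H, G=1.0):
    """Positive roots of 1 - 2GM/r - H^2 r^2 = 0 for Schwarzschild-de Sitter.
    Returns (r_g, r_c): black-hole and cosmological horizon radii, when they exist."""
    # Multiplying by r gives the cubic H^2 r^3 - r + 2GM = 0
    roots = np.roots([H**2, 0.0, -1.0, 2.0 * G * M])
    real = sorted(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
    return tuple(real)   # empty tuple or (r_g, r_c)

# Arbitrary illustrative values (two positive roots exist while 9 G^2 M^2 H^2 < 1)
print(sds_horizons(M=0.1, H=0.5))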
As a result of the above analysis, we find that the difference $I_\bullet - I_{\rm Sch}$ takes a negative value, because $M_-$ takes a finite value due to the constraint (13). This may suggest spontaneous nucleation of a bubble soon after the mass of the original black hole falls below the critical value $M_c$. This result may be better interpreted from a thermodynamic point of view [26,27]. As the terms corresponding to the energy [28] are absent in both $I_\bullet$ and $I_{\rm Sch}$, the transition may be determined by the increase of the entropy by $A_-/4G$. We can then sketch the following scenario of cosmic evolution. Typical astrophysical black holes with mass $\sim 10 M_\odot$ will evaporate in $\sim 10^{67}$ years from now. As the mass falls below the critical value $M_c$, a false vacuum bubble is spontaneously nucleated with the radius (21). It is unstable in Lorentzian spacetime, so the bubble starts expanding with probability 1/2, and a wormhole-like configuration is then realized. The space on the other side of the throat starts inflation to create an exponentially large domain causally disconnected from our patch of the universe. If inflation is appropriately terminated and followed by reheating, another big bang universe will result there. For this purpose the old inflation model [4,5] with thin-wall bubble nucleation does not work, but we may make use of the results of open inflation models [29-31], which can also realize an effectively flat universe. Throughout these processes, the outer geometry remains Schwarzschild space with mass parameter $M_+$, so those who live there do not realize that a black hole in their universe has created a child universe. The above result may also suggest that our Universe may have been created from a black hole of a previous generation in the cosmos.
Extracellular volume fraction measurement correlates with lymphocyte abundance in thymic epithelial tumors
Background Recent advances in tissue characterization with parametric mapping imaging have the potential to provide a novel biomarker for histopathologic correlation in thymic epithelial tumors (TETs). The purpose of our study was to evaluate MRI T1 mapping with calculation of the extracellular volume (ECV) fraction for histologic correlation with thymic epithelial tumors based on lymphocyte abundance. Methods A retrospective study including 31 consecutive patients (14 men and 17 women; median age, 56 years; interquartile range, 12 years) with TETs was performed. The T1 values and ECV were assessed using quantitative MRI mapping techniques. The Mann-Whitney U test, Kruskal-Wallis H test, and receiver operating characteristic curve analyses were used to assess discrimination between different types of TETs based on lymphocyte abundance. Results Extracellular volume was significantly higher in TETs with sparse lymphocytes, including type A, type B3, and thymic carcinoma, compared with those with abundant lymphocytes, including type B1, B2, and AB thymomas (42.5% vs 26.9%, respectively; p < 0.001). Extracellular volume was significantly higher in thymic carcinoma compared with low-grade and high-grade thymomas (48.6% vs 31.1% vs 27.6%, respectively; p = 0.002). Conclusions T1 mapping with calculation of the extracellular volume (ECV) fraction correlates with the WHO histologic classification of thymic epithelial tumors based on lymphocyte abundance.
Introduction
Thymic epithelial tumors (TETs), including thymoma and thymic carcinoma, show a broad spectrum of histologic features and oncologic behavior. Several classifications have been proposed to correlate the histopathology with the clinical course of TETs and to reflect their invasiveness and prognosis. The World Health Organization (WHO) histologic consensus classification, proposed in 1999 and revised in 2004, is the currently advocated classification. It represents both the clinical and the functional characteristics of TETs and hence contributes to the clinical assessment and treatment of patients [1,2].
In the WHO classification, thymomas can be divided into those with spindled neoplastic epithelial cells (A, AB) and those with epithelioid neoplastic epithelial cells (B1-B3). Further subdivision depends on the neoplastic epithelial cell and non-neoplastic immature T-cell components: in type A and B3 thymomas, there is a paucity or even lack of immature T-cells throughout the densely packed spindle cells (type A) or sheets of polygonal tumor cells (type B3), whereas there is an abundance of immature T-cells together with tumor cells in type AB, B1, and B2 thymomas. Type AB, B1, and B2 thymomas show an abundance of immature lymphocytes ("thymocytes") either diffusely (all type B1 and B2, rare type AB thymomas) or focally (most type AB thymomas) [3]. Accordingly, TETs were divided into lymphocyte-abundant and lymphocyte-sparse subgroups based on histopathological findings. The lymphocyte-sparse group contains type A and B3 thymomas and thymic carcinoma, whereas the lymphocyte-abundant group comprises type B1, B2, and AB thymomas. The clinical outcomes of TETs have been reported to be associated with the WHO classification, with type A, AB, and B1 thymomas having a better prognosis than type B2 and B3 thymomas and thymic carcinoma [2].
Although the WHO classification of TETs has been reported to correlate with clinical outcomes, the histological typing of TETs remains a challenge for surgical pathologists. Therefore, clinical judgement based on a complete history and physical examination, correlated with laboratory tests and radiological features, helps to develop a presumptive diagnosis. TETs can be divided into two compartments: the intracellular volume (ICV) and the extracellular volume (ECV). While the ICV represents tumor cells and lymphocytes, the ECV comprises the extracellular matrix and the intracapillary plasma volume. T1 mapping with ECV fraction measurement is a feasible and noninvasive clinical tool to assess and quantify tissue composition [4]. Compared with the sole evaluation of T1 mapping, ECV has advantages including independence from field strength, imaging parameters, and contrast dose, because it is a ratio derived from pre- and post-contrast T1 values, in addition to its physiologically intuitive unit of measurement [5,6]. To the best of our knowledge, there has been no study applying T1 mapping with ECV fraction measurement to the diagnosis of TETs. The aim of this study was to assess the diagnostic feasibility of T1 mapping with ECV fraction measurement for the evaluation of TETs. We hypothesized that ECV correlates with the WHO classification because it reflects the stroma-cell ratio.
Study population
This retrospective study was approved by the institutional review board (B-ER-108-046) and informed consent was waived. Between January 2018 and October 2019, a total of 31 consecutive patients with TETs more than 2 cm in diameter were referred for mediastinal MR. All patients underwent thymectomy or thymothymectomy (video-assisted thoracic surgery, n = 17; sternotomy, n = 8) or core needle biopsy (n = 6), without neo-adjuvant treatment. Data were collected on age, sex, myasthenia gravis symptoms, tumor size, WHO histologic classification, Masaoka-Koga stage, history of extrathymic malignancies, and cancer treatments before and after surgery for TETs. The hematocrit (Hct) was measured to calculate the ECV of the tumor.
Mediastinal MRI acquisition
Mediastinal MR was performed on a 3 Tesla system (Ingenia, Philips Healthcare, Best, the Netherlands) using a 16-channel dStream anterior coil and a 12-channel dStream posterior coil for signal reception. All patients underwent a clinical routine mediastinal imaging protocol and additionally received native and post-contrast modified Look-Locker inversion-recovery (MOLLI) sequences of the mediastinal tumor. The routine protocol included axial pre-contrast modified Dixon (mDixon; water, in-phase, and out-of-phase images), sagittal fat-suppressed T2-weighted imaging, axial ECG-gated breath-hold T2 turbo spin echo imaging with double inversion recovery, axial diffusion-weighted imaging (DWI) (b-values = 0, 400, and 800 s/mm²), from which the apparent diffusion coefficient (ADC) map was constructed, and axial and sagittal T1-weighted imaging after administration of contrast medium. The details of the protocol are shown in Supplementary Table 1. A breath-hold, ECG-gated MOLLI sequence with a 5 s (3 s) 3 s and a 4 s (1 s) 3 s (1 s) 2 s sampling pattern was performed for native and post-contrast T1 mapping, respectively, in the axial orientation, with a balanced steady-state free-precession (bSSFP) readout, FOV 250 × 250 mm², matrix 192 × 192, TR/TE 2.8/1.29 ms, acquisition window duration 165 ms, flip angle 35 degrees, and 7 mm slice thickness. T1 maps were acquired before and 10 min after bolus contrast agent administration (0.1 mmol/kg; Gadovist, Bayer Healthcare, Leverkusen, Germany). T1 maps were generated online from the MOLLI images after motion correction.
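For orientation, T1 maps from MOLLI data are conventionally obtained by fitting the inversion-recovery signal to a three-parameter model S(TI) = A − B·exp(−TI/T1*) followed by the Look-Locker correction T1 = T1*·(B/A − 1). The sketch below (our own illustration with simulated data; the scanner's online reconstruction may differ in detail) shows such a per-voxel fit:

import numpy as np
from scipy.optimize import curve_fit

def molli_model(ti, a, b, t1_star):
    # Three-parameter inversion-recovery model used for MOLLI fitting
    return a - b * np.exp(-ti / t1_star)

def fit_t1(ti_ms, signal):
    """Three-parameter MOLLI fit with Look-Locker correction T1 = T1*(B/A - 1)."""
    p0 = (signal.max(), 2 * signal.max(), 1000.0)
    (a, b, t1_star), _ = curve_fit(molli_model, ti_ms, signal, p0=p0, maxfev=5000)
    return t1_star * (b / a - 1.0)

# Simulated inversion times (ms) and polarity-restored signal for a true T1 of 1200 ms
ti = np.array([100., 180., 260., 1100., 1180., 1260., 2100., 3100.])
true_a, true_b, true_t1 = 1.0, 1.9, 1200.0
sig = molli_model(ti, true_a, true_b, true_t1 / (true_b / true_a - 1.0))
print(round(fit_t1(ti, sig)))   # ~1200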
Image analysis
All examinations were independently analyzed by a board-certified radiologist (C.Y.L., with 5 years of experience in thoracic MRI), blinded to the patients' information and clinical data. The longest tumor diameter was measured at the widest dimension on transverse cross-sectional images. Tumor boundaries were determined and segmented via inspection of T2-weighted, contrast-enhanced T1-weighted, and DW imaging. For each patient, freehand regions of interest (ROIs) were manually drawn on the post-contrast sequences in conjunction with T2-weighted imaging on three consecutive levels that included the largest area of the tumor on axial MR images. To avoid including cystic or calcified parts, necrosis, or hemorrhage of the tumor, the ROI was placed manually because of current limitations of the technology; the ROI was smaller than the mass and included only the enhancing part of the tumor. The respective ROI was then copied to the MOLLI sequence and ADC maps, using an automatic coregistration tool and by visual correlation in case of breathing artifacts. For each ROI, the mean T1 value was recorded and used for the final analysis. T1 values of the blood pool were obtained from the descending thoracic aorta at the level of the right pulmonary artery on the transverse maps (Fig. 1).

Fig. 1 Representative example of a 59-year-old female with type A thymoma. a Axial T2-weighted black-blood imaging. b Axial post-gadolinium T1-weighted images (T1WI). c, e Axial native T1 mapping. d, f Axial post-contrast T1 mapping. The freehand regions of interest (ROIs) were manually drawn on both native and post-contrast T1 maps (c, d) at the level that included the largest area of the tumor on axial MR images. The ROI was smaller than the mass and included only the enhancing part of the tumor, avoiding cystic or necrotic parts. T1 values of the blood pool were obtained from the descending thoracic aorta at the level of the right pulmonary artery on both native and post-contrast T1 maps (e, f).

Extracellular volume values were normalized for hematocrit and calculated from pre- and post-contrast T1 values using the standard equation ECV = (1 − Hct) × (ΔR1_tumor / ΔR1_blood), where ΔR1 = 1/T1(post-contrast) − 1/T1(native) for the tumor ROI and the blood pool, respectively. The calculation of ECV assumes an equilibrium of gadolinium-based contrast agents between the extracellular and intravascular compartments.
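A minimal sketch of this calculation (our own illustration; the T1 values and hematocrit below are made up):

def ecv_fraction(t1_tumor_pre, t1_tumor_post, t1_blood_pre, t1_blood_post, hct):
    """Extracellular volume fraction from pre/post-contrast T1 values (same units)."""
    d_r1_tumor = 1.0 / t1_tumor_post - 1.0 / t1_tumor_pre
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return (1.0 - hct) * d_r1_tumor / d_r1_blood

# Hypothetical values (ms): native/post-contrast T1 of the tumor and blood pool, Hct = 0.40
print(f"ECV = {ecv_fraction(1500, 500, 1800, 350, 0.40):.1%}")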
Interobserver concordance
For native T1 value and ECV fraction measurement, another rater (C.C.C.), a board-certified thoracic surgeon with 5 years of experience in thoracic MRI, who was blinded to the patient's clinical information, also delineated the ROIs. The measurements from the second rater were only used to compare with the findings of the radiologist to assess the inter-observer concordance.
Pathologic diagnosis
The final diagnosis was confirmed on microscopic pathologic examination. The specimens of TETs were fixed in 10% formalin and stained with conventional hematoxylin-eosin staining. Pathologic analysis was performed by a board-certified pulmonary pathologist (C.Y.C., with 7 years of experience), who was blinded to the clinical and MR findings. Thymic epithelial tumors were classified on the basis of the morphologic analysis of the neoplastic epithelial cells with lymphocyte-epithelial cell ratio based on the WHO histologic classification. Further division into lymphocyte sparse group (including type A, B3 thymomas and thymic carcinoma) and lymphocyte abundant group (including type B1, B2, and AB thymomas) was performed based on histopathological findings. Thymic epithelial tumors were staged according to the Masaoka-Koga clinical staging system.
Statistical analysis
Descriptive statistics were displayed as the median with 1st and 3rd quartile values. The Mann-Whitney U test and Kruskal-Wallis H test were used to compare variables. Nonparametric receiver operating characteristic analysis was performed to assess discriminative ability, and the area under the receiver operating characteristic curve (AUC) was calculated. Optimal cutoff values were derived from the receiver operating characteristic curves, and sensitivity and specificity were calculated based on these best cutoff values. Inter-observer reliability was assessed with the intraclass correlation coefficient (ICC).
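A brief sketch of how an optimal cutoff can be derived from a receiver operating characteristic curve via the Youden index (our own illustration with simulated ECV values; this is not the software used in the study):

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Simulated ECV values (%): 1 = lymphocyte-sparse TET, 0 = lymphocyte-abundant TET
rng = np.random.default_rng(0)
ecv = np.concatenate([rng.normal(42.5, 6, 13), rng.normal(26.9, 5, 18)])
label = np.concatenate([np.ones(13), np.zeros(18)])

fpr, tpr, thresholds = roc_curve(label, ecv)
best = np.argmax(tpr - fpr)                      # Youden index J = sensitivity + specificity - 1
print(f"AUC       = {roc_auc_score(label, ecv):.2f}")
print(f"cutoff    = {thresholds[best]:.1f} %")
print(f"sens/spec = {tpr[best]:.2f} / {1 - fpr[best]:.2f}")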
Demographic data
Demographic characteristics are summarized in Table 1 and detailed in Supplementary Table 3. The comparison of mean ECV fraction values between lymphocyte-sparse and lymphocyte-abundant TETs is demonstrated in Fig. 3a. The best cutoff value for differentiation between lymphocyte-sparse and lymphocyte-abundant tumors was 36.0% on receiver operating characteristic curve analysis, with a sensitivity of 92.3% and a specificity of 94.4% (area under the curve 0.97; 95% confidence interval: 0.93-1.00), as shown in Fig. 3b.
ADC value
The mean ADC value was significantly lower in tumors larger than 5 cm compared with tumors smaller than 5 cm (Table 3).
Reproducibility assessment
Excellent inter-observer reproducibility was seen in the measurement of the native T1 value and ECV (ICCs = 0.912 and 0.901, respectively).
Discussion
We have demonstrated in this study that a higher ECV value was identified in thymic carcinoma than in thymomas, and in lymphocyte-sparse TETs than in lymphocyte-abundant TETs. A few prior studies have analyzed the imaging characteristics of TETs [7-10]. Jeong et al. found that CT imaging findings including irregular contours and necrotic components were more often seen in thymic carcinoma, whereas a complete capsule, septum, and homogeneous enhancement were more often seen in low-grade thymoma [7]. Magnetic resonance (MR) images have been considered better at depicting the tumor capsule, septum, or hemorrhage than computed tomography (CT) images [9]. Recently, quantitative MRI has been increasingly applied to characterize anterior mediastinal tumors [9,11-17]. Abdel Razek A.A. et al. [12] found that lower ADC values were identified in high-risk thymoma and thymic carcinoma than in low-risk thymoma. This result can be explained by the decreased diffusion space of water protons in the extracellular and intracellular compartments due to enlarged nuclei, hyperchromatism, and hypercellularity in high-risk thymoma and thymic carcinoma. They also found that the ADC value was significantly lower in invasive thymoma than in noninvasive thymoma. According to our results, the ADC value could only differentiate the invasiveness of TETs, not the histologic types. The variation in the aforementioned results could be attributed to differences in the patient cohorts of each study. In the study by Abdel Razek A.A. et al., most high-grade thymomas were invasive (7 invasive thymomas out of 9 high-grade thymomas). By contrast, in our study, only 4 of 10 high-grade thymomas were invasive, while the other six were noninvasive. Our results suggest that the ADC value correlates better with Masaoka stage than with histologic type. Compared with ADC quantification, T1 mapping with ECV fraction calculation offers better spatial resolution, repeatability, reproducibility, and accuracy. The ADC map of TETs shows prominent image distortion and is prone to motion artifacts, and small tumors cannot be evaluated precisely [18,19]. On the other hand, there is less image distortion and motion artifact in the breath-hold, ECG-gated T1 mapping with ECV fraction calculation, and even small tumors can be adequately measured. In addition, the ADC value is affected by field strength and the diffusion encoding technique [20]. By contrast, ECV calculations represent a ratio of T1 values and are less sensitive to systematic biases, which are likely to cancel one another in the mathematical derivation of ECV. Therefore, we consider the ECV fraction a better noninvasive predictive imaging tool than the ADC value for histopathological correlation. The addition of T1 mapping and ECV to the routine MR imaging of thymic epithelial tumors may improve the assessment of these lesions.
T1 mapping with ECV measurement allows dichotomization of TETs into cellular and extracellular components, opening new frontiers for pathologic correlation. ECV is preferable to the native or post-contrast T1 value as a biomarker, because it accounts for the T1 behavior of blood, variable dosing and clearance of the contrast agent, and variation in hematocrit. In addition, ECV is insensitive to systematic bias and to the effects of renal function, anemia, or obesity [4][5][6]. Type A thymoma, type B3 thymoma, and thymic carcinoma showed higher ECV because of their sparse lymphocytic infiltrates (Fig. 4). In contrast, type B1 and B2 thymomas belong to the lymphocyte-rich thymomas, and type B1 thymoma showed the lowest ECV because of its dense lymphocytic population. Type AB thymoma contains components of both type A and type B thymoma, so its value depends on the proportion of each component (Fig. 5).
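As a minimal illustration of how an ECV fraction can be derived from pre- and post-contrast T1 values, the following Python sketch applies the commonly used relation ECV = (1 − hematocrit) × (ΔR1_tissue/ΔR1_blood), with R1 = 1/T1. The exact acquisition and fitting pipeline used in this study may differ, and the ROI values shown are hypothetical.

```python
def ecv_fraction(t1_tissue_pre, t1_tissue_post,
                 t1_blood_pre, t1_blood_post, hematocrit):
    """Extracellular volume fraction from pre/post-contrast T1 values (ms).

    ECV = (1 - Hct) * (dR1_tissue / dR1_blood), with R1 = 1/T1.
    """
    d_r1_tissue = 1.0 / t1_tissue_post - 1.0 / t1_tissue_pre
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return (1.0 - hematocrit) * d_r1_tissue / d_r1_blood

# Hypothetical ROI values (ms) for a lymphocyte-sparse tumor.
print("ECV = %.1f%%" % (100 * ecv_fraction(
    t1_tissue_pre=1200.0, t1_tissue_post=450.0,
    t1_blood_pre=1650.0, t1_blood_post=350.0, hematocrit=0.42)))
```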
The ECV values and lymphoid densities of the different TETs are illustrated in Fig. 6 (the ECV spectrum as a function of lymphoid component: type B1 thymoma shows the lowest ECV owing to its highest lymphocytic density; type A thymoma, type B3 thymoma, and thymic carcinoma show high ECV owing to their sparse lymphoid density; type AB and type B2 thymomas lie in the middle). Notably, type AB and type B2 thymoma show similar ECV values, and it is difficult to differentiate thymic carcinoma from type A or type B3 thymoma on the basis of ECV alone. Nonetheless, the histologic classification can be predicted when radiographic invasiveness is also taken into consideration. For example, a radiographically invasive TET with high ECV is more likely to be thymic carcinoma than type A thymoma, whereas a TET with extremely low ECV is more likely to be type B1 thymoma. Although our results demonstrate that ECV can differentiate thymic carcinoma from high-grade and low-grade thymomas, from a histopathological point of view the change in ECV does not correlate linearly with the histologic classification. In fact, the denotation and classification of thymic epithelial tumors reflect heterogeneity more than a continuum of a histologic spectrum, and it is increasingly recognized that thymomas are not as "pure" as previously assumed; for example, tumors denoted as type B2 thymoma may contain small foci of type B1 thymoma, and thymic carcinoma may also contain type B3 thymoma [3]. In our results, the native T1 value and ECV could not differentiate the histological subtypes of TETs, just as a pathologist cannot differentiate TETs based only on their lymphoid component on hematoxylin and eosin staining; the densities of epithelioid and lymphoid cells, together with immunohistochemistry, are also required to reach a precise diagnosis. Further investigation and development of MR imaging modalities are needed to improve the diagnostic accuracy for thymic epithelial tumors. Combining quantitative and qualitative data and various imaging biomarkers, including T1 mapping, DWI, and dynamic contrast-enhanced MR, may help to predict the cell type of thymic epithelial tumors and other anterior mediastinal tumors. Taken together with the clinical history, a precise histologic diagnosis can be anticipated, and unnecessary operation or biopsy can be avoided, especially in patients with compromised performance status [21]. In patients undergoing surgery or chemoradiotherapy, follow-up of suspicious lesions using T1 mapping with ECV might help to detect early disease relapse.
Our study had limitations. First, it was a single-center retrospective study with a small number of patients, so statistical power was limited. Second, image analysis was performed by manual ROI measurement of the T1 and ECV values, although excellent inter-observer reproducibility was confirmed for the native T1 value and ECV. Third, our study did not compare the predictive value of ECV with morphological features on routine MRI sequences. Fourth, because of the short follow-up period, an analysis of oncologic outcomes across histological types or levels of lymphocyte abundance in TETs was not available. Future studies with larger numbers of patients are warranted to validate our results, and 3D volumetric calculation would be preferable when the tumor is heterogeneous.
Conclusions
T1 mapping with ECV fraction measurement provides a non-invasive, reliable, and reproducible imaging tool for tissue characterization of TETs.
Additional file 1: Table S1 MRI acquisition parameters. Table S2 Patients' treatment course and oncologic outcome.
Nonuniformly-Rotating Ship Refocusing in SAR Imagery Based on the Bilinear Extended Fractional Fourier Transform
Refocusing of nonuniformly rotating ships is very important for marine surveillance with satellite synthetic aperture radar (SAR). Most ship imaging algorithms are based on the inverse SAR (ISAR) technique, and several parameter estimation algorithms for nonuniformly rotating ships have been proposed on this basis. However, these algorithms still have problems with cross-term and noise suppression. In this paper, a refocusing algorithm for nonuniformly rotating ships based on the bilinear extended fractional Fourier transform (BEFRFT) is proposed. After motion compensation, the ship signal in a range bin can be modeled as a multicomponent cubic phase signal (CPS). BEFRFT is a bilinear extension of the fractional Fourier transform (FRFT) that can estimate the chirp rates and quadratic chirp rates of CPSs, and it performs well in suppressing cross-terms and noise. Results on simulated data and Gaofen-3 data verify the effectiveness of BEFRFT.
Introduction
In the marine surveillance of satellite synthetic aperture radar (SAR), refocusing of nonuniformly rotating ships is very important for the detection and identification of ships. In complex sea conditions, the movements of ships are complicated: in addition to self-powered translation, ships also rotate nonuniformly under the influence of sea waves and other factors, which leads to defocusing of ship images. Many SAR imaging methods [1][2][3][4][5][6] have been proposed for moving-target refocusing, but these methods are inapplicable to rotating targets. The inverse SAR (ISAR) algorithm, based on the rotating-target model, has advantages for moving-target imaging, especially for rotating targets, and has therefore been widely applied to SAR ship imaging. The range-Doppler (RD) algorithm based on the ISAR technique can be used to coarsely focus rotating-ship images; its key step is motion compensation, which includes range migration correction and phase compensation. However, because of the time-varying Doppler frequency, nonuniformly rotating ships cannot be finely focused by the RD algorithm. To overcome the Doppler frequency spread, the range-instantaneous-Doppler (RID) algorithm uses time-frequency transformations [7] instead of Fourier transforms, but this class of algorithms suffers from loss of resolution and from cross-terms, which appear as false points in ship images.
ISAR Imaging Model of the Nonuniformly Rotating Ship
SAR imaging is widely used for stationary target imaging, but it has limitations for complex moving targets, especially for nonuniformly rotating ships [5,6,23]; hence, the SAR image of a nonuniformly rotating ship is usually unfocused. As mentioned in Section 1, the ISAR technique can be applied to SAR images. Before applying the ISAR technique, the inverse azimuth operation (i.e., an FFT followed by the inverse of the dechirping operation) must be used to transform the azimuth of the SAR image from the image domain to the time domain.
The ISAR imaging geometry of a nonuniformly rotating ship is shown in Figure 1. The ship is located in the Cartesian coordinate system XYZ and rotates nonuniformly around its geometric center O. The rotation of the ship can be expressed as a synthetic rotation vector Ω. The radial direction R from the radar to the geometric center O is the radar line-of-sight (LOS). Ω can be decomposed into the co-directional component Ω_R and the quadrature component Ω_e; only Ω_e contributes to the Doppler effect. The plane viewed from the direction of Ω_e is the ISAR imaging plane.
Assume that a scattering point p is located at distance r_p from the geometric center O. The Doppler frequency of p can be written as in Equation (1), where λ denotes the wavelength of the transmitted radar signal and v_p denotes the radial translational velocity between the radar and p. Due to the nonuniform rotation of the ship, Ω_e can be expressed as a Taylor expansion, where Ω_e^(n) denotes the nth-order derivative of Ω_e and n = 0, 1, 2, 3... After range migration correction and phase compensation, the translational velocity can be removed, and the ship signal in a range bin can be written as in Equation (3), where P denotes the number of scattering points in the range bin, σ_p denotes the magnitude of the pth scattering point, and θ_{0,p} denotes its initial rotation angle.
Here, we approximate s(t) as in Equation (4). From Equation (4), we find that the ship signal in a range bin has the form of a multicomponent CPS. We therefore rewrite the ship signal in the general form of Equation (5), where A_p = σ_p exp(jθ_{0,p}), and a_{1,p}, a_{2,p} and a_{3,p} denote the center frequency, chirp rate and quadratic chirp rate, respectively.
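The multicomponent CPS model can be illustrated numerically. The following Python sketch synthesizes such a signal under one common phase convention, in which the instantaneous frequency of each component is a_{1,p} + a_{2,p}t + a_{3,p}t²; the exact convention of Equation (5) may differ, and the component parameters below are hypothetical.

```python
import numpy as np

def cps(t, A, a1, a2, a3):
    """Cubic phase signal with instantaneous frequency f(t) = a1 + a2*t + a3*t**2.

    a1: center frequency (Hz), a2: chirp rate (Hz/s), a3: quadratic chirp rate (Hz/s^2).
    The 1/2 and 1/3 factors follow from integrating f(t); the paper's Eq. (5)
    may use a slightly different phase convention.
    """
    phase = 2 * np.pi * (a1 * t + a2 * t**2 / 2 + a3 * t**3 / 3)
    return A * np.exp(1j * phase)

fs, n = 256, 256                      # sampling frequency (Hz) and number of samples
t = np.arange(n) / fs
# Two-component CPS, e.g. two scatterers in the same range bin.
s = cps(t, 1.0, 31.0, -23.0, 10.0) + cps(t, 0.8, -12.0, 15.0, -6.0)
s += 0.3 * (np.random.randn(n) + 1j * np.random.randn(n)) / np.sqrt(2)  # complex noise
```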
Bilinear Extended Fractional Fourier Transform
The fractional Fourier transform (FRFT) [13] is a generalized form of the Fourier transform; it is equivalent to rotating the time axis of the Wigner-Ville plane by an angle and then performing a Fourier transform at zero frequency. LFM signals accumulate into straight lines in the Wigner-Ville plane, so FRFT can be used to estimate the parameters of LFM signals. However, CPSs appear as curves in the Wigner-Ville plane, which makes it inconvenient to estimate their parameters directly. The bilinear extended FRFT (BEFRFT) is therefore proposed to estimate the parameters of the CPSs in Equation (5).
Principle of BEFRFT
Consider a noisy multicomponent CPS.
The bilinear correlation function can be written in the form of Equation (7), where R_auto(t, τ) denotes the auto-terms, and R_cross(t, τ) and R_noise(t, τ) denote the cross-terms and the noise, respectively. The cubic phase function (CPF) [12,22] of R_auto(t, τ), based on the NUFFT [24], can be written as Equation (9). We use the modulus form to eliminate the influence of s_p^2(t) in Equation (9), as in Equation (10), where ⊙ denotes the Hadamard product and * denotes complex conjugation. From Equation (10), we find that if we rotate the coordinate axes and perform an FFT along the direction f_{τ²} = 2a_{2,p} + 6a_{3,p}t, the auto-terms accumulate at zero frequency while the noise in Equation (7) spreads over all frequencies. Based on this, the BEFRFT can be written as Equation (11), where α denotes the rotation angle, u and v denote the new coordinate axes corresponding to t and f_{τ²}, respectively, and f denotes the Fourier-domain variable conjugate to v.
The BEFRFT of the auto-terms can be written as Equation (12); under this transform the auto-terms turn into peaks, from whose locations a_{2,p} and a_{3,p} can be estimated.
Cross-Term Characteristic
Because the transformation is nonlinear, cross-terms are generated for a multicomponent CPS in Equation (7).
Here, we consider two noise-free CPSs to analyze the cross-term characteristics of BEFRFT. The auto-terms take the form of Equation (8), and the cross-terms can be written as Equation (15). Clearly, only if η(t, τ) = 0 holds can the cross-terms in Equation (15) accumulate into the impulse-function form of Equation (9). However, η(t, τ) = 0 is hard to satisfy, especially for real data. In addition, the subsequent modulus operation and Fourier transform do not generate cross-terms. Hence, BEFRFT is a strictly bilinear transformation with strong cross-term suppression.
Here, we give an example to illustrate the aforementioned content.
The simulation results are shown in Figure 2. Figure 2a shows the relative time t-relative frequency f_{τ²} plane of the CPF in Equation (9). As indicated in Figure 2a, the auto-terms accumulate into straight lines, but cross-terms also appear in a certain form, which makes it harder to distinguish the auto-terms. After applying the BEFRFT in Equation (12), it can be seen from Figure 2b,c that the auto-terms accumulate into peaks while the cross-terms are hardly observable, demonstrating the suppression of cross-terms. For an N-component CPS, the bilinear BEFRFT generates (N² − N) cross-terms in Equation (7), whereas fourth-order multilinear transformations such as CIGCPF and CIMCPF generate (N⁴ − N) cross-terms. For real data, the generation of cross-terms is greatly reduced by BEFRFT, which improves the accuracy of parameter estimation.
Example 2.
We considered a mono-component CPS with zero-mean white Gaussian noise. The sampling frequency was 256 Hz and the number of samples was 256. The signal parameters were: A = 1, a_1 = 31 Hz, a_2 = −23 Hz/s, a_3 = 10 Hz/s². The input SNR was SNR_in = [−8 : 1 : 8], and two hundred Monte Carlo simulations were performed for each input SNR. Figure 3a shows a comparison of the input-output SNR of BEFRFT, CIGCPF, CIMCPF and the matched filter. When SNR_in ≥ −5 dB, the input-output SNR curve of BEFRFT coincides with the matched filter line, meaning that the input SNR threshold of BEFRFT is −5 dB; similarly, the input SNR thresholds of CIGCPF and CIMCPF are −2 dB and −3 dB, respectively. We compare the MSEs of the chirp rate a_2 and quadratic chirp rate a_3 with the Cramer-Rao bounds (CRB) in Figure 3b,c, respectively. The input SNR thresholds of BEFRFT, CIGCPF and CIMCPF in Figure 3b,c match the results of Figure 3a. When the input SNR is above the threshold, the MSEs of the chirp rate and quadratic chirp rate are close to the CRBs, which indicates that these parameters can be estimated accurately.
Hence, we can conclude that BEFRFT has better antinoise performance. There are two main reasons: (1) BEFRFT is a bilinear transformation, whereas CIGCPF and CIMCPF are fourth-order multilinear transformations, and higher-order transformations generate more cross-terms between signal and noise; (2) unlike the two-step estimation of BEFRFT for a CPS, CIGCPF and CIMCPF need three steps, which causes more error propagation.
Nonuniformly Rotating Ship Refocusing Based on BEFRFT
The main idea of the proposed method is the estimation of the CPS parameters. First, we use BEFRFT to estimate the chirp rate and quadratic chirp rate; then, we use the dechirp technique and the FFT to estimate the center frequency and amplitude. The implementation procedure of nonuniformly-rotating ship refocusing based on BEFRFT is illustrated by the flowchart in Figure 4 and described in detail as follows.
Step 1 Perform the inverse azimuth operation on the original ship image, as mentioned in Section 2.
Apply the range migration and phase compensation to turn the received signals into the turntable form.
Step 2 Get the received signal s h (t) of the hth range bin, where 1 ≤ h ≤ H and H is the number of range bins.
Step 3 Apply BEFRFT to estimate the chirp rate a_{2,p} and quadratic chirp rate a_{3,p}.
The estimates are â_{2,p} = u/(2 sin α) and â_{3,p} = −(cot α)/6, where (α, u) is the location of the peak (arg max) of the BEFRFT magnitude.
Step 4 Dechirp s_h(t) with â_{2,p} and â_{3,p}, and use the FFT to estimate the center frequency a_{1,p} and amplitude A_p.
where D and f_t denote the amplitude and the frequency of the peak after the FFT, respectively.
Step 5 To prevent performance degradation at low SNR, we subtract each estimated CPS from the residual signal in the frequency domain.
Step 6 Repeat Steps 3-5 until the energy of the residual signal falls below the energy threshold. The energy threshold ξ can be set to 5% of the original signal energy [14,15]. The estimated ŝ_h(t) is then obtained.
Figure 4. Flowchart of nonuniformly-rotating ship refocusing based on BEFRFT.
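To make the overall procedure concrete, the following Python sketch implements the estimate-dechirp-subtract loop of Steps 3-6, with a simple brute-force grid search over (a_2, a_3) standing in for the BEFRFT-based estimation of Step 3. It is a simplified illustration rather than the algorithm itself; the grids, component limit and phase convention are assumptions.

```python
import numpy as np

def dechirp_fft_estimate(s, t, a2, a3, fs):
    """Remove the quadratic/cubic phase for a candidate (a2, a3) and FFT."""
    d = s * np.exp(-1j * 2 * np.pi * (a2 * t**2 / 2 + a3 * t**3 / 3))
    spec = np.fft.fftshift(np.fft.fft(d))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
    k = np.argmax(np.abs(spec))
    return freqs[k], spec[k] / len(t), np.abs(spec[k])   # a1, complex A, peak height

def refocus_range_bin(s, t, fs, a2_grid, a3_grid, energy_ratio=0.05, max_comp=20):
    """CLEAN-style loop: estimate one CPS component, subtract it, repeat."""
    residual = s.copy()
    components, e0 = [], np.sum(np.abs(s)**2)
    for _ in range(max_comp):
        if np.sum(np.abs(residual)**2) <= energy_ratio * e0:
            break
        # Grid search over (a2, a3); BEFRFT replaces this search in the paper.
        best = max(((a2, a3, *dechirp_fft_estimate(residual, t, a2, a3, fs))
                    for a2 in a2_grid for a3 in a3_grid), key=lambda c: c[-1])
        a2, a3, a1, A, _ = best
        components.append((A, a1, a2, a3))
        phase = 2 * np.pi * (a1 * t + a2 * t**2 / 2 + a3 * t**3 / 3)
        residual = residual - A * np.exp(1j * phase)     # subtract the estimated CPS
    return components
```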
Experimental Results of Nonuniformly Rotating Ship Refocusing
In this section, results of a ship target simulation are given to illustrate the refocusing performance of the proposed BEFRFT, and Gaofen-3 data are used to verify its effectiveness.
Nonuniformly Rotating Ship Refocusing With Simulated Data
The parameters of the radar system and ship target are listed in Table 1. In Figure 5, the ship target model consists of 42 ideal scatterers, and three representative point targets, PT1, PT2 and PT3, are marked in red. Figure 6 shows ship images for SNR_in = 5 dB. From Figure 6a, it can be seen that the ship image based on the ISAR algorithm is blurred in the azimuth direction due to the Doppler frequency spread. After applying BEFRFT, the ship in Figure 6b is well focused. To further illustrate the performance of the proposed BEFRFT, contour plots and azimuth profiles of PT1, PT2 and PT3 are given in Figure 7; all three point targets are well focused after applying BEFRFT.
Peak sidelobe ratio (PSLR) and integrated sidelobe ratio (ISLR) are used as criteria to assess the refocusing quality. The imaging quality parameters of PT1, PT2 and PT3 for the ISAR algorithm and for BEFRFT are listed in Table 2. The imaging quality parameters of BEFRFT are very close to the theoretical values (PSLR of −13.26 dB and ISLR of −9.8 dB). Both the contour results and the imaging quality parameters indicate that the proposed BEFRFT performs well in refocusing nonuniformly rotating ships.
Nonuniformly Rotating Ship Refocusing with the Gaofen-3 Data
Two Gaofen-3 single-look complex (SLC) images of Singapore port were used to verify the effectiveness of the proposed BEFRFT, as shown in Figure 8. The latitude and longitude of the image centers are (E104.0, N1.3) and (E104.1, N1.3), respectively. The Gaofen-3 SAR worked in the sliding spotlight mode, and some of its parameters are as follows: radar center frequency f_0 of 5.4 GHz, bandwidth B of 240 MHz, pulse width T_r of 55.0 µs, pulse repetition frequency of 3125 Hz, and azimuth resolution of 1 m.
From Figure 8, we find that most of the ships are relatively large and well focused, while some of the relatively small ships are slightly rotated and can be refocused by the ISAR algorithm. Hence, we selected four small, nonuniformly rotating ships to verify the refocusing performance of BEFRFT. The selected ships, S1, S2, S3 and S4, are framed in red and enlarged in Figure 8. The size of the ship image slices is 180 m (range) × 176 m (azimuth). Figure 9 shows the ship images of S1, S2, S3 and S4 obtained with different methods. As seen in Figure 9a-d, the original ship images of S1, S2 and S3 are severely unfocused, and the shapes of the ships can hardly be seen. After the inverse azimuth operation and motion compensation described in Section 2, the ship images based on the ISAR algorithm are shown in Figure 9e-h; they are still unfocused, and the rotation of the ships is still visible. In Figure 9i-l, the classical LFM estimator RWT is used to refocus the ship images; the ships are very blurred and their details can hardly be seen. LFM estimators such as RWT estimate only the center frequencies and chirp rates of the ship signal, so the higher-order phase terms cannot be estimated, which leads to the defocusing in Figure 9i-l and indicates the inadequacy of LFM estimators.
Here, we use the entropy [5,[14][15][16][17]19,20] and contrast [5] to assess the image quality in Figure 9. An image with a smaller entropy has better image quality. The entropy of an image I can be written as E = −∑_{p=1}^{P} ∑_{h=1}^{H} (|I(p, h)|²/S) ln(|I(p, h)|²/S), where I(p, h) denotes the pixel value at location (p, h) and S = ∑_{p=1}^{P} ∑_{h=1}^{H} |I(p, h)|². Contrary to the entropy, a higher contrast means better image quality. The contrast of an image I can be written as C = std(|I(p, h)|²)/mean(|I(p, h)|²). Table 3 shows the entropies and contrasts of the ship images of S1, S2, S3 and S4 corresponding to Figure 9. From Table 3, we find that Figure 9u-x shows the smallest entropies and the highest contrasts, meaning that the image quality resulting from BEFRFT is better than that of the other methods. As analyzed in Section 3, BEFRFT has better cross-term and noise suppression; hence, it performs excellently in refocusing nonuniformly rotating ships.
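Both quality metrics are straightforward to compute from a focused complex image. The following Python sketch implements the entropy and contrast as defined above; the example image is a random placeholder rather than real SAR data.

```python
import numpy as np

def image_entropy(img):
    """Entropy of a complex image: E = -sum (|I|^2/S) * ln(|I|^2/S)."""
    power = np.abs(img)**2
    p = power / power.sum()
    p = p[p > 0]                      # avoid log(0) for empty pixels
    return -(p * np.log(p)).sum()

def image_contrast(img):
    """Contrast: standard deviation of |I|^2 divided by its mean (higher is better)."""
    power = np.abs(img)**2
    return power.std() / power.mean()

# Example with a hypothetical 256 x 256 complex image slice.
img = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
print("entropy:", image_entropy(img), "contrast:", image_contrast(img))
```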
Conclusions
This paper proposes a refocusing algorithm based on BEFRFT for nonuniformly rotating ships. The received signal in each range bin is modeled as a multicomponent CPS, and BEFRFT estimates the chirp rates and quadratic chirp rates of the CPSs. Compared with other algorithms, (1) BEFRFT generates fewer cross-terms, which reduces the number of false points, and (2) BEFRFT has better antinoise performance in low-SNR situations. Combining BEFRFT with the RID algorithm, a finely refocused ship image can be obtained. Both the simulated data and the Gaofen-3 data verify the practicability of the proposed algorithm.
Probing the Local Dielectric Function by Near Field Optical Microscopy Operating in the Visible Spectral Range
The optoelectronic properties of nanoscale systems such as carbon nanotubes (CNTs), graphene nanoribbons and transition metal dichalcogenides (TMDCs) are determined by their dielectric function. This complex, frequency dependent function is affected by excitonic resonances, charge transfer effects, doping, sample stress and strain, and surface roughness. Knowledge of the dielectric function grants access to a material's transmissive and absorptive characteristics. Here we introduce the dual scanning near field optical microscope (dual s-SNOM) for imaging local dielectric variations and extracting dielectric function values using a mathematical inversion method. To demonstrate our approach, we studied a monolayer of WS$_2$ on bulk Au and identified two areas with differing levels of charge transfer. Our measurements are corroborated by atomic force microscopy (AFM), Kelvin force probe microscopy (KPFM), photoluminescence (PL) intensity mapping, and tip enhanced photoluminescence (TEPL). We extracted local dielectric variations from s-SNOM images and confirmed the reliability of the obtained values with spectroscopic imaging ellipsometry (SIE) measurements.
Introduction
Advanced photonic and optoelectronic based integrated circuits (ICs) and devices depend on novel nanoscale materials such as CNTs, graphene nanostructures such as nanoribbons, and 2D materials like TMDCs. CNTs have already shown promise in creating novel optoelectronic ICs [2], [3], while graphene nanoribbons have been used in novel photonic ICs [4]. TMDCs have favourable properties for photonic applications due to excitonic states that exist at room temperature and the direct bandgap in monolayers, which have been used to realise photodetectors on photonic ICs [5]. To advance the applications of these nanoscale materials in photonics it is essential to characterise their fundamental electrical and optical properties like, for example, transmittance and absorption in addition to their frequency dependence.
An elegant way to gain access to the electric and optical characteristics of nanoscale materials is through determining their complex dielectric function which is a measure of the transmission and absorption of light through a material as a function of frequency [6]. This frequency dependence is not linear as it is affected by resonances around the exciton transition energies, a fact that enables the study of, for example, the influence of excitons on the optical properties of TMDC samples [6]. The dielectric function also provides insights into electrical and magnetic properties of nanoscale samples, important characteristics to consider when thinking about implementing them into novel applications.
The dielectric function is largely affected by material disorder. Extrinsic material disorder stems from the environment, while intrinsic disorder describes material disorder stemming from crystalline imperfections [7]. These effects can to some extent be characterised with established characterisation methods such as AFM and scanning tunnelling microscopy (STM) [7] in addition to far field methods like Raman and PL spectroscopies for identification of stress and strain [8] and KPFM for charge distribution information [9].
Along with these common sources of disorder, nanoscale systems add additional complexity stemming from their reduced dimensionality which results in a decrease of dielectric screening and an increase of the Coulomb interaction between charge carriers [10]. Considering TMDCs as an example, this has an effect on their electronic response such as allowing excitonic pairs to exist at room temperature as well as changes in the band gap when thinning TMDCs from multilayers to monolayers [11]. Inevitably intrinsic disorder like surface roughness, impurities, and defects will lead to a local fluctuation of the exciton binding energy as well as fluctuations in the bandgap leading to a new form of disorder now being referred to as dielectric disorder [10]. This form of disorder can affect samples down to the nanometre scale and can strongly affect their optical and electronic properties. To characterise this nanoscale form of disorder requires a technique with a spatial resolution on the nanometre scale that is sensitive to the local dielectric properties of the sample, and for 1D and 2D materials it needs to be surface sensitive.
The far field methods that are conventionally used to determine the dielectric function, like ellipsometry and reflectance experiments, can identify variations stemming from material disorder with a resolution down to the micrometre length scale [12]. Sample disorder and bandgap variations can be observed via PL/Raman spectroscopy [13]. However, the spatial resolution of these techniques would not serve well to investigate dielectric disorder due to the resolution being bound by the diffraction limit, which prevents nanometre resolution. AFM and KPFM can achieve nanometre resolution, and are used to image defects, intrinsic disorder, and surface potential information [14], [15]. SNOM itself was demonstrated recently in probing the dielectric screening of hBN in graphene integrated on silicon photonics at nanometre resolution [16]. Tip-enhanced Raman and PL spectroscopy can be implemented to identify nanoscale bandgap variations, strain, and defects [17], [18]. These techniques however are not suitable to quantify, or to identify nanoscale variations in, the dielectric function.
Here we show the dual s-SNOM as an advanced tool for measuring the dielectric function with nanometre scale resolution. To demonstrate this capability, we nano-imaged a sample consisting of monolayer tungsten disulphide (WS2) placed on a gold (Au) substrate with the aim to identify local variations in the optical properties of the sample. We find that the intrinsic disorder such as surface roughness, boundaries, and charge transfer influence the dielectric function and can lead to strong near field contrast changes in the s-SNOM images of the WS2. We recorded s-SNOM images at different harmonics and three different excitation wavelengths. We determined the dielectric values by using a pre-established inversion method that extracts the dielectric value from s-SNOM data (see Govyadinov et al. [19] and Tranca et al. [20]), as a function of tip position. The average dielectric values determined using the more volume sensitive second harmonic s-SNOM image data, gives the joint dielectric values of Au and WS2 for three excitation frequencies that are in excellent agreement with ellipsometry measurements. Average dielectric values from the more surface sensitive fourth harmonic, in contrast, differ from the bulk sensitive ellipsometry measurements due to their different penetration depth. We demonstrate the resolution of the dual s-SNOM by comparing the higher harmonic near field changes with the TEPL peak position, showing an inverse correlation. We correlate the s-SNOM measurements with spatially resolved PL spectroscopy and KPFM measurements, where we observe areas of strongly quenched PL and lower surface potential. This indicates that the strong variations of the dielectric function in monolayer WS2 are mainly due to charge transfer effects.
Material and Methods
The sample used in this work was made using a dry transfer method that utilises a viscoelastic polydimethylsiloxane (PDMS) stamp to mechanically exfoliate WS2 crystals (HQ graphene) onto a SiO2 substrate that had a 40nm layer of Au deposited on top using magnetron sputtering [21]. The approximate time between Au deposition and WS2 stamping was 30 mins.
For the pre-characterisation of the sample, we used PL spectroscopy and KPFM measurements. The PL spectra were recorded with a Horiba Jobin-Yvon XploRA micro-Raman spectrometer as a function of laser position on the sample, using a 0.90 NA (NA: numerical aperture) 100x objective. For excitation we used a laser with 532 nm wavelength and 1 mW power on the sample, with an acquisition time of 1 s and a 600 grooves-per-mm grating. KPFM images were obtained with an AIST-NT scanning probe microscope, using Pt-Ir coated Si tips (ACCESS-EFM probes, AppNano, k = 2.7 N.m-1) and a 1300 nm diode for the deflection detection. KPFM was operated in the amplitude-modulated mode (AM-KPFM), which is sensitive to electrostatic forces [22]. SIE was performed using an EP4 ellipsometer (Accurion GmbH, Germany). Monochromatic light was provided by a xenon lamp with several interference filters. The angle of incidence was varied between 55 and 65°. The obtained data were analysed using Accurion's EP4Model software. Because our sample is a monolayer, SIE is insensitive to the out-of-plane component, so the dielectric function determined by SIE in this manuscript should be regarded as a pseudo-dielectric function.
The sample was characterised using dual s-SNOM and TEPL with a commercial s-SNOM (NeaSNOM from Neaspec GmbH, Germany). We used platinum-iridium coated AFM tips (23 nm coating thickness) from NanoWorld, featuring a tip apex radius below 25 nm. A wavelength-tuneable cw laser (Hübner C-Wave, 450-650 nm wavelength) was used for excitation; it was guided through a beam expander onto a parabolic mirror with an NA of 0.4. The parabolic mirror focusses the laser light onto the AFM tip, which then acts as a near field probe in the visible spectral range, and it also collects the backscattered light. The laser power for all s-SNOM image and TEPL measurements was ~1 mW at the tip, with an integration time of 16 ms. The tip amplitude was 53.5 nm with a tapping frequency of 243 kHz. Tip-enhanced PL spectroscopy was performed via s-SNOM using side illumination, equipped with a Kymera 328i spectrometer (Andor) and an Si CCD, with a tip scanning step of 50 nm per pixel. A sketch of the setup can be found in Ref. [23].
Background suppression was achieved by oscillating the tip at an amplitude of 50 nm at a frequency Ω of about 250 kHz and demodulating the detected signal at higher harmonics nΩ of the tip frequency. The noise was further reduced by a pseudo-heterodyne interferometer, in which a reference mirror oscillates at a frequency M ≪ Ω, changing the length of the reference beam path and producing interference with the scattered signal. This produces sidebands around the fundamental harmonics at frequencies f = nΩ ± mM, where m is an integer ≥ 1. Using this detection scheme, both the near field amplitude and phase are recorded from the sample as images at various sidebands m of the fundamental harmonic. An increase in the demodulation order leads to a decrease in the noise present in the s-SNOM images.
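The effect of higher-harmonic demodulation can be illustrated with a toy calculation: a strongly nonlinear, rapidly decaying near field response sampled over the tapping motion is demodulated at harmonics of the tip frequency. The Python sketch below uses an assumed power-law distance dependence and omits the pseudo-heterodyne sidebands; all numerical values are illustrative only.

```python
import numpy as np

# Toy illustration of higher-harmonic demodulation in s-SNOM.
fs = 20e6                      # sampling rate (Hz), hypothetical
omega = 250e3                  # tip tapping frequency (Hz)
t = np.arange(0, 2e-3, 1 / fs)
A = 25e-9                      # tapping amplitude (m)
H = A * (1 + np.cos(2 * np.pi * omega * t))     # tip height above the sample

# Hypothetical near field response: decays rapidly with tip-sample distance.
sigma = 1.0 / (H / A + 0.05)**3

def demodulate(signal, t, f, n):
    """Lock-in style demodulation at the nth harmonic of frequency f."""
    ref = np.exp(-1j * 2 * np.pi * n * f * t)
    return 2 * np.mean(signal * ref)

for n in (1, 2, 3, 4):
    print("harmonic %d: |sigma_n| = %.3e" % (n, abs(demodulate(sigma, t, omega, n))))
```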
Theory
To extract the dielectric constant of the sample at a particular wavelength from s-SNOM images, we follow the inversion method introduced in Ref. [19]. It is based on the prevailing theoretical description of the s-SNOM, which describes the detected signal, containing both amplitude and phase information, by the scattering coefficient σ = E_s/E_i, where E_s describes the electric field amplitude of the scattered light and E_i the amplitude of the incident radiation [19], [24]. The electric field amplitude at the tip is given by (1 + r)E_i, which accounts for light that is reflected by the sample onto the tip, with r the reflection coefficient of the sample. This electric field polarises the tip, yielding an effective dipole p = α(1 + r)E_i, where α is the effective polarizability of the tip that accounts for the near field interaction between sample and tip. The scattering of this effective dipole leads to an electric field amplitude E_s ∝ (1 + r)p at the detector, assuming that part of the scattered light is also reflected by the sample. The scattering coefficient can then be written as σ = (1 + r)²α, which is complex, as E_s and E_i may have a phase difference [19], [24]. Thus, the signal recorded by the detector is proportional to the effective tip polarizability, which is determined by the interaction of the tip near field with the sample (see below).
The dual s-SNOM relies on this strongly enhanced and localised electromagnetic near field between the sample and the metallic tip, which stems from mechanisms such as the lightning rod effect and localised surface plasmon resonances in the metallic tip [23]. To model this, we used a point dipole model that regards the tip apex as a perfectly conducting sphere with radius a ≪ λ [19], [24]. By treating the sphere as a point dipole within the quasistatic approximation, one obtains an expression for the near field between the tip dipole and the sample by considering a mirror point dipole oriented parallel to the tip dipole [24]. Solving these equations electrostatically yields the effective polarizability of the SNOM tip, which depends on two functions: one related to the height H of the tip above the sample, and one related to h, the height above the Au substrate. In these expressions, ε_t is the dielectric function of the tip material, and the sample enters through β_s = (ε_s − 1)/(ε_s + 1), which depends only on the dielectric function ε_s of the sample [24].
The dual s-SNOM demodulates the detected signal using a lock-in amplifier together with a pseudo-heterodyne detection method [24] to suppress the background. As a result of this background suppression, the dual s-SNOM generates near field images at different harmonics, with each harmonic being sensitive to a different sample depth [25].
The demodulated detected signal at the nth harmonic is described by the complex Fourier coefficient σ_n(H) = ∫ σ(β_s, H(t)) e^{−inΩt} dt of the scattering coefficient σ(β_s, H(t)) from the tip. Since the scattering coefficient depends on many unknown details of the detection pathway, we measure the near field contrast η_n = σ_n/σ_n,ref, where σ_n,ref is the scattering recorded at a reference sample position with a known dielectric value ε_ref (in our case the bare Au substrate). For an unknown dielectric value ε_s of the sample, it is non-trivial to calculate ε_s from η_n because of the non-algebraic relation between σ and ε_s. We therefore use a Taylor expansion of the near field contrast in β_s, Eq. (6) (see Ref. [19] for details), where the leading coefficient is the tip polarizability at the reference sample and the remaining coefficients are expansion coefficients.
When the Taylor expansion is truncated at a specific order, Eq. (6) can be inverted to find β_s. Once β_s is found, it is then trivial to recover the dielectric constant of the sample at a specific wavelength using the relation ε_s = (1 + β_s)/(1 − β_s) [19]. Since we know the distance H between the sample and the tip, and we can input the tip-Au distance h by extracting it from the AFM topography, we can use this inversion method to recover the spatially resolved ε_s from the s-SNOM images. By recording s-SNOM images at different excitation energies ω, we obtain the dispersion of the sample, ε_s(ω).
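A minimal numerical sketch of the final step of this procedure is given below: a toy point-dipole forward model is evaluated on a grid of trial permittivities and compared with a measured contrast, and ε is recovered from β through ε = (1 + β)/(1 − β). The tip and Au permittivities, tapping parameters and measured contrast are assumed values, and the simplified model stands in for the full treatment of Refs. [19, 24].

```python
import numpy as np

def beta(eps):
    """Quasistatic surface response of the sample."""
    return (eps - 1.0) / (eps + 1.0)

def eps_from_beta(b):
    """Invert beta to recover the dielectric constant: eps = (1 + b)/(1 - b)."""
    return (1.0 + b) / (1.0 - b)

def sigma_n(eps, a=25e-9, amp=53.5e-9 / 2, eps_tip=-12 + 1j, n=2, npts=512):
    """Toy point-dipole scattering coefficient demodulated at the nth harmonic.

    a: tip radius, amp: tapping amplitude, eps_tip: assumed tip dielectric value.
    This is a simplified stand-in for the model used in the inversion method.
    """
    alpha_tip = 4 * np.pi * a**3 * (eps_tip - 1) / (eps_tip + 2)
    t = np.linspace(0, 1, npts, endpoint=False)
    H = amp * (1 + np.cos(2 * np.pi * t))            # tip height over one cycle
    alpha_eff = alpha_tip / (1 - alpha_tip * beta(eps) / (16 * np.pi * (a + H)**3))
    return np.mean(alpha_eff * np.exp(-1j * 2 * np.pi * n * t))

# Grid-based inversion of a measured contrast eta = sigma_n(sample)/sigma_n(Au).
eps_au = -8.5 + 1.3j                                  # assumed Au value near 600 nm
eta_measured = 0.92                                   # hypothetical measured contrast
grid = np.linspace(-15, -2, 400)                      # real-valued trial permittivities
errors = [abs(sigma_n(e) / sigma_n(eps_au) - eta_measured) for e in grid]
print("recovered eps_s ~", grid[int(np.argmin(errors))])
```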
Results and Discussion
The WS2/Au sample ( Figure S1, Supplementary Information) was pre-characterised using KPFM and PL spectroscopy; KPFM provides surface potential information while at the same time providing an AFM image outlining the surface morphology of the sample. From the AFM image we identify a WS2 monolayer that covers the Au substrate, Figure 1(a). The WS2 is folded as a bilayer close to a wrinkle on the Au substrate identified as the magenta-shaded area. We furthermore identify an area of the WS2 monolayer surrounding the bilayer that has a different contrast, identified by the blue-shaded region. For clarity of sample orientation for Figure 1(a), (b) and (c), a wrinkle has been highlighted as a red-dashed line.
Concerning the KPFM measurements (Figure 1(b), the same sample area as Figure 1(a), rotated by roughly 8.5 degrees clockwise), the contrast differences show areas of differing work function, determined from the differences in contact potential between the sample and the tip [26]. We find that the surface potential of the WS2 (blue-shaded area) is almost the same as that of the Au substrate (yellow-shaded area) (145 mV), while the surrounding area shows a darker contrast (average surface potential of -175 mV), indicating that charge transfer has occurred between the two materials. The work function of Au at room temperature is ~5.30 eV [27], while the work function of monolayer WS2 is ~4.90 eV [28], indicating that electrons should transfer from WS2 into Au. We note that the KPFM measurements were performed in the "dark" with respect to possible exciton generation in WS2, because the cantilever deflection measurement is done using a 1300 nm laser diode; hence, no photogenerated carriers influence the surface potential [29].
To gain more insight into the reason behind these areas of differing surface potential contrast, we conducted spatially resolved PL measurements [30]. By measuring the PL intensity as a function of laser position, we obtained a PL map, with a spatial resolution of ~570 nm [31] from a diffraction limited laser spot. Figure 1(c) shows the PL intensity map taken at the same sample area as the KPFM. The top part of the map (shaded in blue) shows an area of partially quenched PL, the borders of which bear a strong resemblance to the adhered area around the bilayer in the AFM and KPFM images. Quenched PL arises because of the charge transfer that has occurred in this area [32], backing up the KPFM findings. Moving beyond the borders of this adhered area, the PL increases in intensity while also exhibiting sizable peak shifts throughout this PL intense area of the sample (example shown in S2, Supplementary Information). To confirm these findings, TEPL measurements were conducted at specific positions of the sample as indicated in the PL map, TEPL spectra are shown in Figure 1(d). Like the conventional PL, TEPL spectra show partially quenched PL intensity in the adhered area. The PL intensity increases towards the non-adhered area of the WS2 while exhibiting significant peak shifts (tip up/tip down TEPL spectra shown in Figure S3, Supplementary Information).
A possible reason for the darker KPFM contrast (lower surface potential) in the non-adhered area is carbonaceous contamination of the Au surface, which prevents charge transfer. Mechanical exfoliation of sulphur-based TMDCs on Au substrates is facilitated by Au's affinity for sulphur, which is stronger than the van der Waals forces between the layers of the bulk TMDC crystal [33]. With a prolonged (minutes-long) exposure of the Au surface to air [33], [34], this can lead to the accumulation of airborne organic contaminants on the Au surface, creating a buffer between the Au substrate and the WS2 sample. This very likely applies here, as the mechanical exfoliation was done on an Au substrate by PDMS transfer, which by the nature of viscoelasticity requires a slow and steady exfoliation process [21]. This is further supported by the unquenched PL area, which suggests that charge transfer in this area has been inhibited. In addition, the variations of the PL position and intensity in this area indicate different levels of local strain and doping throughout the WS2 layer caused by the transfer method [8].
The surface roughness and sample contaminations lead to local variations in the dielectric function of WS2, which manifests as a dielectric disorder [10]. It is also important to note that the dielectric function of WS2 has a resonance peak around 620 nm (1.98 eV) [35]. Stress and strain indicated by the PL peak shifts could therefore cause a strong shift in the dielectric value [36].
The dual s-SNOM offers the opportunity to record optical images with nanometre resolution and, from the near field contrast variation, to recover the dielectric values at different frequencies and at different harmonics of the tip frequency; the lower the harmonic, the higher the bulk sensitivity [25]. We recorded s-SNOM images of the pre-characterised area at excitation wavelengths of 594 nm, 604 nm and 614 nm (Figure 2 and Figure S4, Supporting Information). Similar to the KPFM measurements, s-SNOM provides an AFM scan along with the near field scans, shown in Figure 2(a), in which all the previously identified features can be recognised. Figure 2(b) shows an s-SNOM image of the sample area demodulated at the second harmonic (n = 2). The fringes seen in this image are attributed to surface plasmon polaritons (SPPs) from the Au substrate underneath the WS2, with a propagation constant of k_∥ ≈ 0.9 × 10^7 m^-1, similar to Takagi et al. [37], who observed SPPs on Au at this excitation with a propagation constant of k_∥ ≈ 1.1 × 10^7 m^-1, and consistent with other previous studies [38], [39], [40], illustrating the subsurface sensitivity of the second harmonic. Interestingly, the characteristic areas identified in the KPFM and PL measurements are not observable in the second harmonic s-SNOM image. Figure 2(c), the third harmonic (n = 3), and (d), the fourth harmonic (n = 4), clearly show that the higher the harmonic, the easier it is to recognise the various features and correlate them to the areas identified in Figure 1; the triangular bilayer and the area surrounding it have a darker contrast than the detached WS2 area, clearly correlating with the quenching of PL and the brighter KPFM contrast.
Thus, to study these contrasting areas of the monolayer WS2, we consider the more surface sensitive fourth harmonic s-SNOM image. As previously mentioned, the near field contrast is sensitive to the distance between the tip and sample, and to the local dielectric value [41]. In comparing the s-SNOM images (Figure 2 (b), (c), (d)) with the AFM, one can see the effect of bubbles and wrinkles on the near field contrast. The long diagonal wrinkle starting from the bilayer and moving to the bottom left appears in the near field scan as a long black line due to the large height separation between the tip and the Au substrate over this wrinkle. This can be seen in all the bubbles and in another wrinkle on the right-hand side of the s-SNOM images. There is however only negligible height variation between the dark and the bright near field contrast areas in comparison to the wrinkles. The change in contrast between these two areas therefore must stem from variations in the local dielectric value. Using the inversion method (see methods), we were able to transform near field line-scans into linescans featuring the local dielectric constant from the sample. These results are shown in Figure 3. Evaluating these line-scans through the inversion method provides the local dielectric constant of the sample (Figure 3(b) and (e)). The results suggest that the darker contrast area, corresponding to the area of strong WS2/Au adhesion, has a dielectric value (at 594nm) that fluctuates around −9, while the brighter surrounding area has a less negative dielectric value of around −6 (both values are the real part of the dielectric function). The dielectric value increases with a sharp step between the two areas, which is highlighted in grey. Figure 3(c) and (f) feature the AFM topography of these line-scans, to analyse the effect of the tip height from the sample on the near field contrast. However, the height changes do not show any correlation to the step seen in the near field scans, both of which were measured at the same time by the same tip.
To confirm the results of the inversion method shown in Figure 3, we performed SIE on the sample, a well-established far-field method for measuring the dielectric function [42]. SIE measurements were performed on the dark and bright near field contrast areas, with the results depicted as solid black and solid red lines in Figure 4(b) and (c). SIE measurements carried out on the Au substrate are shown in Figure 4(b) and (c) as a solid yellow line. The spectral dependence of the dielectric function of free-standing WS2 is shown in Figure 4(b) and (c) as a dashed grey line [35], offset by a negative value to account for the influence of the Au substrate. SIE has a lateral resolution of approximately 1 µm. For comparison with s-SNOM, we extracted a 1 µm-by-1 µm area from the s-SNOM images (Figure S4, Supporting Information) and calculated the average near field contrast for n = 2 and n = 4. Figure 4(b) and (c) show the average dielectric values for n = 2 and n = 4, respectively, at different excitation wavelengths. The black and red data points are the dielectric values from the bright and dark near field contrast areas (see Figure S4, Supplementary Information for the SNOM images and the areas used for averaging). Figure 4(b) shows good agreement between the dielectric values extracted from the n = 2 s-SNOM contrast and the ellipsometry measurements. In contrast, the dielectric values from the n = 4 s-SNOM contrast deviate from the ellipsometry measurements (Figure 4(c)). This is expected, as ellipsometry is a far field technique that interacts considerably more with the bulk. The penetration depth of SIE through a metal surface is roughly 25 nm at 625 nm [43]; considering the monolayer thickness of the WS2, this makes SIE a good match for the second harmonic. The fourth harmonic (n = 4) data points, featured in Figure 4(c), are more surface sensitive and thus do not follow the same trend as the ellipsometry measurements. The determined dielectric values are in the same negative range (between -6 and -9) as the second harmonic, but they exhibit a slope with a positive trend, emulating free-standing WS2. The black data points diverge more strongly from bulk Au than the red data points, showing that WS2 modifies the dielectric function of Au more in the adhered area than in the less adhered area. Due to the surface sensitivity of the fourth harmonic, we can resolve this dielectric modification by the monolayer WS2, which is not visible in the second harmonic trend or in the SIE data. The larger sensitivity of the fourth harmonic to the surface dielectric disorder is also documented by the larger variation in near field contrast, expressed by the error bars in Figure 4(c), compared with the negligible error bars for the second harmonic in Figure 4(b). To visualise the influence of dielectric disorder at the nanometre scale, we performed TEPL measurements (Figure 5). We recorded PL spectra as a function of tip position along a line moving from the dark to the bright contrast area in steps of 50 nm (the line across the blue, red, and black dots in Fig. 1(e); raw spectra in Figure S6, Supporting Information). We extracted the peak position from the TEPL data (Figure 5(b)) and compared it with the near field contrast of the fourth harmonic (Figure 5(a)). Figure 5(c) shows the PL intensity as a function of tip position, where one can clearly see the transition from the dark contrast area to the bright one.
The grey stripes highlight two features in the near field contrast and the TEPL peak position where there is an increase in the former and a decrease in the latter. It is also clear that where the near field contrast increases, the peak position decreases. This demonstrates a resolution of 50 nanometres for both near field contrast variations and TEPL peak position changes. Additionally, since the near field contrast can be used to calculate the dielectric function of the sample, this leads to local dielectric resolution on the same scale as the near field signal.
With the ability to resolve the local dielectric environment at the nanometre scale, in addition to having access to sub-surface information as shown in Figure 4, the dual s-SNOM system is an excellent choice for characterising dielectric disorder. Since the monolayer surface is not uniformly flat, the Coulomb interaction between charge carriers varies, resulting in local fluctuations of the permittivity [10]. This in turn produces spatial inhomogeneities in the exciton binding energy, which can be seen from the PL peak position shifts in the bright near field contrast area of the sample (Figure 5(b) and Figure S6). KPFM and PL mapping (Figure 1) showed two areas with different levels of charge transfer between sample and substrate. With the TEPL measurements recorded by the dual s-SNOM shown in Figure 5, we were able to reach the same conclusion at much higher resolution, evidenced by the well-defined border and sample roughness in the fourth harmonic s-SNOM image of Figure 2(d).
Considering the capabilities of the s-SNOM shown in this work, it can be used for a variety of applications. Using different TMDC samples will provide deeper insights into the sample-substrate interaction. The sub-surface sensitivity of the fourth harmonic also enables the study of sandwiched 2D samples such as heterostructures encased in an insulator, for example WSe2/MoS2 encapsulated in hBN. Additionally, because of the insights into the dielectric constant it provides, as well as its resolution, it could be useful in experimentally determining the dielectric function of low-dimensionality systems such as individual CNTs and graphene nanoribbons.
Conclusion
We demonstrated the capability of the dual s-SNOM for measuring the spatially resolved dielectric values of nanoscale systems at different excitation energies, enabling determination of the dielectric function with nanometre-scale resolution. As an example, we used a monolayer sample of WS2 exfoliated on Au. By comparing the s-SNOM characterisation of this sample with conventional characterisation methods such as KPFM and SIE, and with far field techniques such as conventional PL mapping, we illustrated the superior resolution of the dual s-SNOM. The dual s-SNOM also provided local dielectric information, because the near field contrast is sensitive to the local dielectric environment of the sample. This was illustrated using an inversion method with which we extracted the local dielectric values at different wavelengths and, thanks to the selective penetration depths of the different near field image harmonics, from the sub-surface and surface of the sample. We believe this could be useful for identifying and characterising interlayer excitons by probing dielectric differences in the sample environment, for probing sandwiched TMDC heterostructures using the different harmonics' penetration depths, and for determining the dielectric function of low-dimensional systems like carbon nanotubes and graphene nanoribbons.
Figure S1 shows an optical microscope image of the sample used in this work. The black square indicates the sample area that was studied; looking closely, one can see a triangular piece of bilayer, which is highlighted in magenta in Figure 1.
Figure S3 shows tip-enhanced photoluminescence (TEPL) spectra measured close to the transition from dark to bright near field contrast, in blue with the tip down and in black with the tip up. With the tip down, a slightly larger intensity and a shift of the peak are seen (the dark area means a shift to higher energy, shown in Figure 5). With the tip up, the intensity decreases slightly as the conventional PL from the bright area dominates, and the PL peak position shifts to lower energy.
Figure S4 shows the s-SNOM images used for the ellipsometry comparison in Figure 4 of the paper, where 1 µm x 1 µm squares of near field contrast values were taken from the bright area (black square) and the dark area (red square). The top row of Figure S4 shows the second harmonic measurements and the bottom row the fourth harmonic measurements; (a) and (c) at 604 nm and (b) and (d) at 614 nm.
Figure S5 shows the phase images that accompany the amplitude measurements seen in Figure 2, with the AFM scan included for comparison. As a result of the poor quality of the phase images, the imaginary part of the dielectric function could not be reliably extracted.
Figure S6 shows the tip-enhanced photoluminescence spectra recorded as a function of tip position. From these spectra we estimate the PL peak position and plot it as a function of tip position in Fig. 5 of the main manuscript.
Digital System Design
The main objective of this chapter is to study and design various combinational circuits, such as circuits for the verification of Boolean expressions, multiplexers, demultiplexers, and code converters, using LabVIEW tools. This chapter aims to make the reader more comfortable with the design of digital systems. The various types of Boolean expressions (SOP and POS), combinational circuits such as adders (half adder and full adder) and subtractors (half subtractor and full subtractor), code converters such as binary to Gray, Gray to binary, BCD to Gray and Gray to BCD, and sequential circuits with the D flip-flop are also implemented using LabVIEW.
Introduction
The field of electronics is classified into two broad groups, namely analog electronics and digital electronics. Analog electronics deals with signals that are continuous in time, such as noise signals or streaming video, whereas digital electronics deals with signals that are discontinuous, or discrete, in time. An electronic amplifier such as an op-amp circuit amplifies continuous signals; such signals are termed analog signals, and the circuits used for such applications are called analog circuits. On the other hand, discrete signals are fed as input to a computer by electronic switches, which have two distinct values, a HIGH level and a LOW level [1]. These discrete signals are further converted into electronic signals with the help of suitable converters. Such signals are called digital signals, and the electronic circuits used for such operations are termed digital circuits.
Boolean Algebra
In a discrete signal, the two distinct values HIGH and LOW correspond to voltage levels of, for example, 5 volts and 0 volts, respectively, and are represented by the values 1 and 0. Algebraic operations performed on these discrete values are defined by Boolean algebra, developed by George Boole, who also developed various theorems for its manipulation and simplification. There is a set of basic definitions, assumed to be true, which define all the information about the system. The following are the basic operations used in Boolean algebra.
• NOT: The NOT of a variable is 1 if and only if the variable itself is 0, and vice versa.
• AND: The AND of two variables is 1 if and only if both variables are 1.
• OR: The OR of two variables is 1 if either (or both) of the variables is 1.
• XOR: The Exclusive-OR of two variables is 1 if either of them, but not both, is 1.
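These definitions can also be checked outside LabVIEW. The short Python sketch below (an illustration only; the chapter itself builds everything graphically in LabVIEW) evaluates the four basic operations over all input combinations:

```python
from itertools import product

# Basic Boolean operations on the values 0 and 1
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

# Combined truth table for the two-input operations
print("A B | AND OR XOR")
for a, b in product((0, 1), repeat=2):
    print(f"{a} {b} |  {AND(a, b)}   {OR(a, b)}   {XOR(a, b)}")
```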
Combinational logic circuits
Two types of circuits exist in digital systems: combinational logic circuits and sequential circuits. A combinational logic circuit is a circuit whose output depends only on the present combination of input states; the operations it performs are described logically by a set of Boolean functions. A sequential logic circuit is a circuit whose output depends on the present input state as well as on past input or output values, with the previous values stored in memory elements.
A combinational circuit consists of input and output variables and basic logic gates that implement the Boolean function. The output signals are generated according to the inputs as well as the logic circuit employed; both inputs and outputs are binary values, either 1 or 0. Figure 1 shows a simple block diagram of a combinational logic circuit with n input variables and m output variables. If there are n inputs to the circuit, there are 2^n possible combinations of input states, but each combination produces only one output state [2]. For instance, if the combinational logic circuit has 2 inputs A and B, there are 4 possible input states.
Design procedure
The following steps are considered while designing a combinational logic circuit.
• The first step is the statement of the problem for which the combinational circuit needs to be designed.
• Definition of the input and output variables and of the variable names for the inputs and outputs.
• Formation and tabulation of truth table which describes the relationship between the input and output.
• The simplified Boolean expression is obtained for each output variable with the help of minimization techniques or the Karnaugh map.
• The logic diagram using logic gates is realized for the simplified expression obtained in the previous step.
• In practical design and real-time implementation, one should aim to use the minimum number of gates.
Adder circuits
A combinational circuit that performs the addition of two bits is called a half adder. When the augend and addend numbers contain more significant digits, the carry obtained from the addition of two bits is added to the next higher-order pair of significant bits. The combinational circuit that performs the addition of three bits is called a full adder. The full adder can also be obtained by using two half adder circuits.
(i) Design of half-adders
A half adder is a combinational logic circuit that uses two inputs (A and B) and two outputs (Sum S and Carry C). Table 1 shows the truth table for the various combinations of inputs and their corresponding outputs. The outputs Sum S and Carry C are obtained and the K-map is used to get the logical equations; the resulting Boolean expressions are Sum = A ⊕ B and Carry = A·B. Figure 2 shows the design and implementation of the half adder circuit in the LabVIEW environment, where the front panel has the two inputs Input A and Input B and the outputs Sum and Carry [3]. The block diagram in the LabVIEW environment shows the logic gate implementation of the expressions obtained above.
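For readers following along without LabVIEW, a behaviourally equivalent half adder can be sketched in a few lines of Python; the table it prints should match Table 1:

```python
def half_adder(a, b):
    """Half adder: Sum = A XOR B, Carry = A AND B."""
    return a ^ b, a & b  # (sum, carry)

print("A B | S C")
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} {b} | {s} {c}")
```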
(ii) Design of full-adders
A full adder is a combinational logic circuit that uses three inputs (A, B and Cin) and two outputs (Sum S and Carry C). Table 2 shows the truth table for the various combinations of inputs and their corresponding outputs. The outputs Sum S and Carry C are obtained and the K-map is used to get the logical equations. Figure 3 shows the design and implementation of the full adder circuit in the LabVIEW environment, where the front panel has the three inputs Input A, Input B and Input Cin and the outputs Sum and Carry. The block diagram in the LabVIEW environment shows the logic gate implementation of the expressions obtained above.
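A corresponding Python sketch (again, only an illustration of the logic realized graphically in LabVIEW) implements the standard full adder equations Sum = A ⊕ B ⊕ Cin and Carry = A·B + Cin·(A ⊕ B), and also shows the construction from two half adders mentioned above:

```python
def full_adder(a, b, cin):
    """Full adder: Sum = A XOR B XOR Cin, Carry = A·B + Cin·(A XOR B)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def full_adder_from_half_adders(a, b, cin):
    """Same result, built from the half_adder defined earlier."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2
```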
Subtractor circuits
A combinational circuit that performs the difference of two bits is called a half subtractor. When the first input (minuend) is 0 and the second input (subtrahend) is 1, a Borrow output is produced. The combinational circuit that determines the difference of three bits is called a full subtractor.
(i) Design of half-subtractor
A half subtractor is a combinational logic circuit that uses two inputs (A and B) and two outputs (Difference D and Borrow B). Table 3 shows the truth table for the various combinations of inputs and their corresponding outputs. The outputs Difference D and Borrow B are obtained and the K-map is used to get the logical equations, Difference = A ⊕ B and Borrow = A′·B. Figure 4 shows the design and implementation of the half subtractor circuit in the LabVIEW environment, where the front panel has the two inputs Input A and Input B and the outputs Difference and Borrow.
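A minimal Python equivalent (illustrative only):

```python
def half_subtractor(a, b):
    """Half subtractor: Difference = A XOR B, Borrow = (NOT A) AND B."""
    return a ^ b, (1 - a) & b  # (difference, borrow)
```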
(ii) Design of Full-subtractor
A full subtractor is a combinational logic circuit that uses three inputs (A, B and Bin) and two outputs (Difference D and Borrow B). Table 4 shows the truth table for the various combinations of inputs and their corresponding outputs. The outputs Difference D and Borrow B are obtained and the K-map is used to get the logical equations.
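As a sketch, the standard full subtractor equations D = A ⊕ B ⊕ Bin and Borrow = A′·B + Bin·(A ⊕ B)′ can be expressed in Python as:

```python
def full_subtractor(a, b, bin_):
    """Full subtractor: D = A XOR B XOR Bin,
    Borrow = (NOT A)·B + Bin·NOT(A XOR B)."""
    d = a ^ b ^ bin_
    bout = ((1 - a) & b) | (bin_ & (1 - (a ^ b)))
    return d, bout
```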
Multiplexer and demultiplexer
The most important forms of combinational circuit, widely used in the field of communication, are the multiplexer and demultiplexer circuits. The multiplexer circuit is used to transfer a large number of information-carrying channels onto a smaller number of channels. Such a circuit used to transmit digital data or binary information is called a data selector or digital multiplexer. In a data selector, the input line is selected according to the combination of the select lines: if there are 2^n input lines, then the number of select lines is n and there is only one output line. For example, in a 4x1 multiplexer the number of input lines is 4 (2^2), which means there are 2 select lines. In these multiplexer circuits the inputs are named I0, I1, I2 and I3 and the two select lines are named S0 and S1. Table 5 shows the various combinations of the select lines and the corresponding input line that is selected and obtained as the output Y [4]. The Boolean expression for the output Y is Y = S1′S0′·I0 + S1′S0·I1 + S1S0′·I2 + S1S0·I3. Figure 6 shows the front panel and block diagram of the 4x1 multiplexer. This design can be extended to higher versions such as 8x1 and 16x1 multiplexers.
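The same selection logic can be sketched in Python, either as the sum-of-products expression above or as a direct index (illustrative only):

```python
def mux4(i0, i1, i2, i3, s1, s0):
    """4x1 multiplexer: Y = S1'S0'I0 + S1'S0·I1 + S1S0'·I2 + S1S0·I3."""
    n = lambda x: 1 - x  # NOT for 0/1 values
    return (n(s1) & n(s0) & i0) | (n(s1) & s0 & i1) | \
           (s1 & n(s0) & i2) | (s1 & s0 & i3)

def mux4_index(i0, i1, i2, i3, s1, s0):
    """Equivalent form: treat (S1 S0) as a binary index into the inputs."""
    return (i0, i1, i2, i3)[(s1 << 1) | s0]
```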
A demultiplexer performs the opposite operation of a multiplexer: in this combinational circuit there is one input channel and the data is distributed over several output channels. Therefore, if the number of input channels is 1 and there are n select lines, there are 2^n output channels. The combination of select lines controls the output channel through which the input data is transmitted. Table 6 gives the truth table for the 1-to-8 demultiplexer; the front panel and block diagram for the 1-to-8 demultiplexer are shown in Figure 7.
The selection input lines S0, S1 and S2 are activated according to the bit combination for each output, as given in Eq. (10) to Eq. (17). For instance, if the selection input combination is 010, the input I is transmitted to Y2; each output line is the product of the input I with the minterm of its selection code, for example Y2 = S2′·S1·S0′·I.
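A Python sketch of the 1-to-8 demultiplexer routing (illustrative only):

```python
def demux8(i, s2, s1, s0):
    """1-to-8 demultiplexer: route input I to output Y_k, k = (S2 S1 S0) in binary."""
    k = (s2 << 2) | (s1 << 1) | s0
    outputs = [0] * 8
    outputs[k] = i
    return outputs  # [Y0, Y1, ..., Y7]
```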
Code converters
For the same discrete elements of information, several different codes are available, so different digital systems may use different codes. It is sometimes necessary to connect two digital blocks that use different coding systems; hence a conversion circuit is designed and implemented between the two digital systems so that the information of one system can be used by the other. The input lines provide the bit combinations of the elements as specified by binary code A, and the output is generated as the bit combinations of code B. The code converter circuit consists of logic gates that perform this transformation. A few code conversion techniques are discussed below. Table 7 shows the conversion of the 4-bit binary code to its equivalent Gray code values. The 4-bit binary input is defined as B0, B1, B2, B3 and the corresponding 4-bit Gray output is defined as G0, G1, G2 and G3, as shown in Figure 8. Taking B3 as the most significant bit, the Boolean expressions for binary to Gray code conversion are G3 = B3, G2 = B3 ⊕ B2, G1 = B2 ⊕ B1 and G0 = B1 ⊕ B0.
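A Python sketch of this conversion, assuming B3 is the most significant bit (the expressions change if the bits are ordered the other way):

```python
def binary_to_gray(b3, b2, b1, b0):
    """Binary -> Gray (B3 = MSB): G3 = B3, G2 = B3^B2, G1 = B2^B1, G0 = B1^B0."""
    return b3, b3 ^ b2, b2 ^ b1, b1 ^ b0
```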
Gray to binary code converters
Gray code, also called Reflected Binary Code (RBC), reflected binary (RB) or cyclic code, is defined as an ordering of the binary number system such that successive values differ in only one bit. The key property of this code is that while traversing from one step to the next, only one bit in the code group changes, as shown in Figure 9. Gray code is not suitable for arithmetic operations, but it is used in analog-to-digital converters as well as in error correction techniques in digital communications (Table 8).
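The inverse conversion can be sketched in the same way: each binary bit is the XOR of the previously computed binary bit with the current Gray bit (again assuming G3 is the MSB):

```python
def gray_to_binary(g3, g2, g1, g0):
    """Gray -> binary (G3 = MSB): B3 = G3, B2 = B3^G2, B1 = B2^G1, B0 = B1^G0."""
    b3 = g3
    b2 = b3 ^ g2
    b1 = b2 ^ g1
    b0 = b1 ^ g0
    return b3, b2, b1, b0

# Round-trip check against the forward converter
assert all(gray_to_binary(*binary_to_gray(n >> 3 & 1, n >> 2 & 1, n >> 1 & 1, n & 1))
           == (n >> 3 & 1, n >> 2 & 1, n >> 1 & 1, n & 1) for n in range(16))
```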
Seven segment decoder
A digital decoder IC is a device that converts one digital format into another, and one of the most commonly used devices for doing this is the binary-coded decimal (BCD) to 7-segment display decoder. The 7-segment light emitting diode (LED) display provides a convenient way of displaying information or digital data in the form of numbers, letters and alphanumeric characters. Typically, 7-segment displays consist of seven same-coloured LEDs (called segments) within a single display package. In order to display the correct character or number, the correct combination of LED segments has to be illuminated. The LabVIEW program demonstrates the illumination of each segment by displaying hex values (0000 through FFFF) in decimal form from 0 through 9 and A through F. The standard 7-segment LED display has eight input connections, one for each LED segment and one that acts as a common terminal or connection for all internal display segments. Some displays also have an additional input pin for displaying a decimal point.
Types of digital display
There are two important types of 7-segment LED displays, namely common cathode and common anode. In a common cathode display (CCD), all cathode connections of the LEDs are joined together to a low logic level, i.e. ground or 0 [5]. An individual segment is illuminated by applying a high logic level, i.e. +Vcc or 1, to its anode terminal. In a common anode display (CAD), all anode connections of the LEDs are joined together to a high logic level, i.e. +Vcc, and individual segments are illuminated by connecting their cathode terminals to a low logic level or ground, as shown in Figure 10. The Boolean expressions for the segment outputs are obtained from the decoder truth table in the same way as for the circuits above.
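The decoder can also be expressed as a simple lookup table. The sketch below assumes a common-cathode display and the conventional segment order a–g; the exact patterns depend on how the display is wired, so treat the values as illustrative:

```python
# Segment drive patterns (a, b, c, d, e, f, g) for the hex digits 0-F,
# with 1 = segment lit, assuming a common-cathode display.
SEGMENTS = {
    0x0: "1111110", 0x1: "0110000", 0x2: "1101101", 0x3: "1111001",
    0x4: "0110011", 0x5: "1011011", 0x6: "1011111", 0x7: "1110000",
    0x8: "1111111", 0x9: "1111011", 0xA: "1110111", 0xB: "0011111",
    0xC: "1001110", 0xD: "0111101", 0xE: "1001111", 0xF: "1000111",
}

def decode_7seg(digit):
    """Return the tuple of segment drive bits (a..g) for a hex digit 0-15."""
    return tuple(int(bit) for bit in SEGMENTS[digit & 0xF])
```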
Conclusion
This chapter has given an overview of the design of combinational logic circuits in LabVIEW. LabVIEW is a graphical programming tool that helps the designer simplify the design work. The approach can be further extended to the design of sequential circuits as well as PLA and PAL logic.
Two loop QCD amplitudes for di-pseudo scalar production in gluon fusion
We compute the radiative corrections to the four-point amplitude g+g → A+A in massless Quantum Chromodynamics (QCD) up to order α_s^4 in perturbation theory. We used the effective field theory that describes the coupling of pseudo-scalars to gluons and quarks directly, in the large top quark mass limit. Due to the CP odd nature of the pseudo-scalar Higgs boson, the computation involves careful treatment of chiral quantities in dimensional regularisation. The ultraviolet finite results are shown to be consistent with the universal infrared structure of QCD amplitudes. The infrared finite part of these amplitudes constitutes an important component of any next-to-next-to-leading order corrections to observables involving a pair of pseudo-scalars at the Large Hadron Collider.
loops compared to those in the full theory. Unlike the case of the CP even Higgs boson, the inclusive cross section for the production of the pseudo-scalar Higgs is known [15][16][17] only up to next-to-next-to-leading order (NNLO) in pQCD. For N3LO predictions, one requires three loop virtual amplitudes and real emission contributions. The computation of the virtual corrections is technically challenging [18] as the pseudo-scalar Higgs boson couples to SM fields through two composite operators that mix under renormalisation. In addition, these operators involve the Levi-Civita tensor and γ5, which are hard to define in dimensional regularisation. The three loop form factor thus obtained was later combined with the appropriate soft distribution function [19][20][21] and mass factorisation kernels to obtain the soft plus virtual contribution at N3LO in QCD [22]. Later, the process dependent resummation constants from the three loop form factors were used to perform threshold resummation in [23] and also to make approximate predictions at N3LO level. This was possible due to the similarity of the interaction vertices of scalar and pseudo-scalar Higgs bosons with the gluons.
Recently, there has been a surge of interest in studying the production of a pair of Higgs bosons to determine the Higgs self coupling, whose strength is a prediction of the SM once the mass of the Higgs boson is known. Measurement of this coupling will provide an independent test of the nature of the Higgs boson. The gluon gluon fusion subprocess producing a pair of Higgs bosons through a heavy quark loop [13,24] is the dominant one at the LHC; however, the cross section is only a few tens of fb, making it very difficult to observe. QCD corrections not only increase the cross section but also stabilise the predictions against the renormalisation (µ_R) and factorisation (µ_F) scales. NLO QCD corrections were obtained in [14], and later the top quark mass effects were systematically taken into account in [25][26][27][28][29][30]. Beyond NLO, an EFT where the top quark degrees of freedom are integrated out is used. At present, the production of a pair of Higgs bosons in the EFT is known to N3LO level [31]; for NLO and NNLO, see [25][26][27][28][29][30][32][33][34]. All the two loop virtual amplitudes for g + g → hh that are required for the N3LO cross section for di-Higgs production were obtained in [35]. The production of di-Higgs bosons through bottom quark annihilation was obtained up to NNLO level in [36]. In [37][38][39], the fully differential results at NNLO level are presented. While there has been a flurry of activity in the context of the scalar Higgs boson, very little is known so far about the production of a pair of pseudo-scalar Higgs bosons at the LHC. In [14], the LO contribution keeping the finite top mass, and the NLO contributions in the EFT framework where the top quark degrees of freedom are integrated out, have been obtained. Like the production of a single pseudo-scalar Higgs boson, pair production is also important to understand the nature of the extended Higgs sector. In order to reduce the theoretical uncertainties, it is important to have the QCD radiative corrections under control. Thanks to the EFT, it is now possible to go beyond NLO with the available tools to make precise as well as stable predictions with respect to the unphysical scales. At NNLO level, we require two loop virtual, one loop single real emission and double real emission amplitudes. In this article, as a first step towards going beyond NLO QCD corrections, we compute all the one and two loop amplitudes that can contribute to the pure virtual part of the cross section in dimensional regularisation and perform ultraviolet (UV) renormalisation to obtain UV finite results.
The paper is organised as follows: in section 2, we describe how the two loop virtual amplitudes are computed. In particular, we introduce the effective Lagrangian and the relevant kinematics, describe how the projector method can be applied to obtain the scalar parts of the amplitudes, the subtleties involved in defining the Levi-Civita tensor and γ5 in dimensional regularisation, the ultraviolet renormalisation of the strong coupling, the overall renormalisation of the composite operators and the finite renormalisation for γ5. In section 3, the computation of the amplitudes and their infrared (IR) structure are briefly discussed. In section 4, we summarise our results and conclude.
2 Theoretical framework
Effective Lagrangian
We work with the effective Lagrangian [40] that describes the interaction of the pseudoscalar field Φ A (x) with the gauge field G aµν and the fermion ψ: The pseudo-scalar gluonic (O G (x)) and the light quark (O J (x)) operators are defined as where f abc is the SU(3) structure constant and µνρσ is the Levi-Civita tensor. The pseudoscalar fermionic operator is the derivative of the flavour singlet axial vector current The effective Lagrangian is obtained after integrating out the top quark fields in the large top mass limit. Hence, the corresponding Wilson coefficients C G and C J depend on the mass of the top quark m t . As a result of the Adler-Bardeen theorem [41], there is no QCD correction to C G beyond one-loop level. On the other hand, C J begins only at secondorder in the strong coupling constant a s ≡ g 2 s /16π 2 = α s /4π. The Wilsons coefficients are given by where G F is the Fermi constant, cot β -the ratio of the vacuum expectation values of the two Higgs doublets, in a model where the CP is not spontaneously broken. C F is the quadratic Casimir in the fundamental representation of QCD and µ R is the renormalisation scale at which a s is renormalised. We use the effective lagrangian (2.1) to obtain amplitudes for the production of pair of pseudo-scalar Higgs bosons A of mass m A up to two loop level in perturbative QCD. We restrict ourselves to the dominant gluon fusion subprocess:
where p1 and p2 are the momenta of the incoming gluons, with p1² = p2² = 0, and p3 and p4 are the momenta of the outgoing pseudo-scalar Higgs bosons, with p3² = p4² = m_A². The Mandelstam variables for the above process are s = (p1 + p2)², t = (p1 − p3)² and u = (p1 − p4)², which satisfy s + t + u = 2m_A². It is convenient to express these amplitudes in terms of the dimensionless variables x, y and z, which obey the constraint x⁻¹ + x = y + z.
As in the case of di-Higgs production amplitude via gluon fusion [24], the di-pseudo scalar production amplitude, can also be decomposed in terms of two second rank Lorentz tensors T µν i (i = 1, 2), as follows: where µ (p i ) are the polarisation vectors of the initial state gluons. The Lorentz scalar functions M i , i = 1, 2 are independently gauge invariant. δ ab indicates that there is no colour flow from initial to final state. The second rank tensors are given by with p 2 T = (tu − m 4 A )/s is the transverse momentum square of the pseudo-scalar Higgs boson expressed in terms of the Mandelstam variables. The tensor T µν 1 depends only on the initial state momenta p 1,2 . Using momentum conservation, it can be seen that T µν 2 is symmetric under the interchange of the two pseudo-scalar Higgs momenta. The scalar functions M 1,2 can be obtained from M µν ab , by using appropriate d-dimensional projectors P µν i,ab with i = 1, 2, respectively and the projectors are given by: 12) where N corresponds to the SU(N ) colour group. In the following, we briefly discuss on the type of Feynman diagrams that contribute up to order O(a 4 s ) in QCD. To evaluate the 4-point amplitude g + g → A + A to any order in a s , one needs to calculate the contributing diagrams to that particular order and evaluate the scalar functions M 1,2 , using the projectors P µν i,ab , i = 1, 2. Using the effective Lagrangian eq. (2.1), the higher order corrections to g+g → A+A amplitude are calculated in massless QCD. There are two types of diagrams that contribute to this process. We JHEP02(2020)121 classify them as type-I and type-II. The form factor type diagrams where a pair of gluons annihilate to a single A, which branches into a pair of As belong to type-I and type-II contains t and u channel diagrams where each A is coupled to pair of gluons, or to quarks. In type-I, we have two classes of diagrams: type-Ia (figure 1 left panel) which contains only four point AAgg effective vertex and type-Ib (figure 1 right panel) containing both AAg and AAA vertices. These diagrams contribute at tree level (O(a s )) and we need to calculate them to O(a 4 s ) i.e., up to 3-loop order. Since these diagrams are related to form factors of O G between gluons states and O J between quark and gluon states, we can readily obtain them from [18,42,43].
The type-II diagrams consist of (a) two Agg effective vertex (figure 2) and (b) one Agg effective vertex and one Aqq effective vertex as shown in figure 3. Due to the axial anomaly, the pseudo-scalar operator for the gluonic field strength mixes with the divergence of the singlet axial vector current. The Agg effective vertex is proportional to the C G Wilson coefficient (eq. (2.4)) which is constrained to order O(a s ) due to the Alder-Bardeen Theorem. The tree level diagram in type-IIa (figure 2) starts at order O(a 2 s ) and each higher loop order adds an order O(a s ). The Aqq effective vertex is proportional to C J , the Wilson coefficient (eq. (2.5)) which starts at order O(a 2 s ). The type-IIb diagrams (figure 3) which consist of one Agg effective vertex and one Aqq effective vertex start at one loop level at O(a 4 s ). Since, type-I diagrams are known to required order in a s , the results presented in this paper will mainly include the type-II amplitudes up to two loops in massless perturbative QCD i.e. order O(a 4 s ). We use dimensional regularisation (d = 4 + ) to regularise both UV and IR singularities which appear as poles in in the UV, soft and collinear regions. Since we will have to deal with the Levi-Civita tensor in O G operator and γ 5 in O J operator, both of which are constructs inherently in 4-dimensions, a consistent method to deal with them in 4+ dimensions is essential. We discuss the details of a consistent and practical prescription to go over to 4 + and its implications in the next section. Hence, the scalar amplitudes M i can be written as a sum of amplitudes resulting from types-I and II diagrams as and in the following we concentrate only on M II i .
γ 5 within dimensional regularisation
Due to the axial anomaly, the pseudo-scalar gluonic operator O G = µνρσ G aµν G aρσ is related to the divergence of the axial vector current O J = ∂ µ (ψγ µ γ 5 ψ). Computation of higher order corrections with chiral quantities, involve inherently d = 4 dimensional objects like γ 5 and the Levi-Civita tensor µνρσ , and this warrants a prescription in going away from 4-dimension i.e. d = 4 + . There exist several prescriptions to deal with γ 5 in dimensional regularisation [44,45]. In multi-loop computations that use dimensional regularisation, we use the self-consistent prescription for γ 5 that was proposed by 't Hooft and Veltman [44].
In this prescription, one defines γ 5 as where Levi-Civita tensor is purely 4-dimensional, while the Lorentz indices on the γ µ i are in d = 4 + dimensions. To maintain the anti-commuting nature of γ 5 with d-dimensional γ µ i , the symmetrical form of the axial current has to be used this is in concurrence with the above definition of γ 5 in eq. (2.14), and will lead to The O G and O J operators now take the form Contraction of two Levi-Civita tensors that result from either O G operator or the mixing of O G and O J operators is given by the Lorentz indices in this determinant, could now be considered as d-dimensional and the consequence would be, addition of only the inessential O( ) terms to the renormalisated quantity [46]. This prescription though is not without consequence -a finite renormalisation of the axial vector current [47] is required in order to fulfill the chiral Ward identities and the Adler-Bardeen theorem. This will be discussed further in the next section. JHEP02(2020)121
UV renormalisation, operator renormalization and mixing
In dimensional regularisation with d = 4 + , the bare strong coupling constant denoted bŷ a s is related to its renormalized coupling by a ŝ 20) up to O(a 3 s ). β i are the coefficients of the QCD β function and are given by [48] where n f is the number of flavors and T F = 1/2. As we work in an effective theory obtained after integrating out the top quark fields in the large top quark mass limit, n f = 5. The Casimirs of SU(N) are given by C F and C A : For type-I diagrams which begin to contribute at LO, the Z as up to order O(a 3 s ) will be needed while for type-II diagrams, one order lower is sufficient.
Apart from the renormalisation of strong coupling in the massless QCD, the amplitudes require the renormalisation of vertices resulting from the composite operators O G and O J of the effective Lagrangian eq. (2.1). The renormalised operators are denoted by [ ] parenthesis, while the bare quantities without the parenthesis.
The renormalisation of O J is related to the renormalisation of the singlet axial vector current J µ 5 which needs the standard overall UV renormalisation constant Z s M S and a finite renormalisation constant Z s 5 . The later is necessary in dimensional regularisation in order to ensure the nature of operator relation resulting from axial anomaly [49] [ with the corresponding renormalisation constants Z GG and Z GJ . Combining the above two equations in a matrix form, we have where Z JG = 0 to all orders in perturbation theory and Z JJ ≡ Z s 5 Z s M S . The renormalisation constants required for above equation are available up to O(a 3 s ) [46,50] which was computed using OPE. For earlier works on this, see [51,52]. Using a completely different method the same quantities were calculated by some of us [18] and found to be in full agreement. The UV renormalisation constant of the singlet axial vector current J µ 5 in the M S scheme is , (2.28) and the finite renormalisation constant Z s 5 is The renormalisation constants for O G and O J operators up to two loops are given by Similarly for M II GJ,g , we find where M II(1) We find that the UV singularities that appear at one-loop and two-loop levels can be taken care of by the coupling constant renormalisation Z as and operator renormalisation Z ij . At this point we would like to stress that there could be additional contact terms required as a result of the behaviour of product of operators O G O G or O G O J at short distances. As shown in [50], we find that there are no contact terms as a result of these product of operators at short distances. For earlier works on this, see [51,52].
Calculation of the amplitude
Our task of computing the amplitude g + g → A + A has reduced to the type-II diagrams up to O(a 4 s ). This involves diagrams with two Agg effective vertices, up to two-loop level in QCD (Type-IIa) and diagrams with one Agg effective vertex and one Aqq effective vertex which involves terms up to one loop in QCD (Type-IIb). Diagrams involving two Aqq effective vertex start at O(a 5 s ) and are not considered here. Applying the projectors P µν i,ab on the amplitudes, we extract the scalar coefficients M i with i = 1, 2 at every order in the perturbation. All the tree level, one loop and two loop Feynman diagrams in massless QCD are generated using QGRAF [53] where additional vertices resulting from effective lagrangian eq. (2.1) are incorporated. There are two tree level diagrams, 35 one-loop diagrams and 789 two-loop diagrams of type-IIa. For type-IIb which involves effective quark and gluon couplings to pseudo-scalar Higgs, there are no tree level diagrams but 8 diagrams that contribute at one-loop which suffices to generate diagrams up to O(a 4 s ). The raw QGRAF
output is converted with the help of in-house codes based on FORM [54] to include appropriate Feynman rules and to perform trace of Dirac matrices, contraction of Lorentz indices and colour indices. At this stage, we encounter huge number of one and scalar 2-loop Feynman integrals, which contain a set of propagator denominators and a combination of scalar products between loop momenta and independent external momenta. These Feynman integrals can be classified in terms of propagator denominators, that they contain. It is hence important to identify the momentum shifts that are required to express each of these diagrams in terms of a standard set of propagators called auxiliary topology. We use REDUZE2 package [55] to achieve this. The auxiliary topologies needed for the present case are same as those found in vector boson pair production [56,57] at two loops. As expected these large number of scalar integrals are not all independent. To establish the relations, some properties of the Feynman integrals in dimensional regularisation are used. Exploiting the fact that, the total derivative with respect to any loop momenta of these integrals, evaluates to a surface term, which vanishes, leads to integration-byparts (IBP) identities [58,59]. In addition, the fact that all integrals are Lorentz scalars, gives rise to Lorentz invariance (LI) identities [60]. As a result, these integrals can in turn be expressed in terms of a much smaller set of integrals which are irreducible and appropriately called master integrals (MI). Several automated computer algebra packages are available [55,[61][62][63][64] that use the Laporta algorithm [65] to reduce these Feynman integrals to the MIs. We have used the Mathematica based package LiteRed [64] to perform the reductions of all the integrals to MIs. At one-loop, there are 10 MIs, while at two-loop the number is 149. These two-loop MIs are the same as two-loop four-point functions with two equal mass external legs. The analytical result for the each MI in terms of Laurent series expansion in is given in [56,57].
At this stage, the renormalisation of the strong coupling constant and of the operators O G and O J , described in section 2.3, removes all the UV singularities. The singularities that still remain are purely of infrared origin and the next section is devoted to it.
Infrared factorization
The UV finite amplitudes that we have computed contain only divergences of infrared origin, which appear as poles in the dimensional regularisation parameter ε. They are expected to cancel against real emission diagrams for IR safe observables. While these singularities disappear in physical observables, the amplitudes beyond leading order show a very rich universal structure in the IR. In [66], Catani predicted the IR poles of two-loop n-point UV finite amplitudes in terms of certain universal IR anomalous dimensions. Later, in [67], factorisation and resummation properties of QCD amplitudes were used to understand the IR structure, and subsequently attempts were made to predict the structure of IR poles beyond two loops in [68,69]. Following [66], we obtain the pole structure of our amplitudes in terms of the IR singularity operators I_g^(1)(ε) and I_g^(2)(ε).
At one loop level, it is straightforward to show analytically that the IR poles are in agreement with the predictions. For the two loop case, a fully analytical comparison was possible only for the poles ε^−i with i = 2–4. However, due to the large file size of the ε^−1 pole term, we made that comparison only at the numerical level. We found full agreement with the predictions of Catani up to two loop level for all the IR poles. Having established the IR pole cancellations, the finite part defined in eq. (3.1) can be extracted by subtracting the IR poles. Expressions for the finite part are too large to be presented here; however, they are provided as supplementary material attached to this paper. The finite part of the amplitude corresponding to projectors 1 and 2 is plotted, after appropriate normalisation, as a function of x for different values of cos θ.

4 Discussion and conclusions

In this paper, we have presented the two loop virtual amplitudes that are relevant for studying the production of a pair of pseudo-scalar Higgs bosons in the gluon fusion subprocess at the LHC. This is the dominant subprocess that is sensitive to the pseudo-scalar self coupling. We have done this computation in the EFT where the top quark degrees of freedom are integrated out. In the EFT, the pseudo-scalar Higgs boson couples directly to gluons and light quarks through two local composite operators, O_G and O_J respectively, with strengths proportional to Wilson coefficients that are calculable in perturbative QCD. We used dimensional regularisation to regulate both UV and IR divergences. The composite operators, being CP odd, contain the Levi-Civita tensor and γ5, which are inherently four dimensional objects; hence a careful treatment was needed to deal with them in d dimensions. We followed the prescription advocated by Larin. This requires additional renormalisation of the singlet axial vector current up to two loops; in addition, Larin's prescription requires a finite renormalisation constant for the singlet axial current, which is also available. Note that the composite operators mix under UV renormalisation. The corresponding renormalisation constants are already known and we use them to obtain UV finite two loop amplitudes. Unlike the amplitudes involving a pair of Higgs bosons, we do not need any UV contact counter terms. The UV finite amplitudes thus obtained contain IR divergences due to the presence of massless partons in QCD. We found that these IR poles are in agreement with the predictions by Catani, which provides a test of the correctness of the computation. Our results provide one of the important components relevant for studies of the production of a pair of pseudo-scalar Higgs bosons at the LHC up to order O(a_s^4).
A facile synthesis of a cobalt nanoparticle–graphene nanocomposite with high-performance and triple-band electromagnetic wave absorption properties
Ferromagnetic metal nanoparticle/graphene nanocomposites are promising as excellent electromagnetic (EM) wave absorption materials. In this work, we used a facile method to synthesize a cobalt nanoparticle–graphene (CoNP–G) nanocomposite. The obtained CoNPs–G exhibited a saturation magnetization (Ms) of 31.3 emu g−1 and a coercivity (HC) of 408.9 Oe at 298.15 K. In particular, the CoNPs–G nanocomposite provided high-performance EM wave absorption with multiband, wide effective absorption bandwidth, which was mainly attributed to the synergy effects generated by the magnetic loss of cobalt and the dielectric loss of graphene. In the range of 2–18 GHz, the sample (55 wt% CoNPs–G) held three effective reflection loss (RL) peaks (frequency ranges of 2.4–3.84, 7.84–11.87 and 13.25–18 GHz, respectively, RL ≤ −10 dB) with the coating thickness of 4.5 mm, and the effective bandwidth reached the maximum of 10.22 GHz, and the minimal RL reached −40.53 dB at 9.50 GHz. Therefore, the CoNPs–G nanocomposite presents a great promising application in the electromagnetic wave absorption field.
Introduction
With the development of electronic devices, electromagnetic (EM) wave interference and radiation problems, such as harm to human bodies and threats to information safety, are increasing. The fabrication of high-efficiency EM wave absorption materials with low density, a wide effective frequency range and strong absorption properties [1][2][3][4][5][6][7] has attracted more and more attention in civil and military fields. Several kinds of materials have been applied in EM wave absorption, for instance ferromagnetic metals and carbon materials.
Because of their large saturation magnetization (M_s) and their Snoek limit at high frequency, ferromagnetic metals and their alloys, especially cobalt (Co), exhibit strong absorption intensity, whereas the frequency ranges are usually narrow. [8][9][10][11][12][13][14][15] According to the literature, nano/microstructured Co materials with large anisotropy field, such as hollow porous cobalt spheres, 9 hierarchical sword-like cobalt particles 10 and cobalt nanoplatelets, 13 have been synthesized and applied for EM wave absorption. However, the above-mentioned materials have various disadvantages when used alone. For instance, as coating layers, the high density of metals hinders their practical applications. Moreover, they are easily oxidized and agglomerated when exposed to air, which leads to a decrease of the EM wave absorption capacity and greatly limits their application. In this direction, more advanced EM materials should be designed to provide high absorption capacity, a wide absorption frequency range, light weight, good thermal stability and antioxidative capability. [16][17][18] Carbon materials, such as graphite, mesoporous carbon, carbon nanotubes and graphene, have attracted much attention as EM wave absorbers since World War II. They can produce eddy currents under the action of electromagnetic waves and convert the electromagnetic energy into heat. Importantly, they can partly meet the demands on advanced EM materials. Among these carbon materials, we pay particular attention to graphene (G) owing to its better microwave absorption properties compared with graphite and carbon nanotubes. 19 G, a two-dimensional nanostructured carbon material with large specific surface area, high conductivity, flexibility, corrosion resistance and low density, 20-22 may be applied as a lightweight EM wave absorber. However, it usually exhibits low absorption performance because its conductivity and electromagnetic parameters are too high to meet the requirement of impedance matching. 19,23,24 Adding magnetic loss nanoparticles (Co3O4, Fe3O4, Co, Ni, NiFe2O4, CoFe2O4, etc.) into G is an effective way to deal with this, as it can alter the values of the dielectric permittivity and magnetic permeability. [25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44] Briefly, the aforementioned composites improve the absorption of electromagnetic energy because they combine the magnetic properties of ferromagnetic nanomaterials with the excellent electrical conductivity of graphene. Nevertheless, these composites are usually synthesized via solvothermal, hydrothermal or calcination methods, which require more energy. Furthermore, the products provide EM wave absorption with a single absorption peak at a given coating thickness; besides, only a few materials 45,46 simultaneously achieve a minimal reflection loss (RL) of −40 dB and an effective absorption bandwidth (RL ≤ −10 dB) of ∼4 GHz.
In this study, we present a liquid phase reduction method to prepare the Co nanoparticle–graphene (CoNPs-G) nanocomposite. A CoNPs-G nanocomposite with excellent EM wave absorption properties was obtained by controlling the concentration and proportion of the reactants. In addition, the method is simpler and consumes less energy than solvothermal, hydrothermal and calcination methods. The microwave absorption data of the CoNPs-G nanocomposite showed that the synthesized nanocomposite provides high-performance EM wave absorption with multiple bands and a wide effective absorption bandwidth; the likely reason is the combination of the magnetic loss of cobalt and the resistance loss of graphene. 45 To the best of our knowledge, metal–graphene or metal oxide–graphene EM wave absorption materials that have three absorption peaks in the range of 2-18 GHz at a given coating thickness, an effective absorption bandwidth (RL ≤ −10 dB) of more than 8 GHz, and a minimal reflection loss (RL) reaching −40 dB have rarely been reported. Moreover, this route is likely to be extendable to the preparation of other metal–graphene EM wave absorption materials.
Synthesis of samples
Graphene oxide (GO) was prepared from powdered flake graphite (400 mesh) by a modified Hummers' method as described previously. In a typical synthesis procedure, a certain amount of GO was dispersed in 10 mL D.I. water, with ultrasonication for 0.5 h. Then 4 g NaOH was added into the solution with vigorous stirring. When the solution had cooled to room temperature, 10 mL of a CoCl2 solution of a given concentration was added dropwise. After that, 1 mL N2H4·H2O was added into the above solution and the reaction system was kept stirring for 1 h. After the reaction, the dark precipitate was separated and washed with D.I. water and ethanol at least three times each, and then dried by vacuum freeze-drying for 24 h.
Characterization
A field emission scanning electron microscope (SEM, JSM-7500F, Japan) equipped with energy dispersive X-ray spectroscopy (EDS) was used for morphology and elemental analysis of the samples, and a transmission electron microscope (TEM, G2F20S, FEI-Tecnai, USA) was used for transmission electron microscopy analysis. The crystal structure of the as-synthesized samples was identified by X-ray diffraction (X'Pert Pro MPD, Philips, The Netherlands), using Cu Kα (λ = 0.154249 nm) radiation. The magnetic properties, coercive force and magnetization, were analyzed with a vibrating sample magnetometer (VSM, Lakeshore, model 7410 series) at 298 K. X-ray photoelectron spectroscopy (XPS, AXIS Ultra DLD, Kratos Analytical) was employed for surface chemical analysis and qualitative elemental analysis, using monochromated Al Kα excitation radiation (1486.6 eV).
Electromagnetic wave absorption measurements
To investigate the EM wave absorption properties of the obtained absorbers, paraffin was selected as the matrix material. A sample containing 55 wt% of the obtained material was pressed into a ring with an outer diameter of 7 mm, a thickness of 2.5-3.5 mm and an inner diameter of 3 mm for the EM measurement. The complex permittivity (ε_r = ε′ − jε″) and relative complex permeability (µ_r = µ′ − jµ″) were determined using the T/R coaxial line method in the range of 2-18 GHz with a network analyzer (Agilent Technologies N5244A). The reflection loss (RL) curves, calculated from the relative permeability and permittivity at a given frequency and absorber thickness, were employed to evaluate the electromagnetic wave absorption properties. 28 The RL was calculated according to the standard transmission-line equations, in which f is the frequency of the electromagnetic wave, d is the thickness of the absorber, c is the velocity of light, µ_r and ε_r are the relative complex permeability and permittivity, respectively, Z_0 is the impedance of free space, and Z_in is the input impedance of the absorber.
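The RL equations themselves were lost in this excerpt; as a sketch, the metal-backed single-layer transmission-line model that these symbols describe is commonly evaluated as follows (the permittivity and permeability values in the example are illustrative, not measured ones):

```python
import numpy as np

def reflection_loss(eps_r, mu_r, f_hz, d_m, c=3.0e8):
    """Single-layer, metal-backed absorber model:
    Z_in/Z_0 = sqrt(mu_r/eps_r) * tanh(j * 2*pi*f*d/c * sqrt(mu_r*eps_r))
    RL(dB)   = 20 * log10(|(Z_in - Z_0) / (Z_in + Z_0)|)
    """
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(1j * 2 * np.pi * f_hz * d_m / c
                                           * np.sqrt(mu_r * eps_r))
    return 20 * np.log10(np.abs((z_in - 1) / (z_in + 1)))

# Example: one frequency point for a 4.5 mm coating (illustrative values)
rl = reflection_loss(eps_r=14.0 - 1.5j, mu_r=2.0 - 0.3j, f_hz=9.5e9, d_m=4.5e-3)
print(f"RL = {rl:.1f} dB")
```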
Results & discussion
The CoNPs-G nanocomposite is synthesized by a facile liquid-phase reduction reaction. The synthetic process is illustrated in Fig. 1. Through the modified Hummers' method, the as-prepared GO carries functional groups such as -OH and -COOH, and these functional groups give it a negative charge. When the CoCl2·6H2O solution is added to the GO solution, Co2+ ions are adsorbed onto the GO sheets via electrostatic interaction. After the addition of N2H4·H2O, the Co2+ ions on the surface of GO are reduced into Co nanoparticles first, and this also immediately drives the reduction of GO. Finally, the reduction of GO and the deposition of Co nanoparticles on the graphene sheets take place almost simultaneously. The phases of the as-prepared GO, G and CoNPs-G nanocomposite are investigated by XRD (Fig. 2). The diffraction peak of GO is observed at 2θ = 9.8° in Fig. 2(a), indicating that the distance between the atomic layers of graphite is expanded to 0.80 nm. The complete oxidation of graphite into GO makes it possible for Co to be assembled on the GO sheets. 47 As shown in Fig. 2(b), the broad peak of G is at 2θ ≈ 25°, suggesting that GO can be deoxygenated by hydrazine hydrate. 20 The diffraction peaks of the CoNPs-G nanocomposite are exhibited in Fig. 2(c); the peak can be assigned to the (101) plane of Co (JCPDS card no. 05-0727), and no other apparent diffraction peaks due to impurities can be identified, implying that high-purity CoNPs-G can be obtained by this method.
The chemical composition of the CoNPs-G nanocomposite is further investigated by Raman spectroscopy. Fig. 3 exhibits the Raman spectra of GO (black line) and the CoNPs-G nanocomposite (red line). The G band is assigned to the in-plane vibration of sp2 carbon atoms, and the D band is related to the vibration of sp3 carbon atoms. The typical G band (1593.06 cm−1) and D band (1340.01 cm−1) of GO are shown in the figure. Compared with GO, the G band and D band for CoNPs-G are at 1596.42 cm−1 and 1335.54 cm−1, both showing a small spectral shift. This indicates that there may be chemical interactions between the groups of G and the reactive sites of Co. As is known, the ratio of the intensities of the two bands (I_D : I_G) can be used to indicate the ordered or disordered crystal structure of carbon; I_D : I_G increases when GO is reduced. The I_D : I_G ratio is 1.07 for GO and 1.41 for CoNPs-G, a change which confirms that GO has been reduced to G in this experiment. Fig. 4(a-c) show representative SEM and TEM images of the CoNPs-G nanocomposite. From Fig. 4(a), it can be seen that Co nanoparticles are assembled on the surface of G. In Fig. 4(b and c), the results of the element mapping of CoNPs-G are presented. To eliminate possible influence from the conductive resin, the sample was dropped onto a Si substrate before testing. It can be deduced that the Co and C elements are uniformly distributed on the film, the presence of the C element being due to the graphene. Morphological information on the CoNPs-G nanocomposite is given in Fig. 4(d-f). In the corresponding low magnification TEM images shown in Fig. 4(d and e), the G sheets are decorated randomly by Co nanoparticles. Fig. 4(e and f) show high-resolution TEM (HRTEM) images of Co nanoparticles (marked by red circles) with diameters of approximately 5 to 10 nm. The inset of Fig. 4(f) exhibits a selected area electron diffraction (SAED) pattern, where the labelled diffraction rings can be indexed to the (101) plane of Co and the (002) plane of G, 45 respectively. In terms of the SEM, TEM and XRD analyses, the CoNPs-G nanocomposite can be fabricated by our present method. The surface composition of the composite is characterized by X-ray photoelectron spectroscopy (XPS). The survey spectrum (Fig. 5(a)) shows that the CoNPs-G nanocomposite consists of Co, O and C elements. The C 1s XPS spectrum (Fig. 5(b)) can be deconvoluted into three peaks located at 284.5 eV, 285.6 eV and 286.7 eV, which are attributed to carbon atoms in different functional groups, C-C, C-O and C=O respectively, 48,49 indicating that GO is reduced to G but that the surface retains a small number of unreduced carboxyl (-COOH) or hydroxyl (-OH) groups. In Fig. 5(d), the Co 2p peak is composed of two peaks centred at 780.2 eV and 781.4 eV. The peak at 780.2 eV should be attributed to Co 2p3/2, while the weak one at 781.4 eV corresponds to Co 2p1/2. Compared with Co 2p3/2 (778.10 eV) and Co 2p1/2 (793.3 eV), the peaks have shifted towards higher valence, suggesting that there is chemical bonding between Co and G rather than simple physical adsorption.
The magnetic hysteresis loop at room temperature for the CoNPs-G nanocomposite is displayed in Fig. 6. The measured saturation magnetization (M_s) and coercivity (H_C) are 31.3 emu g−1 and 408.9 Oe, respectively. The smaller M_s value of the CoNPs-G nanocomposite compared to bulk Co (168 emu g−1, 298.15 K) 54 is mainly ascribed to the presence of graphene. On the other hand, the CoNPs-G nanocomposite also shows a larger coercivity (408.9 Oe) than bulk cobalt (10 Oe, 298.15 K). This follows from the sphere-chain model relation between coercivity and particle size, in which m is the magnetic dipole moment, R is the magnetic particle radius, K and L are the interaction constants between magnetic particles, and n is the number of particles in the sphere-chain model. The relation shows that coercivity is inversely proportional to the cube of the particle radius; that is, the smaller the particle size, the greater the coercivity. Fig. 4(e and f) show Co nanoparticles (marked by red circles) with diameters of approximately 5 to 10 nm; hence the coercivity of Co-G is greater than that of bulk Co. The electromagnetic parameters are shown in Fig. 7. The EM wave absorption properties of the CoNPs-G nanocomposite are closely associated with its permittivity and permeability, where the real part of the permittivity (ε′) represents the storage capability of electric energy, the imaginary part (ε″) stands for the loss capability of electric energy, the real part of the permeability (µ′) represents the degree of polarization and the imaginary part (µ″) indicates the energy loss caused by the rearrangement of the magnetic dipole moments. In Fig. 7(a and b), the values of ε′ and ε″ decrease from 16.5 to 13.2 and from 2.25 to 1.05, respectively, as the frequency increases over the 2-18 GHz range. Both the real part (ε′) and the imaginary part (ε″) of the permittivity exhibit more than one resonance peak around 6-18 GHz, which may be attributed to interfacial polarization resonance due to the electronegativity difference between cobalt and graphene. As shown in Fig. 7(c and d), the real part of the permeability (µ′) declines from 2.11 to 1.99 as the frequency increases from 2 to 18 GHz; on the contrary, the imaginary part (µ″) displays wave-like rises from 0.28 to 0.36 and exhibits strong magnetic resonance peaks at 6.19, 12.16 and 16.49 GHz. The dielectric and magnetic dissipation factors, tan δ_ε = ε″/ε′ and tan δ_M = µ″/µ′ respectively, provide a measure of the power lost in the EM wave absorption material versus the amount of power stored. 55,56 From Fig. 7(e and f), it can be seen that the dielectric loss fluctuates slowly (from 0.14 to 0.07) with increasing frequency. Moreover, the magnetic loss displays wave-like rises from 0.12 to 0.18 in the 2-18 GHz frequency range, similar to the imaginary part (µ″). It is interesting to note that the positions of the dielectric loss troughs correspond to those of the magnetic loss peaks. The results show that the values of the complex relative permittivity (ε′ and ε″) and permeability (µ′ and µ″) are well matched, indicating a resonance behaviour, which should be conducive to the EM wave absorption performance. Besides, the dielectric loss values (tan δ_ε = ε″/ε′) are smaller than the magnetic loss values (tan δ_M = µ″/µ′) over the frequency range, indicating that the EM wave absorption of the CoNPs-G nanocomposite may mainly originate from its magnetic loss.
The type of magnetic loss in the material can be determined by analysing the variation of the function µ″(µ′)−2 f−1. If the magnetic loss stems only from eddy current loss, this quantity, which is proportional to µ0 d2 σ (where µ0 is the vacuum permeability, σ is the electrical conductivity of the composite and d is the thickness), should remain constant as the frequency changes. If not, the magnetic loss is a consequence of natural resonance and exchange resonance. In Fig. 8, the value of µ″(µ′)−2 f−1 varies with frequency, which means the magnetic loss of the CoNPs-G nanocomposite is attributed to the synergy of natural resonance and exchange resonance, which are beneficial for widening the bandwidth of microwave absorption. 10 The first resonance peak at 5.36 GHz is due to natural resonance, which occurs in magnetic particles when the frequency of the incident microwave coincides with the intrinsic frequency of the magnetization spinning oscillation. The two resonance peaks at 11.52 and 16.48 GHz are ascribed to exchange resonance, which can be discussed within the exchange resonance mode developed by Aharoni. [57][58][59] The exchange resonance frequency of magnetic materials is determined by their particle size, morphology and composition. The microwave absorption properties of the CoNPs-G nanocomposite are illustrated in Fig. 9. A 3D image map and a contour map of the RL for the CoNPs-G nanocomposite are shown in Fig. 9(a) and (b). It can be observed that the material displays superior microwave absorption performance in terms of both the RL value and the absorption width. There are three pronounced microwave absorption "green gorges" and "yellow islands" in the two figures, respectively, which cover almost half of the map area. Fig. 9(c) shows the frequency-dependent RL curves of the CoNPs-G nanocomposite with an addition amount of 55 wt% in the paraffin matrix at different thicknesses (d = 1.5-5.5 mm). Compared with Fig. 9(e), the frequency-dependent RL curves of the bare CoNPs, it can be found that most RL curves of the CoNPs-G nanocomposite have at least two peaks, except for coating thicknesses of 1.5 mm and 2 mm. The minimal RL value reaches −40.53 dB at 9.5 GHz for d = 4.5 mm, for which the effective absorption bandwidth (RL ≤ −10 dB) is 4.03 GHz, and the other two peaks of this RL curve appear at 3.04 GHz and 16.13 GHz with minimal RL values of −18.82 dB and −18.95 dB, respectively. Moreover, the broad effective bandwidth reaches a maximum of 10.22 GHz, covering the S, X and Ku bands at a single thickness; to the best of our knowledge, this has not been reported previously. By adjusting the thickness, the effective absorption bandwidth can cover almost all of the S, C, X and Ku bands. It can be seen from Fig. 9(d) that the coating thickness of the CoNPs-G nanocomposite has a great influence on its minimal RL: the value of RL first decreases with increasing coating thickness and then increases when the coating thickness exceeds 4.5 mm. Furthermore, compared with other recently reported materials, the CoNPs-G nanocomposite has a wider operational absorption bandwidth, and its effective absorption range covers the whole frequency range from 2 to 18 GHz. Notably, the introduction of graphene into Co nanoparticles improves the EM wave absorption performance, including high absorption efficiency, wide operational absorption bandwidth and multiband absorption, as shown in Table 1.
Conclusions
In summary, the CoNPs-G nanocomposite was synthesized via a facile liquid reduction method. Multiple dielectric and magnetic resonance peaks were exhibited in the microwave range of 2-18 GHz. The natural resonance and exchange resonance of the material were beneficial for widening the bandwidth of microwave absorption. In particular, three effective RL peaks across low, middle and high frequencies (2.4-3.84, 7.84-11.87 and 13.25-18 GHz) were observed in the curve for a coating thickness of 4.5 mm; the minimal RL reached −40.53 dB at 9.50 GHz, and the widest effective absorption bandwidth was 10.22 GHz. The effective absorption bandwidth could cover almost all of the S, C, X and Ku bands with the thickness in the range of 1.5-5.5 mm, which indicates that the CoNPs-G nanocomposite has potential application for EM wave absorption.
Conflicts of interest
There are no conflicts to declare.
Fertilization Recovery after Defective Sperm Cell Release in Arabidopsis
In animal fertilization, multiple sperms typically arrive at an egg cell to "win the race" for fertilization. However, in flowering plants, only one of many pollen tubes, conveying plant sperm cells, usually arrives at each ovule that harbors an egg cell. Plant fertilization has thus been thought to depend on the fertility of a single pollen tube. Here we report a fertilization recovery phenomenon in flowering plants that actively rescues the failure of fertilization of the first mutant pollen tube by attracting a second, functional pollen tube. Wild-type (WT) ovules of Arabidopsis thaliana frequently (∼80%) accepted two pollen tubes when entered by mutant pollen defective in gamete fertility. In typical flowering plants, two synergid cells on the side of the egg cell attract pollen tubes, one of which degenerates upon pollen tube discharge. By semi-in vitro live-cell imaging we observed that fertilization was rescued when the second synergid cell accepted a WT pollen tube. Our results suggest that flowering plants precisely control the number of pollen tubes that arrive at each ovule and employ a fertilization recovery mechanism to maximize the likelihood of successful seed set.
In animal fertilization, multiple sperms typically arrive at an egg cell to "win the race" for fertilization. However, in flowering plants, only one of many pollen tubes, conveying plant sperm cells, usually arrives at each ovule that harbors an egg cell [1,2]. Plant fertilization has thus been thought to depend on the fertility of a single pollen tube [1]. Here we report a fertilization recovery phenomenon in flowering plants that actively rescues the failure of fertilization of the first mutant pollen tube by attracting a second, functional pollen tube. Wild-type (WT) ovules of Arabidopsis thaliana frequently (∼80%) accepted two pollen tubes when entered by mutant pollen defective in gamete fertility. In typical flowering plants, two synergid cells on the side of the egg cell attract pollen tubes [3][4][5], one of which degenerates upon pollen tube discharge [3,6]. By semi-in vitro live-cell imaging [7,8] we observed that fertilization was rescued when the second synergid cell accepted a WT pollen tube. Our results suggest that flowering plants precisely control the number of pollen tubes that arrive at each ovule and employ a fertilization recovery mechanism to maximize the likelihood of successful seed set.
Results and Discussion
Identification of the g21 Mutant Showing a Higher Rate of Successful Fertilization than Expected

The unique mode of sexual reproduction in angiosperms involves the production of two sperm cells and their delivery by a pollen tube to the female gametophyte (FG; egg producing tissue) within the ovule, where double fertilization takes place [1]. Due to the limited number of studies of male-female interactions in vivo and their molecular mechanisms, we performed genetic screening in Arabidopsis thaliana plants for reduced fertility mutants. The screening was carried out in plants harboring the synergid-specific MYB98::GFP [9] marker by observing the GFP signal from the synergid cells. As a primary screen, siliques of T-DNA mutagenized plants were opened and visually screened for reduced seed set. The secondary screen involved observation of the GFP signal from synergid cells when the siliques of T1 mutants showing reduced seed set were opened. This effectively allowed us to exclude mutants with genome rearrangements or early female gametophytic lethal mutants showing up to 50% of ovules that fail to show GFP signals from their synergid cells. The g21 mutant was isolated as a candidate gametophytic fertility mutant because it showed reduced seed set and all ovules had a GFP signal. Heterozygous g21 plants had a single sperm-like cell in 49.0% (n = 298) of the pollen population (Figures 1A and 1B). In theory, fully penetrant male gamete defective mutations are expected to show 50% fertility, but the fertility of such mutants has been a controversial issue [10,11]. Curiously, +/g21 plants showed 64.6% ± 6.8% (mean ± SD; n = 22 pistils) fertility, a higher rate of successful fertilization than expected (50%) (Figures 1C and 1D).
We considered two possibilities for this enhanced-fertility phenotype in the g21 mutant. One was that a proportion (∼30%) of g21 pollen tubes would not fail and so be able to fertilize and develop into seed. The other was that, due to guidance defects in g21 pollen tubes, wild-type (WT) pollen tubes would preferentially increase the percentage seed set. Based on reciprocal testcross analysis (see Table S1 available online), the g21 allele showed no male transmission and most pollen tubes behaved normally when stained with aniline blue [10,12] compared with the WT. We concluded that the enhanced fertility phenotype neither resulted from successful fertilization by a proportion of g21 tubes nor from pollen tube guidance defects in g21 pollen tubes.
Attraction of Two Pollen Tubes Results in the Enhanced-Fertility Phenotype
Because techniques for the dissection of fragile plant tissues were required for critical whole-pistil observations, we improved methods for dissection and for microscopic observation. Remarkably, we observed that an ovule attracts two pollen tubes with high frequency in +/g21 pistils (Figures 1E and 1F), a phenomenon rarely observed in WT plants. We hypothesized that ovules receiving two pollen tubes would increase fertility in +/g21 mutants. To investigate this hypothesis, we counted the number of ovules that had one or two pollen tubes and scored their fertility when plants of +/g21 were crossed as the male parent to WT plants. We observed that 50.0% ± 4.9% (mean ± SD, n = 12 pistils) of developing seeds (Figure 1E) were fertilized by single pollen tube insertions, which would result from fertilization by pollen carrying the WT allele. Conversely, 18.2% ± 3.8% of developing seeds (Figure 1F) were fertilized ovules that received two pollen tubes, which likely accounts for the seed increase. Interestingly, similar to the ratio of large seeds with two pollen tubes, 16.6% ± 4.0% were undeveloped seeds penetrated by two pollen tubes (Figures 1G and 1H). Because the +/g21 mutant showed an obvious male-specific phenotype with a single sperm-like cell in pollen and we mapped the responsible gene to chromosome 1, this indicated that the g21 mutation may be allelic to duo3-1, a loss-of-function mutation in the DUO POLLEN3 (DUO3) gene [13]. DNA sequence analysis and a complementation test confirmed g21 as a null allele of DUO3 that was designated duo3-2 (Figure S1). These data suggest that when the first pollen tube carrying duo3-2 fails to fertilize, a second WT pollen tube can surprisingly compensate for fertilization. Moreover, our data indicated that undeveloped seeds penetrated by two pollen tubes arise when the first and second pollen tubes, each carrying duo3-2, fail to fertilize.
Because a previous report mentioned that the male gamete defective mutant hapless 2-1 (hap2-1) also had enhanced fertility [11], we carried out a similar analysis for hap2-1. We also investigated duo pollen 1 (duo1-1) [14], duo3-1 [13], and generative cell specific 1 (gcs1) [10] (allelic to hap2-1) mutants because they also have defective male gametes leading to failure of fertilization. All mutants showed an increased fertility phenotype similar to duo3-2 (Figure 1I). Furthermore, the percentage of developing seeds penetrated by two pollen tubes (Figure 1J; dark blue) corresponded to that of undeveloped seeds penetrated by two pollen tubes (Figure 1J; dark orange), as observed in duo3-2, indicating that the two types of second pollen tube carrying either the WT or the mutant allele proportionally enter ovules, such that only half of the ovules with two pollen tubes develop into seeds. These data suggested that the enhanced-fertility phenomenon that is common to all three male gamete defective mutants may be explained by the same mechanism: failure of fertilization by a first mutant pollen tube that is rescued by a second WT pollen tube. In angiosperms, a pollen tube delivers nonmotile sperm cells accurately to the FG and completes fertilization. This is called "siphonogamy," a mechanism that is thought to have evolved from zooidogamy (fertilization by motile sperm) [15]. Here we define the term "polysiphonogamy" for cases in which an ovule accepts multiple pollen tubes.
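The arithmetic behind the enhanced-fertility phenotype can be made explicit with a simple back-of-envelope model. The sketch below is not a calculation from the paper; it only assumes 1:1 transmission of WT and mutant pollen, complete failure of mutant gametes, and a free parameter r for the probability that an ovule entered first by a mutant tube attracts a second tube, which again carries either allele with equal probability.

```python
# Back-of-envelope expectation for seed set in a cross using +/mutant pollen.
def expected_seed_set(r):
    first_wt = 0.5            # first tube carries the WT allele -> seed
    rescued = 0.5 * r * 0.5   # first tube mutant, second tube attracted (prob. r) and WT
    return first_wt + rescued

# With r around 0.7 this gives ~0.675, close to the observed 64.6% fertility,
# and predicts roughly equal fractions of developed (second tube WT) and
# undeveloped (second tube mutant) ovules with two pollen tubes.
for r in (0.0, 0.5, 0.7, 1.0):
    print(r, expected_seed_set(r))
```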
Visualization of the Fertilization Recovery Phenomenon by a Semi-In Vitro Assay
To confirm that the fertilization recovery phenomenon is accomplished by polysiphonogamy and to visualize the moment of the recovery event, we crossed pollen from +/duo3-2 plants to WT stigmas and observed the destiny of the sperm cells using a semi-in vitro fertilization assay [7,8,16]. In WT, the two sperm cells successfully fertilized the egg cell and the central cell (Figures 2A-2F; Movie S1). However, in +/duo3-2, the mutated sperm cell was successfully released into the FG but the cell remained arrested near the degenerated synergid cell without fertilization (Figures 2G-2L; Movie S1). Next, we observed that the first pollen tube failed to fertilize, but remarkably, two sperm cells of the second pollen tube were discharged into the FG and their nuclei migrated to the nuclei of the egg cell and the central cell, respectively (Figures 2M-2R; Movie S2). We also confirmed these phenomena in the +/gcs1 mutant [10] (Figure S2; Movie S3). In total, we observed six examples in +/duo3-2 and +/gcs1 mutants, all of which showed that the first mutant pollen tube failed but the WT second pollen tube succeeded in double fertilization. These data indicate that the second pollen tube and the remaining synergid cell allow fertilization even though the first pollen tube fails to fertilize, thereby rescuing the defect in seed set.
Attraction and Insertion of the Second Pollen Tube by the Second Synergid Cell
We made another significant observation; that is, we never observed a third pollen tube entering the micropyle. As shown in Figures 2M-2R and Movie S2, every time the pollen tube discharges its contents into the FG, the synergid cell degenerates. Loss of both synergid cells would prevent pollen tube attraction [4,9], possibly due to the loss of attractants [5,17], explaining why only two, but not three or more, pollen tubes are attracted in male gamete defective mutants. Although female gametophytic mutants defective in pollen tube guidance (myb98) [9] and pollen tube reception (fer/sir and lorelei) [18][19][20] sometimes attract more than three pollen tubes, the number of pollen tubes appears to be strictly controlled in WT ovules. We conclude that the recovery of fertilization is limited to the second pollen tube, indicating that there is no third chance for fertilization in two-synergid celled plants. In theory, multiple-synergid celled plants, as reported in the ig1 mutant of maize [21], may therefore have a greater capacity to attract multiple pollen tubes. In fact, the three-synergid celled female gametophytes present in Amborella trichopoda [22] sometimes attract three pollen tubes [23], suggesting that some plants might have a third chance to compensate for failed fertilization.
Fertilization Recovery by the Second Pollen Tube Is Restricted Only When the First Tube Fails
To confirm that the recovery event takes place only when the first tube fails during fertilization, we double-stained +/hap2-1 pollen tubes for β-glucuronidase (GUS) activity followed by aniline blue staining to trace the behavior of the first and the second pollen tubes (Figures 3A-3D). Because hap2-1 mutant pollen tubes are marked by the pollen tube-specific reporter gene LAT52:GUS [11,24], we could trace the destination of hap2-1 mutant pollen tubes in vivo. Moreover, the hap2-1 mutant was generated by an insertion of T-DNA harboring the LAT52:GUS reporter gene, so GUS-positive signals originate from hap2-1 alleles, whereas WT alleles are GUS-negative. We counted the number of GUS-positive ovules, providing evidence of a burst hap2-1 pollen tube and gamete release into the FG, by crossing +/hap2-1 as a male parent to WT flowers. As shown in Figure 3E, 10 hr after pollination we observed that 49.4% ± 4.8% (n = 5 pistils) of the ovules accepted a hap2-1 allele, suggesting that hap2-1 and WT pollen tubes were similarly competent to enter each FG and release their contents successfully. It has been suggested by von Besser et al. [11] that hap2-1 sperm cells affect pollen tube guidance [11]. However, judging from the duo3-2 (g21) data (Figure S1) and our hap2-1 data, we conclude that the sperm cells appear to be passive cargo of the pollen tube and do not influence pollen tube guidance.

Figure 2 (legend fragment). In the case of single insertion of a duo3-2 pollen tube, a single SLC is released from a pollen tube but is arrested in the middle of the FG without fertilization (G-K; Movie S1). (L-R) In the case of pollen tubes from +/duo3-2 heterozygotes, the first pollen tube from a duo3-2 allele releases a SLC and the cell fails to fertilize (M-Q). However, the second pollen tube from a WT allele is inserted into the same ovule and releases two sperm cells to complete fertilization (Movie S2). At first pollen tube discharge, one of the two synergid cells collapses, and upon second pollen tube discharge, the other synergid cell collapses.

Figure 3 (legend fragment). (E) Percentage of GUS+ ovules with one (blue) or two (orange) pollen tubes. At 8 or 10 HAP, only one pollen tube was inserted into almost all GUS+ ovules, but at 28 HAP or later the ratio of two pollen tubes reached a maximum, suggesting that the second pollen tube was positively inserted to rescue fertilization. (F) Percentage of GUS− ovules. One pollen tube was inserted into most ovules, and a second pollen tube was scarcely observed. FG, female gametophyte. Error bars indicate SD from the means of at least four independent experiments. (G) Schematic drawing of the fertilization recovery system. Once a single pollen tube is inserted into an ovule, the pollen tube bursts and releases two sperm cells. When the sperm cells complete fertilization, the ovule blocks the entry of other pollen tubes and develops into a seed by forming embryo and endosperm. When the sperm cells fail to fertilize, the ovule attracts a second pollen tube to rescue fertilization. The rescued ovule develops into a seed, resulting in increased fertility. In the case of failure of fertilization by the second pollen tube, the ovule does not attract a third pollen tube, possibly due to depletion of pollen tube attractant from the synergid cells because both synergid cells are collapsed upon double entry of pollen tubes. Scale bar in (A) represents 200 μm; scale bars in (B) and (C) represent 40 μm.
To explore the temporal sequence of events involved in the rescue of fertilization after failure of the first mutant pollen tube in vivo, we performed a time course experiment by double-staining of pollen tubes. Ovules began to accept pollen tubes from 5 hr after pollination (5 HAP), and at 10 HAP all ovules accepted at least one pollen tube, as reported previously [6]. At 10 HAP, 6.3% ± 2.7% (mean ± SD, n = 5 pistils) of GUS-positive ovules had attracted two pollen tubes (Figure 3E). However, by 20 HAP, 38.5% ± 8.7% (n = 7) of GUS-positive ovules had attracted two tubes (note that the frequency of GUS-positive ovules with a single hap2-1 pollen tube decreased accordingly). In contrast, at 10 HAP, no ovules (n = 5 pistils) accepting a WT allele had attracted two pollen tubes (Figure 3F), and by 20 HAP, only 2.7% ± 2.7% (n = 7) of ovules had attracted two WT tubes, indicating that second pollen tubes were attracted at a much higher frequency by ovules that had first accepted a hap2-1 pollen tube. We conclude that ovules have an unknown system that senses the completion of fertilization and might prevent other pollen tubes from entering the micropyle. However, if the first pollen tube fails to fertilize, ovules do not sense the completion of fertilization, allowing the attraction of a second pollen tube and its entry into the micropyle (Figure 3G).
It has long been suggested that only one pollen tube principally enters and releases sperm cells into an ovule [1,7]. Although classical studies [1], in at least 13 species, have reported rare cases of two pollen tubes in an embryo sac, and Mori et al. showed that an ovule rarely had two sets of sperm cell pairs after pollination with +/gcs1 pollen [10], these have been regarded as anomalous events [1]. We showed that most ovules have one pollen tube at 10 HAP, indicating that until several hr after the arrival of the first pollen tube, the one tube-one ovule system might be maintained by a blocking system to avoid polysiphonogamy [2]. Then, the second pollen tube starts to be attracted again by ovules that failed at fertilization in hap2-1. In this case, the persistent synergid cell, which may begin to degenerate in fertilized ovules [1], continued to attract pollen tubes, resulting in 76.7% ± 2.7% (n = 5) of failed ovules accepting the second pollen tube at 28 HAP. No particular role has been proposed for the persistent synergid cell after the arrival of the first pollen tube [1,3,7]. However, we demonstrate that the second synergid cell retains its function and is able to attract and accept a second tube to rescue fertilization. This could be one of the reasons why two synergid cells are present in many higher plants. Further, we observed that it takes 28 hr for all second tubes to complete their entrance into the ovules. This result indicates that some pollen tubes are maintained inside the pistil for an extended time, providing an opportunity to compensate for the failure of other pollen tubes.
Conclusions
Overall, we have unveiled a fertilization recovery system that allows plants to restore ovule fertility after initial failure of fertilization within the ovule. Further investigation of this phenomenon could advance the understanding of how natural populations of sympatric species are maintained. For example, heterospecific pollen tubes interfere with the fertilization of ovules by conspecific pollen by various mechanisms [25]. When heterospecific pollen tubes are able to enter the ovules but fail at steps leading to gamete fusion, fertilization recovery by conspecific pollen tubes could diminish the detrimental effect of pollen competition, contributing to the maintenance of reproductively isolated populations of sympatric species.
Supplemental Information
Supplemental Information includes two figures, one table, Supplemental Experimental Procedures, and three movies and can be found with this article online at doi:10.1016/j.cub.2012.03.069.
|
2016-10-06T22:58:57.636Z
|
2012-06-19T00:00:00.000
|
{
"year": 2012,
"sha1": "98ece0032d874707931712faecd8cefe8c78f7c9",
"oa_license": "CCBY",
"oa_url": "http://www.cell.com/article/S0960982212004022/pdf",
"oa_status": "HYBRID",
"pdf_src": "Elsevier",
"pdf_hash": "98ece0032d874707931712faecd8cefe8c78f7c9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
271429320
|
pes2o/s2orc
|
v3-fos-license
|
Comparative Analysis of Hulless Barley Transcriptomes to Regulatory Effects of Phosphorous Deficiency
Hulless barley is a cold-resistant crop widely planted in the northwest plateau of China. It is also the main food crop in this region. Phosphorus (P), as one of the important essential nutrient elements, regulates plant growth and defense. This study aimed to analyze the development and related molecular mechanisms of hulless barley under P deficiency and explore the regulatory genes so as to provide a basis for subsequent molecular breeding research. Transcriptome analysis was performed on the root and leaf samples of hulless barley cultured with different concentrations of KH2PO4 (1 mM and 10 μM) Hoagland solution. A total of 46,439 genes were finally obtained by the combined analysis of leaf and root samples. Among them, 325 and 453 genes had more than twofold differences in expression. These differentially expressed genes (DEGs) mainly participated in the abiotic stress biosynthetic process through Gene Ontology prediction. Moreover, the Kyoto Encyclopedia of Genes and Genomes showed that DEGs were mainly involved in photosynthesis, plant hormone signal transduction, glycolysis, phenylpropanoid biosynthesis, and synthesis of metabolites. These pathways also appeared in other abiotic stresses. Plants initiated multiple hormone synergistic regulatory mechanisms to maintain growth under P-deficient conditions. Transcription factors (TFs) also proved these predictions. The enrichment of ARR-B TFs, which positively regulated the phosphorelay-mediated cytokinin signal transduction, and some other TFs (AP2, GRAS, and ARF) was related to plant hormone regulation. Some DEGs showed different values in their FPKM (fragment per kilobase of transcript per million mapped reads), but the expression trends of genes responding to stress and phosphorylation remained highly consistent. Therefore, in the case of P deficiency, the first response of plants was the expression of stress-related genes. The effects of this stress on plant metabolites need to be further studied to improve the relevant regulatory mechanisms so as to further understand the importance of P in the development and stress resistance of hulless barley.
Introduction
Hulless barley is widely distributed in the Tibetan cluster area in northwest and southwest China, which all belong to high-elevation regions. As the main staple food crop in Tibet, hulless barley is still the dominant food among Tibetans. Natural phytochemicals can promote people's health, especially because they have antidiabetic activity, which is due to their high contents of β-glucan, phenolics, and flavonoids [1]. Hulless barley has the characteristics of cold tolerance, barren tolerance, low-temperature tolerance, and strong drought resistance. Moreover, hulless barley is the only grain crop that can mature normally at an altitude of 4200 m [2]. The "Du Lihuang" highland barley used as the research subject is extensively cultivated in Qinghai, Tibet, and other major highland barley planting regions in China. It is a precocious variety and reaches a height of 75-95 cm. This variety demonstrates strong tillering ability and produces abundant grains on its panicles. Moreover, it possesses remarkable resistance to waterlogging, cold weather conditions, light hail damage, and drought stress and exhibits resistance against diseases and pests. Furthermore, this variety displays adaptability to various environments, resulting in high yields per unit area.
As one of the macronutrients, phosphorous (P) is related to plant growth and development [3].It is an important essential nutrient and the structural and functional component of nucleic acids, nucleotides, phospholipids, and high-energy molecules (ADP and ATP), and it is an activated intermediate in the photosynthetic carbon cycle.In addition, inorganic phosphate (Pi) also plays an important role in metabolism, protein regulation, and signal transduction cascades [4].However, the Pi concentration is extremely low in soil and easily forms insoluble complexes, which is because of its prospect of binding strongly to soil surfaces [5].However, plants can only use less than 30% of the Pi fertilizer, and the rest is lost in the environment, leading to soil degradation and the eutrophication of water bodies [6].The lack of phosphorus (P) can inhibit cell development, resulting in reduced seed or fruit growth and even lower yields.Approximately five billion hectares of farmland are short of available P, which requires an annual 2% increase to maintain the current food production [7].
The phenomenon of the P shortage and high cost can be resolved by enhancing the utilization of P in crops through genetic improvement technology, which is the key to viable, sustainable yield production [8].However, plants have evolved complex responses and adaptive mechanisms to preserve the development and homeostasis in the absence of soil P [9].Nevertheless, the relevant molecular mechanisms and regulatory elements have only been verified and analyzed in some crops [10].Phosphate transporters (PHTs) are a group of structurally related proteins that mediate the transmembrane transport of organic anions under low-P stress [11].Moreover, PHT1 mainly participates in root-mediated P uptake from the soil in Arabidopsis under low P conditions.Other PHT1 homologous transporters also play crucial roles in different parts of plants [12].After being absorbed by the root cells, phosphate1 (PHO1) can translocate P from the root to the shoot and load to the xylem [13].Moreover, some members in the WRKY transcription factors (TFs) can promote PHT1 expression [14] and coordinately inhibit PHO1 expression [15].Although relevant genes have been reported in model plants, more detailed and intensive research is still under way, especially in other crops.
In the present study, hulless barley cultivar "Du Lihuang" (ZDM01467) was used to investigate the change in response to low-P stress.The study also aimed to disclose the mechanisms of low-P tolerance and identify the relevant candidate genes through transcriptome analysis.
Plant Growth
The hulless barley seeds "Du Lihuang", an early maturing diploid variety cultivated in Qinghai, were disinfected with sodium hypochlorite for 8 min, washed with clean water, and placed in a culture dish for germination at 25 °C under continuous light. After germination, the seedlings with similar growth potential were selected and fixed on a floating plate. Six plants of each kind of hulless barley were placed in two plastic boxes (600 × 500 × 160 mm³). The modified Hoagland medium (1 mmol/L KH2PO4 as the only source of element P) was used for culture; 20 L of Hoagland solution was added to each box under a light/dark cycle of 16 h/8 h at 25 °C/18 °C. When the plants grew to the 3-leaf stage with endosperm nutrient depletion, the medium in one box was changed to low-P-modified Hoagland medium (10 µmol/L KH2PO4). The physiological indexes of hulless barley, including shoot height, root length, fresh weight of shoots and roots, dry weight of shoots and roots, total P content, soluble sugar content, protein content, free proline content, MDA content, and SOD, POD, CAT, and ACP activities, were measured when the plants grew to the five-leaf stage.
RNA Extraction, Library Construction, and Sequencing
The total RNA was extracted using a TRIzol reagent kit (Invitrogen, Carlsbad, CA, USA).The RNA quality was assessed on an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) after extracting the total RNA.Then, the enriched mRNA was fragmented into short fragments and reverse-transcribed into cDNA using the NEBNext Ultra RNA Library Prep Kit for Illumina (New England Biolabs, Ipswich, MA, USA).The purified double-stranded cDNA fragments were end repaired, a base was added, and ligated to Illumina sequencing adapters.The ligation reaction was purified with the AMPure XP Beads (1.0×).The ligated fragments were subjected to size selection by agarose gel electrophoresis and PCR amplified.The resulting cDNA library was sequenced using Illumina Novaseq6000 (Gene Denovo Biotechnology Co., Guangzhou, China).
Alignment with a Reference Genome
An index of the reference genome was built, and paired-end clean reads were mapped to the reference genome using HISAT 0.1.6 [16], with other parameters set as default. The mapped reads of each sample were assembled using StringTie [17] in a reference-based approach. For each transcription region, an FPKM value was calculated to quantify its expression abundance and variation, using the RSEM v1.3.3 software [18].
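As a reference for how the expression values reported below are scaled, the FPKM definition used for quantification can be written out as follows. This is a minimal sketch of the formula itself, not the RSEM implementation (which additionally resolves multi-mapping fragments with an expectation-maximization model); the example numbers are hypothetical.

```python
def fpkm(fragments, gene_length_bp, total_mapped_fragments):
    """Fragments Per Kilobase of transcript per Million mapped fragments.

    fragments: fragment count assigned to the gene/transcript
    gene_length_bp: (effective) transcript length in base pairs
    total_mapped_fragments: library size, i.e., all mapped fragments
    """
    return fragments * 1.0e9 / (gene_length_bp * total_mapped_fragments)

# Example: 500 fragments on a 2 kb transcript in a 20-million-fragment library
# -> 500 * 1e9 / (2000 * 2e7) = 12.5 FPKM
print(fpkm(500, 2000, 2.0e7))
```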
Differentially Expressed Genes
The RNA differential expression analysis was performed using the DESeq2 1.25.9[19] software between two different groups and using the edgeR 3.32.1 software [20] between two samples.The genes/transcripts with a false discovery rate below 0.05 and absolute fold change ≥2 were considered differentially expressed genes/transcripts.GO [21] is an international standardized gene functional classification system that offers a dynamicupdated controlled vocabulary and a strictly defined concept to comprehensively describe the properties of genes and their products in any organism.GO has three ontologies: MF, CC, and BP.Each GO belongs to a type of ontology.The GO enrichment analysis showed that all GO terms were significantly enriched in DEGs compared with the genome background, and the DEGs that corresponded to biological functions were filtered.All DEGs were mapped to GO terms in the GO database (http://www.geneontology.org/);gene numbers were calculated for every term, and significantly enriched GO terms in DEGs compared with the genome background were defined using the hypergeometric test.Genes usually interact with each other to play roles in certain biological functions.The pathway-based analysis helped further understand the biological functions of genes.KEGG [22] is the major public pathway-related database.Pathway enrichment analysis identified significantly enriched metabolic pathways or signal transduction pathways in DEGs compared with the whole-genome background.
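The DEG call and the hypergeometric GO enrichment test described above can be sketched in a few lines. The snippet below is illustrative rather than the pipeline actually used (DESeq2/edgeR in R plus the provider's GO tooling): the column names padj and log2FoldChange follow DESeq2's output convention, `results` is assumed to be a pandas DataFrame of per-gene statistics, and the per-term counts in the example are hypothetical, with only the genome size (46,439 genes) and root DEG count (453) taken from this study.

```python
import numpy as np
from scipy.stats import hypergeom

def call_degs(results, fdr_col="padj", lfc_col="log2FoldChange",
              fdr_cut=0.05, min_fold=2.0):
    """Flag genes with FDR < fdr_cut and absolute fold change >= min_fold."""
    keep = (results[fdr_col] < fdr_cut) & (results[lfc_col].abs() >= np.log2(min_fold))
    return results[keep]

def go_term_pvalue(genome_size, term_in_genome, n_degs, term_in_degs):
    """Hypergeometric enrichment P(X >= term_in_degs) of one GO term among the DEGs,
    with the whole annotated genome as the background."""
    return hypergeom.sf(term_in_degs - 1, genome_size, term_in_genome, n_degs)

# Hypothetical term: 1,200 genes annotated genome-wide, 40 of the 453 root DEGs hit it.
print(go_term_pvalue(46439, 1200, 453, 40))
```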
Gene Set Enrichment Analysis
We performed GSEA using the software GSEA v1.0 and MSigDB v7.5.1 [23] to identify whether a set of genes in specific GO terms/KEGG pathways showed significant differences between the two groups. Briefly, we input the gene expression matrix and ranked genes using the signal-to-noise normalization method. The enrichment scores and p values were calculated with default parameters.
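The signal-to-noise ranking metric mentioned above is simple to state; a simplified sketch is given below. Real GSEA additionally floors the standard deviations at a fraction of the mean and then walks the ranked gene list to compute per-gene-set enrichment scores and permutation p values, which is omitted here.

```python
import numpy as np

def signal_to_noise(expr_group_a, expr_group_b):
    """Per-gene GSEA signal-to-noise ranking metric:
    (mean_A - mean_B) / (std_A + std_B), computed across replicates."""
    a = np.asarray(expr_group_a, dtype=float)
    b = np.asarray(expr_group_b, dtype=float)
    return (a.mean() - b.mean()) / (a.std(ddof=1) + b.std(ddof=1))

# Rank all genes by this metric (e.g., normal-P vs low-P replicates per tissue),
# then score each GO term / KEGG pathway gene set against the ranked list.
print(signal_to_noise([10.2, 11.0, 10.6], [7.9, 8.4, 8.1]))
```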
Validation and Analysis of DEGs Using qRT-PCR
The qRT-PCR analysis was performed on the Bio-Rad CFX96 Real-Time PCR System (Bio-Rad, Hercules, CA, USA). The primers were designed using Primer Premier 6. The gene-specific primer sequences for qRT-PCR are listed in Table S14. We used a 96-well Polypropylene Flat Top PCR Microplate, Low Profile, No Skirt, Clear, Nonsterile (PCR-96-LP-FLT-C, Axygen, Union City, CA, USA) for the qPCR reaction. The qPCR reaction system (20 µL) was as follows: forward primer 1 µL, reverse primer 1 µL, cDNA (500 ng/µL) 2 µL, THUNDERBIRD SYBR qPCR Mix 10 µL, and ddH2O 6.0 µL, for a total of 20 µL. The amplification procedure was predenaturation at 95 °C for 30 s, followed by 40 cycles of denaturation at 95 °C for 5 s, annealing at 55 °C for 20 s, and extension at 72 °C for 30 s. We performed three technical repeats and three independent biological replicates and used the most stable gene, 18S ribosomal RNA, as the reference gene in the qRT-PCR analyses [24]. Quantitative analysis and statistics were performed using the 2^−ΔΔCt method [25].
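The 2^−ΔΔCt calculation [25] can be written as a one-line function. The sketch below assumes 18S rRNA as the within-sample reference and the normal-P sample as the calibrator, as described above; the Ct values in the example are hypothetical.

```python
def fold_change_ddct(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    dCt = target-gene Ct minus reference-gene (18S rRNA) Ct within one sample;
    ddCt = dCt(low-P sample) minus dCt(normal-P calibrator).
    """
    d_ct_treated = ct_gene_treated - ct_ref_treated
    d_ct_control = ct_gene_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: the gene's Ct drops by 2 cycles relative to 18S under
# low P, i.e., ~4-fold up-regulation.
print(fold_change_ddct(24.0, 10.0, 26.0, 10.0))   # -> 4.0
```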
Analysis of Hulless Barley Morphology under Low-P Treatment
The phenotypic differences of the hulless barley cultivar "Du Lihuang", which included shoot height, root length, fresh weight of shoots and roots, dry weight of shoots and roots, and root-shoot ratio, were measured under the low-P treatment (Figure 1A and Table S1). The results showed that the hulless barley cultivar "Du Lihuang" was significantly sensitive to the P level. The lower-P treatment limited growth and biomass accumulation in the plants (Figure 1A), and the values of the root-shoot ratio calculated under the different treatments were opposite to those for the growth potential.

In addition, the endogenous contents (Figure 1B and Table S1) and the enzymatic activities assayed in leaves and roots (Figure 1C and Table S1) showed that the total P, total protein, and free proline contents were reduced and the soluble sugar and malondialdehyde (MDA) contents increased under the low-P treatment. The superoxide dismutase (SOD), peroxidase (POD), catalase (CAT), and acid phosphatase (ACP) activities were all enhanced under low P. Interestingly, all values were higher in the leaves than in the roots, except for the total P content, which differed between the different P concentrations (Figure 1B,C).
Database Quality and Mapping Gene Analysis by RNA-seq
We used transcriptome sequencing technology to detect and analyze the samples to comprehensively understand the molecular regulatory mechanism of "Du Lihuang" under the low-P treatment. The RNA-seq databases (all uploaded to the NCBI database, as shown in File S1), containing roots and leaves under the normal- and low-P treatments, were used to acquire clean reads, which accounted for more than 99.4% of reads in each sample (Figure 2A and Table S2). Each set of clean data contained more than 6 billion bases, even reaching 7.4 billion. Moreover, fewer than 6.3% of the bases fell below the Q30 quality threshold (Q30 column in Table S2). The GC content was approximately 50% and the average base sequencing quality was nearly 40, implying that the composition and distribution of bases were of high quality, providing a good data source for the subsequent analysis.
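For readers unfamiliar with the Q30 metric quoted above: a Phred quality score Q corresponds to a base-call error probability of 10^(−Q/10), so Q30 means a 1-in-1000 chance that the base call is wrong. A tiny, purely illustrative sketch of the conversion:

```python
def phred_error_probability(q):
    """Phred quality score -> probability that the base call is wrong."""
    return 10.0 ** (-q / 10.0)

print(phred_error_probability(30))   # 0.001 -> 99.9% accuracy at Q30
print(phred_error_probability(40))   # 0.0001 -> 99.99% accuracy at Q40
```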
The coverage of the genome alignment of the leaf samples reached more than 93.7%, and the root samples covered more than 72.5% (Table S3). The analysis of the total mapped reads in each sample indicated that more than 80.8% of the reads were blasted in exons, and about 9% were located in introns and intergenic regions (Figure 2B and Table S3). The analysis of the blasted genes showed that 3588 genes were defined as novel genes and referred to about 107 plant signal regulation and synthesis pathways (Table S4). These databases provided a basis for the following analyses.
The original read count data were corrected to obtain more accurate fragment per kilobase of transcript per million mapped reads (FPKM) expression data to further improve the accuracy of gene expression (Table S5). Combined with these data, the analysis of the relationship between samples showed that the data of the leaf samples (WL and ZL) and root samples (WG and ZG) were significantly different; also, obvious differences were found between the leaf samples under the low-P (WL) and normal-P (ZL) conditions. The differences among the root samples were very small, and they basically clustered together (Figure 2C). The cluster analysis of the replicates within each treatment showed that, except for the small differences among the leaf samples under the low-P treatment, basically no differences were found among the other samples, indicating good repeatability (Figure 2D).
Analysis of DEGs and TFs
The analysis of gene expression in samples treated with different P concentrations showed that 325 and 453 genes were differentially expressed by more than twofold in the leaf and root samples, respectively. Moreover, 20 genes responded to the regulation of the P element in both leaf and root tissues (Figure 3A and Table S5). The comparative analysis of the gene expression data also showed that 132 genes were upregulated and 193 were downregulated in the leaf expression data, and 300 were upregulated and 153 were downregulated in the root expression data (Figure 3B).

In this study, we found 1763 TFs among the whole set of genes, which belonged to 55 families (Figure S1 and Table S6). Differential analysis showed that only 58 TFs were DEGs, i.e., differentially expressed by more than twofold under the low-P treatment compared with the normal-P treatment. Further, 17 and 44 TFs belonged to the leaf and root samples, respectively. Also, three TFs were present in both the leaf and root samples. Among these, 7 TFs were downregulated and 10 TFs were upregulated in the leaf samples; and 7 TFs were downregulated and 44 TFs were upregulated in the root samples. These differentially expressed TFs belonged to 12 TF families, which contained 5 and 12 families in the leaf and root samples, respectively (Table S6).
Classification of GO Functional Annotations and KEGG Pathways for DEGs
The GO annotations were predicted to further analyze the function of the DEGs. These genes were mainly divided into three categories: biological process (BP), cell component (CC), and molecular function (MF). The genes were enriched in the metabolic and cellular processes in BP, cells and cell parts in CC, and binding and catalytic activities in MF in the leaf samples. In addition to these annotations, the root DEGs also participated in membrane and membrane parts in CC in the different treatment samples (Figure S2). We further analyzed the GO annotations of the twofold DEGs, and the top 20 enriched GO terms were displayed by q values (Figure 4). A total of seven BP, nine CC, and four MF annotations were relatively more accurate in gene structure comparisons in the leaf DEGs. These annotations were mainly clustered in cell wall-related BPs, plasmids, and oxidation-reduction functions; a small number of them were also related to the xyloglucan metabolic process (Figure 4A and Table S7). The analysis of the top 20 GO annotations enriched in the root samples revealed that the participating functions/processes of their DEGs were different from those in leaves. There, 15 BPs, 1 CC, and 4 MFs were clustered, which mainly participated in abiotic stresses, the nicotianamine-related bioprocess, and the biogenic amine biosynthetic process. Some signaling pathways, transferase activities, and other annotations were also predicted to involve these genes (Figure 4B and Table S8). The use of the KEGG database to compare and analyze the DEGs and the relevant data showed that the DEGs were mainly clustered as "Global and overview maps" in metabolism. Comparing the blasted data of the two different tissues, "Environmental adaptation" in organismal systems was significant, and more DEGs were clustered in "Folding, sorting, and degradation" in genetic information processing and in "Signal transduction" in environmental information processing in the root samples (Figure S3). The top 20 KEGG pathways of the DEGs were mainly clustered in the biosynthesis of secondary metabolites and metabolic pathways under the P treatments in leaves (Figure 5A and Table S9). In the root tissues, the DEGs focused on protein processing in the endoplasmic reticulum, cysteine and methionine metabolism, and plant-pathogen interaction (Figure 5B and Table S9). Similar to the GO annotations, a significant difference was found between the KEGG pathways of the root and leaf DEGs. Thus, the results suggested that the differential expression of functional genes might affect multiple metabolic pathways in different tissues.
Gene Set Enrichment Analysis in Whole-Expression Genes

The gene set enrichment analysis (GSEA) uses all genes, rather than just DEGs, to identify functional gene sets that are not significantly different but have similar differential expression trends and to determine whether the corresponding pathways are activated or repressed. A total of 441 GO annotations were enriched in the leaf treatments. Moreover, 365 GOs were upregulated in the ZL samples and 76 GOs were upregulated in the WL samples. A total of 223 GOs were enriched in the root treatments. Then, 200 GOs were upregulated in the ZG samples and 23 in the WG samples. Contrary to the leaf treatments, the higher enrichment score was clustered in the ZG samples, which mainly included the photosystem, transport- and binding-related functions, protein structure regulation, and other functions. Further, 76 GOs existed in the leaf and root treatments simultaneously ("go.both" sheet in Table S10). These GOs were involved in a lot of functions.
The KEGG pathways from the GSEA enrichments showed that 28 pathways were mainly enriched in the leaf treatments and 31 in the root treatments. A total of 4 KEGG pathways were upregulated in WL and 24 in ZL in the leaf treatments, and 15 were upregulated in WG and 16 in ZG in the root treatments. Alkaloid biosynthesis, ester metabolism, and nucleotide regulation were aligned in the leaf samples, and photosynthesis, hormone transduction, plant-pathogen interaction, biosynthesis, amino acid metabolism, and other modifications were aligned in the root samples (Table S10). A total of 13 KEGG pathways existed in the leaf and root treatments simultaneously ("kegg.both" sheet in Table S10). These KEGG pathways were clustered in energy and amino acid metabolism, translation, and replication and repair.
Verification of the Expression of DEGs under P Deficiency Using qRT-PCR
We selected 19 DEGs, which all had higher expression differences and good repeats in the leaf and root samples, to identify the expression of the main DEGs. The expression patterns of the 19 DEGs determined using quantitative reverse transcription polymerase chain reaction (qRT-PCR) are shown in Figure 6 (Table S15). Twelve genes (HORVU3Hr1G086500, HORVU1Hr1G073900, HORVU2Hr1G099830, HORVU3Hr1G002980, HORVU6Hr1G065240, HORVU6Hr1G077710, HORVU6Hr1G082360, HORVU7Hr1G049370, HORVU7Hr1G098280, MSTRG.19126, MSTRG.33383, and HORVU7Hr1G089910) were upregulated following a decrease in the P concentration. On the contrary, six genes (HORVU0Hr1G017690, HORVU1Hr1G000440, HORVU1Hr1G081410, HORVU3Hr1G007500, HORVU3Hr1G108670, and HORVU7Hr1G090410) were downregulated, and no significant difference was found in the expression level of one gene (HORVU5Hr1G072700) in the leaf samples. Further, seven genes (HORVU1Hr1G073900, HORVU3Hr1G086500, HORVU5Hr1G072700, HORVU6Hr1G065240, HORVU7Hr1G089910, HORVU0Hr1G017690, and HORVU7Hr1G090410) were upregulated following a decrease in the concentration, seven genes (HORVU1Hr1G081410, HORVU3Hr1G007500, HORVU6Hr1G077710, HORVU6Hr1G082360, HORVU7Hr1G049370, HORVU3Hr1G108670, and HORVU3Hr1G002980) were downregulated, and five genes (HORVU1Hr1G000440, HORVU2Hr1G099830, HORVU7Hr1G098280, MSTRG.19126, and MSTRG.33383) had no significant differences in the root samples.
Discussion
The green revolution is a key direction of crop research, which mainly focuses on the efficient use of fertilizers [26]. In this study, we analyzed "Du Lihuang" hulless barley under the low-P treatment. The phenotypic traits and the genetic correlations in roots and leaves were analyzed using RNA-seq to study genome-wide changes in gene transcription and screen existing gene resources in response to low P concentrations [27]. It is necessary to characterize the plant response to low-P treatments, especially at the physiological and transcriptomic levels, to enhance the P-use efficiency [28].
Under the P-deficiency treatment, the shoot height, root length, fresh weight of shoots and roots, and fresh weight of total roots were significantly reduced compared with those of the controls (Figure 1). These phenomena also appeared in other plants [29]. Aimen et al. also reported this result from another perspective; they suggested that a higher P concentration could increase the plant height and root length [30]. Moreover, the root dry weight was not obviously different under the different treatments [31]. These phenomena indicated that the distribution of dry matter to roots could increase under low P concentrations and the root-shoot ratio could be higher, which was similar to the results obtained by Reddy [32]. The higher root-shoot ratio could be an adaptive strategy for increasing P acquisition under the P-deficiency treatment. Liu et al. also indicated that genotypes with greater root length could significantly enhance P absorptivity under low-P conditions [33].
Compared with other studies, the decreasing content of total P and proteins acquired the same tendencies under P-deficient conditions [34].Yao et al. examined persistent deficiency of the P element, which led to the gradual decrease in the common bean total P content [35] and synchronously decreased the content of total proteins because P is a key synthetic substrate of proteins.Nadeem et al. found that P nutrition improved photosynthesis [36], and starch synthesis was closely related to photosynthesis and provided ATP for starch synthesis through photophosphorylation [37].However, our results showed that the soluble sugar content was higher under P-deficient conditions, which was attributed to low sink demand and limited leaf expansion under P starvation [38].Some other studies also reported similar results [39].The results also indicated that low P concentration could enhance plant stress resistance and antioxidant activity.Also, the related enzyme activities (SOD, POD, and CAT) [40] and MDA content were significantly enhanced (Figure 1B,C) [38].Under P-deficient conditions, plants synthesize and secrete ACP, which degrades organophosphorus into inorganic P or regulates cell wall structure, thus improving the adaptability of plants to P-deficiency stress [41].Thus, the activity of ACP was obviously higher under P-deficient conditions (Figure 1C).The analysis of the morphology databases showed that P deficiency could influence plant development; on the contrary, it could enhance plant stress resistance for plant survival.
The transcriptome information can be used to analyze the phenotypic differences under P treatments.All testing samples had a higher number of clean reads, fewer false identification in Q30 databases, and high quality of base composition, indicating that we acquired high-confidence data to ensure the accuracy of subsequent analysis [42].Plenty of clean reads were mapped to the reference genome from Hordeum vulgare ssp.vulgare L. because the cultivar "Du Liang" belonged to a branch of the barley genus.In addition, we also acquired some new genes that could provide a possibility to explore new regulatory mechanisms.In this study, we mainly focused on the expression patterns to find out the major DEGs.The sample repeats needed higher uniformity, and the sample cluster also indicated their good repeatability [43].The study provided a good data source and an important reference for subsequent data analysis.
DEGs, as important data, could directly reflect the molecular evidence of differences between samples [44].Compared with gene transcriptional expression and clustering, the function by DEGs and GO annotation and KEGG pathway analysis could provide candidate genes for subsequent related studies.These candidate genes might have a potential role in increasing P-use efficiency.We performed clustering and expression pattern analysis on DEGs under different treatments, which could intuitively highlight the differences in the expression of related genes under different treatments.Although certain differences existed in the expression of small parts of different samples under the same processing, the overall trend was still consistent (Figure 3C).
The results of GO annotation and KEGG pathway prediction of GSEA and DEGs showed that plenty of genes participated in various functions and multifarious pathways.A total of 272 and 360 DEGs were annotated in the GO term, and 77 and 113 DEGs might take part in different pathways.A total of 10 GOs (blue color in Table S10) were clustered compared with the top 20 GO annotations in DEGs.The higher enrichment score was clustered in WL samples, which mainly included carbon fixation, photorespiration, sugar-related modification, and fatty-acyl-CoA, which did not appear in the DEG analysis.Moreover, 13 GOs (red color in Table S10) were clustered compared with the top 20 GO annotations in DEGs.The photosystem and some transport and cell part components did not appear in DEGs.The results mainly indicated that these DEGs were involved in DNA, RNA, GTP, protein, and ATP binding; some transporter, enzyme, and amino acid activities; and sugar, glucose, and fat binding in the leaf samples (Figure S2 and Table S11) [45].These enzymes were mostly related to stress regulation and some phosphatase-related activities [46].Moreover, iron and metal ion binding also clustered and participated in chloroplast function and transfer [47].More enzyme activity and transporters were predicted in the root samples compared with the leaf samples.Calcium ion binding (GO:0005509) significantly appeared in the roots, and Liu also identified the correlation of calcium ions with plant response to low-P stress [48].A large number of DEGs were related to the carbohydrate metabolic process, oxidation-reduction process, and phosphorylation by BP.Except for phosphorylation, the DEGs were also clustered in the regulation of transcription in root samples [49].This also indicated that the roots were more involved in the absorption and transport of P. The KEGG results were also consistent with GO annotation findings, and DEGs were involved in photosynthesis, plant hormone signal transduction, glycolysis, phenylpropanoid biosynthesis, and the synthesis of metabolites.These pathways also appeared in other abiotic stresses [50].These databases indicated that P deficiency was closely related to stress resistance and photosynthesis.Plants initiated multiple hormone synergistic regulatory mechanisms to maintain growth under P-deficient conditions.Thus, some TFs also appeared in KEGG pathways (Figure S3 and Table S12).
TFs played an important role in plant development and could regulate gene expression at the transcriptional level such that the plants maintained normal physiological activity under stress [51].The whole genes mainly clustered in ARR-B TFs, which played an important role in plant stress defense and development according to positive regulation in the phosphorelay-mediated cytokinin signal transduction [52].AP2/EREBP (ethyleneresponsive element-binding proteins), GRAS (gibberellin), and ARF (auxin) played an important role of plant hormones in abiotic stress responses and also gathered under different P treatments [53].NAC, bHLH, WRKY, and bZIP, as the larger family of TFs, regulated plant stress, development, metabolism, and some other pathways, and also responded to P deficiency [54].FAR1, MADS, and ABI3VP1 were more associated with plant growth and light signal transduction [31].Some of these transcriptions also took part in other stresses [55].The analysis of DEGs showed that 17 TFs were predicted and belonged to ARR-B, bHLH, GRAS, MADS, and NAC (Table S13).Zhao et al. studied the response of growth characteristics and endogenous hormones of Sophora davidii to low-P stress.Five phytohormones (abscisic acid, cytokinin, strigolactone, indole-3-acetic acid, and gibberellin) were regulated by P deficiency in the leaf samples [56].Han et al. showed that the MADS TF gene (TaMADS2-3D) regulated phosphate starvation responses in plants [57].NAC TFs also underwent intensive posttranslational regulation, including ubiquitinization, dimerization, phosphorylation, or proteolysis [58].These TFs participated in regulating the P deficiency.Except for these TFs, the other six TFs were clustered in root DEGs (Table S11).Lei et al. also found that AP2-EREBP and bHLH TFs were among the most significantly differentially regulated genes identified under both Pi-sufficient and Pi-deficient conditions [59].C2C2-CO-like and C2C2-Dof belonged to zinc finger protein, which could be involved in the geotropic growth of roots, and GRAS TFs also influenced the development in roots [60].Further, P primarily acted on the roots.TIFY, WRKY, and HSP all reportedly regulated the plant growth under P deficiency.A large number of ARR-B TF-related genes were found in both leaves and roots.Therefore, the TF family was more closely related to P regulation.All of these gene clusters also revealed that a large number of TFs played an important role in improving the ability of crops to resist P starvation during growth and development.
To find out the main regulated genes under P deficiency in hulless barley, 19 genes were analyzed for their expressions by RT-PCR, which all had significant expression differences and good repeatability of each sample.Some of these DEGs participated in carotenoid biosynthesis (HORVU0Hr1G017690), arginine and proline metabolism (HORVU7Hr1G090410), and some abiotic stresses (HORVU3Hr1G007500, HORVU3Hr1G086500, HORVU6Hr1G077710, HORVU6Hr1G082360, HORVU6Hr1G082360, and HORVU7Hr1G049370).HORVU7Hr1G089910 responded to phosphate starvation [61], and some other genes did not have predicted function annotation in the GO term and KEGG pathway.Thus, we selected two different low P concentrations compared with normal concentrations to identify the expression patterns of these genes.The expression patterns showed a few differences compared with those in the RNA-seq data, but a large number of genes exhibited a similar tendency.The results indicated that P deficiency could influence a lot of pathways, and the regulation of abiotic stress, heat stock, and phosphate starvation were normally influenced in these pathways.MSTRG.19126 and MSTRG.33383, as new genes, were significantly upregulated under P deficiency in leaf samples, and low-P treatment also induced carotenoid biosynthesis, phosphate starvation, and arginine and proline metabolism in the roots.The expressions of HORVU6Hr1G065240 and HORVU1Hr1G000440, which had no annotation information, also showed significant differences at different P concentrations.The functional and regulatory mechanisms require further experimental verification.
Conclusions
According to the RNA-seq results, we analyzed the relationship between the whole gene transcriptional processes and the P deficiency response in hulless barley. The results primarily indicated that the regulatory genes participated in several pathways, including photosynthesis, amino acid biosynthesis, glycolysis, glycerolipid metabolism, carotenoid biosynthesis, and flavonoid biosynthesis. Oxidative phosphorylation and some TFs, which were related to phytohormones, could influence the transport and accumulation of P in the leaf and root samples under P deficiency. Some DEGs were enriched in phytohormone biosynthesis, photosynthesis, and some other transport processes, limiting development under P deficiency. The present study enhanced the knowledge of the enrichment of gene networks and regulatory elements under P deficiency and provided a way for future research on P-use efficiency in hulless barley.
Supplementary Materials:
The following supporting information can be downloaded at https: //www.mdpi.com/article/10.3390/life14070904/s1,File S1: Object IDs and corresponding URLs; Table S1: Difference in the phenotype, endogenous content, and enzymatic activity under P deficiency; Table S2: Statistics of RNA-seq data filtering and base information; Table S3: Reference gene coverage and gene alignment region; Table S4: Novel genes; Table S5: Gene and DEG expression in leaf and root samples; Table S6: Transcription factor information in whole genes and DEGs; Table S7: Level 2 GO terms in leaf and root samples; Table S8: Top 20 GO annotations in DEGs; Table S9: KEGG pathways in leaf and root samples; Table S10: Gene set enrichment analysis in leaves and roots; Table S11: DEG GO annotation under twofold difference expression; Table S12: DEG KEGG pathway under twofold difference expression; Table S13: Transcription factors under twofold difference expression; Table S14: Primer sequences using qRT-PCR; Table S15: The database of qRT-PCR results in 19 genes; Table S16: the expression pattern of all samples;
Figure 2. Database quality and mapping gene analysis using RNA-seq. (A) Read filter statistics for each sample; (B) the coverage of genome alignment and gene location; (C) relationships between samples (Figure S4); (D) cluster analysis of the replicates within each treatment.
Figure 3. Analysis of DEGs. The Venn diagram (A) shows the number of DEGs and co-responsive genes in the leaf and root samples. The upregulated and downregulated DEGs were also counted (B), and these gene expression patterns are shown in the heatmap (C) (Table S16).
Figure 4. Classification of GO functional annotations for DEGs. (A) indicates ZL/WL and (B) indicates ZG/WG. Circle size indicates the enrichment of gene numbers.
Figure 5. Classification of KEGG pathways for DEGs. (A) indicates ZL/WL and (B) indicates ZG/WG.
Figure 6. Verification of the expression of DEGs under P deficiency using qRT-PCR. The heatmap (A) shows the FPKM values from the RNA-seq data, and the column charts (B) show the qRT-PCR results for each gene. The blue columns denote the leaf samples, and the red columns denote the root samples.
Author Contributions: Methodology, Z.W.; Software, Z.W. and Y.C.; Formal analysis, Y.Y.; Investigation, X.Y.; Resources, K.W.; Data curation, L.A. and Y.B.; Writing-original draft, L.A.; Writing-review & editing, X.Y. and K.W.; Visualization, Y.B.; Supervision, Y.C. and Y.Y.; Project administration, X.Y.; Funding acquisition, K.W. All authors have read and agreed to the published version of the manuscript.
‘Are they out to get us?’ Power and the ‘recognition’ of the subject through a ‘lean’ work regime
Critical studies of ‘lean’ work regimes have tended to focus on the factory shop floor or public and healthcare sectors, despite its recent revival and wider deployment in neoliberal service economies. This paper investigates the politics of the workplace in a United Kingdom automotive dealership group subject to an intervention inspired by lean methods. We develop Foucauldian studies of governmentality by addressing lean as a technology of power deployed to act on the conduct of workers, examining how they debunk, distance themselves from and enact its imperatives. Our findings support critiques of lean work regimes that raise concerns about work intensification and poor worker health. Discourses of professional autonomy allow workers to distance themselves from lean prescriptions, yet they are reaffirmed in their actions. More significantly, we illustrate the exercise of a more encompassing form of power, showing how lean harnesses the inherently exploitable desire for recognition among hitherto marginalised workers, and its role as a form of ‘human capital’. The paper contributes to critical studies of lean by illustrating its subtle, deleterious and persistent effects within the analytical frame of neoliberal governmentality. We also demonstrate how studies of governmentality can be advanced through the analysis of contested social relations on the ground, highlighting the ethico-political potential of Foucauldian work.
Introduction
The automotive industry has historically been at the forefront of efficiency drives to enhance productivity and capital accumulation, from the mass production assembly line of Ford (Wilson & McKinlay, 2010), to the 'lean production' regime of the Toyota production system (Ezzamel, Willmott, & Worthington, 2001; Krafcik, 1988). Lean 'philosophies' and 'tools' are designed to match production and service provision with the market, to eliminate 'waste', costly stockpiles and shortages, thus creating a symmetry between demand and supply for the reduction of costs and the enhancement of profit. Critical analysis of lean regimes, both in the automotive industry and through its diffusion into the civil service, health and social care, demonstrates concerns about work intensification, deskilling and poor worker health (Carter et al., 2011, 2013, 2014; Charlesworth, Baines, & Cunningham, 2015; Stewart, 2013; Stewart, Mrozowicki, Danford, & Murphy, 2016; Stewart et al., 2009). Calls have thus been made for a critical analysis and understanding of the persistence of lean and its socio-cultural implications at work today (Rees & Gauld, 2017; Stewart et al., 2016).
Critical scrutiny of lean in the automotive industry has focused on the production line and the factory shop floor (Ezzamel et al., 2001; Zanoni, 2011). Servicing and aftersales, as in our case, remains a neglected area of study. Nevertheless, it is intimately bound up in the networks of power produced by major transnational automotive manufacturers at the heart of fossil-fuel driven economies (Fleming & Spicer, 2007; Rhodes, 2016), and controlled via quasi-judicial franchise contracts granted to dealership groups (Arruñada, Garicano, & Vázquez, 2001). This paper investigates an organisational intervention inspired by lean methods in a major United Kingdom dealership over a two-year period. We ask, how does lean, as a governmental technology of power, act upon the conduct and subjectivity of workers in the 'witches' brew' of everyday practice? More specifically, we ask: How are workers 'made up' through lean and its associated rationalities, and how might they resist or distance themselves from it? What kind of constraining and 'enabling' effects does lean have for workers operating in the context of aftersales and servicing? How do workers resist or consume lean amid the politics of organisational transformation?
Our analysis illustrates how workers live through neoliberal discourses that sustain the legitimacy of lean, yet they are not fully determined by them. Critiques of surveillance and workflow standardisation call upon a discourse of professional autonomy that emphasises the tacit self-knowledge of workers beyond lean prescriptions. Yet, despite these dis-identifications, lean discourses are nonetheless reaffirmed in the actions of workers (Fleming & Spicer, 2003). Moreover, we illustrate how, amid increasingly demanding organisational circumstances, lean discourses create common modes of perception (Rose, 1999) that serve to harness the inherently exploitable desire for recognition and respect among hitherto marginalised workers. This, coupled with the appeal of lean as an individualised and reproducible form of 'human capital' (Weiskopf & Munro, 2012), demonstrates the encompassing effects of lean as a technology of 'government' power within and beyond the workplace (Fleming, 2014; Foucault, 2008). This paper makes three original contributions to Foucauldian studies of work and organisation and lean work regimes. First, we demonstrate how studies of governmentality may be advanced through an examination of contested social relations in the everyday (McKinlay, Carter, & Pezet, 2012). Moving beyond the 'programmer's perspective', we argue, allows for the workplace to be conceptualised as a site of political struggle between the ethical demands of others, and what we may wish for in actualising our freedom (Foucault, 1996). Second, we identify the harnessing of the inherently exploitable desire for 'recognition' and 'respect' as a technique of subjectivisation among previously marginalised workers. Third, the paper contributes to critical studies of lean (Stewart et al., 2016) by extending critique to account for its more subtle and deleterious power effects within the analytical frame of neoliberal governmentality.
The paper is organised as follows. First, we outline the field of Foucauldian organisation studies and the importance of the perspective of governmentality. Second, we discuss lean in public and automotive sectors, detailing its persistence as a technology of 'government' power. We then discuss the research methods and data analysis undertaken. The paper then turns to the organisational intervention in question before analysing its reception among workers 'on the ground'. We begin, however, by discussing Foucauldian studies of work and organisation and their importance for this research.
An Argument for the Perspective of Governmentality
Foucault's writings shifted over time from the archaeology of knowledge and discourse to the genealogy of knowledge and power. Yet, there is neither a pre-nor post-archaeology or genealogy in Foucault, but rather clear changes in emphasis (Dreyfus & Rabinow, 1982, pp. 104-8). Foucault's work began to encompass not solely autonomous discourses as regulative 'discursive formations' (Foucault, 1972), but rather the hazardous realities and relations of power which frame 'discursive regimes' (Foucault, 1980). Foucault's genealogical work adopted a more general interpretive analytics of what forms, restricts and institutionalises discursive regimes according to specific historical power/knowledge configurations. Our thoughts and actions, the games of truth or rationalities which we play out, have a history and are a product of particular struggles and contingencies (Brown, 1998). Foucault argued that relations of power significantly curtail the degree to which the human subject can fashion their own existence. Nevertheless, his later work on the art of governing, ethics and care of the self signalled a transformative agenda and a concern for what autonomy could look like for our present (Barratt, 2008;Foucault, 1996).
Foucault's oeuvre has been influential in the field of organisation studies for over three decades. Four interweaving waves of influence, drawing on (i) discipline and disciplinary power, (ii) discourse, (iii) governmentality and (iv) subjectivity and care of the self, demonstrate a wide-ranging 'Foucault effect' (Raffnsoe, Mennicken, & Miller, 2019). Inspiration from Foucault's 'middle' genealogical period, drawing from Discipline and Punish and The History of Sexuality, volume one (Foucault, 1977, 1978), brought a welcome yet partial initial reading to the field, arguably creating a misrepresentation of his contribution (Barratt, 2008; McKinlay & Taylor, 1998; Raffnsoe et al., 2019). A focus on disciplinary power within prisons, schools and factories (Foucault, 1977) legitimised commonsense notions of concrete organisations with clear-cut boundaries (Knights, 2002). Studies of the workplace, including that of lean regimes (Barker, 1993; Sewell, 1998), emphasised omniscient surveillance through electronic, human resource and peer review systems. The constitutive nature of power became encapsulated in 'self-discipline' and in the delegation of responsibilities for performance to teams through both 'vertical' and 'horizontal' surveillance (Sewell, 1998; Sewell & Wilkinson, 1992a). Nevertheless, by depicting obedient and normalised subjects, there was little or no illustration of agency, subversion or contestation in these studies (Newton, 1998). As already 'disciplined' and docile subjects (see Barker, 1993; Sewell, 1998), workers were depicted as complicit in their own subjugation, overlooking the manner in which power and knowledge may play out (McKinlay & Taylor, 1998; Raffnsoe et al., 2019). Power became synonymous with repression, coercion and limitation, erroneously creating a dichotomous interplay between passive recipients and opponents of repressive structures. Foucault's approach to power, then, is best understood not as a theory, but as a tautology, implying that the dynamic of power/knowledge and resistance is a matter for empirical investigation.
Readings of the 'later' Foucault have opened up new possibilities for Foucauldian scholarship in the field (Barratt, 2008;Fleming, 2014;Munro, 2012). The interrelated concepts of biopower (Foucault, 2008) and governmentality (Foucault, 1982), although peripheral in organisation studies, offer critical viewpoints upon work in neoliberal societies through which individuals are not only targets of power but active in its operation (Fleming, 2014;Munro, 2012). Ambitions and desires are mobilised and hastened rather than coercively shaped through rigorous procedures alone. The instrumentalisation of a population's intuition, sociality and desire, what Fleming (2014) elucidates as 'biocracy', means that life itself, in activities, work, joys and miseries, can become politically and managerially useful (Foucault, 2008).
Nevertheless, it is perhaps the concept of governmentality that has been adopted most enthusiastically from Foucault's experimental toolkit (Barratt, 2008;McKinlay & Pezet, 2017). In contrast to discipline, 'government' is predicated on the premise that the governed will continually disrupt, adjust, resist and distance themselves from the practices of governing (Foucault, 1982). 'Government', recalling sixteenth-century connotations, forms a practical and strategic relation between the governors and the governed, the 'modes of action, more or less considered or calculated, which were destined to act upon the possibilities of action of other people. . .to structure the possible field of action of others' (Foucault, 1982, p. 790). 'Governmentality' is a link between techniques of domination and techniques of the self, implying forms of agency as power/knowledge relations structure possibilities not only in ostensibly repressive ways, but also in seemingly seductive and appealing ways. In neoliberalism delimited 'freedom' is the architect of control, as subjects are required to recognise themselves, readily or not, as entrepreneurs of themselves (Foucault, 2008). Through notions such as learning, competency, employability and career, personal choices are delimited according to a logic of self-fulfilment and through the accumulation of 'human capital' (Weiskopf & Munro, 2012). Individuals and collectives are then 'offered' to participate in action to resolve matters previously in the hands of their superiors. This can be understood as a kind of 'responsibilisation', corresponding with ways in which 'the governed are encouraged, freely and rationally, to conduct themselves' (Burchell, 1996, p. 29).
Inspired by Foucault's genealogical methods, the 'London governmentalists' (Miller & Rose, 2008; Rose, 1996, 1999) have charted modern power in 'advanced liberalism' as the effect of diverse calculative techniques, forms of expertise and professional vocabularies designed to reconfigure identities at work. Power is not exercised by the powerful but instead through 'grey sciences' (Rose, 1996, p. 54) of efficiency and productivity, those that govern by creating calculable spaces in which workers calculate for themselves, and begin to know themselves accordingly, 'to seek to maximise productivity for a given income, to cut out waste, to restructure activities that were not cost effective' (Rose, 1999, p. 153). Managerial expertise plays the role of relay between the aspirations of corporate authorities and the ambitions of individuals and groups. Forms of 'translation' in codified knowledge and practice, such as lean, produce loosely affiliated networks and attempt to construct common modes of perception. When 'translation' is achieved between the values of others into one's own terms, judgements and conduct, then rule is established 'at a distance' (Rose & Miller, 2010).
Nevertheless, there is more to governmentality than an individual's assimilation into managerial techniques and corporate networks. The discursive and 'technical' means of influence on which 'governmentality' depends necessarily must align with an individual's or group's delimited 'freedoms' for its legitimation and neutrality (Foucault, 1982). McKinlay and Pezet (2017) suggest that the 'London governmentalists' have under-theorised resistance as always inherent in power relations, instead relying on the 'programmer's perspective' (see Rose, 1999) as a textual approach to historical writing (McKinlay et al., 2012). Moreover, the governmental rationality of 'enterprise' (du Gay, 1996) has been portrayed deterministically, often without reference to individual or collective agents (Fournier & Grey, 1999). Calls have thus been made for empirical research that explores how discourses are received, modified and resisted in everyday practice (Fleming, 2014;McKinlay et al., 2012). As McKinlay and Pezet argue, 'every governmentalist technology is depicted as always producing a full-blown neoliberal subjectivity, irrespective of how promising the ground is' (McKinlay & Pezet, 2017, p. 18).
Governmentality studies of work have typically 'borrowed' from the genealogies of (neo)liberal governmentality (Foucault, 2008;Miller & Rose, 2008;Rose, 1999) and applied them to the politics of the workplace. They contrast with the discipline power/knowledge couple by placing an emphasis on the mundane and ostensibly 'liberating' aspects of modern power. Studies, for example, of teamworking (Knights & McCabe, 2003) and project management (Clegg, Pitsis, Rura-Polley, & Marosszeky, 2002) show how liberal technologies render subjects ever more calculable according to economic criteria. By querying the idea of power as repressive, insight is gained into the production of new identities, forms of agency and practices of resistance amid complex organisational circumstances.
Nevertheless, for us 'governmentality' is less a coherent theoretical perspective on 'what works' organisationally (Clegg et al., 2002), and more a comparatively open-ended approach to the politics of situated research (McKinlay et al., 2012). Equally for us it is not that 'governmentality matters' (Clegg et al., 2002), but rather that governmentalities matter (Barratt, 2008). That is to say, the perspective of governmentality does not depend on formal texts and official programmes of neoliberal rationalities alone, but also on heterogeneous social relations and governmentalities that take shape 'on the ground'. We argue that the relationship between individuals and regulative governmental technologies such as 'lean' are best examined by taking account of 'what matters to them'. This permits a non-deterministic analysis of power relations beyond the received dichotomies of power and freedom, compliance and resistance (Raffnsoe, Gudmand-Hoyer, & Thaning, 2016). By acknowledging both the productive and limiting aspects of modern power, we are sensitive to the ethico-political ambitions of Foucauldian work and the possibilities of fashioning alternative ways of being (Barratt, 2008;Munro, 2014;Weiskopf & Willmott, 2013). With this in mind, below we outline 'lean' and its associated techniques as a governmental technology of power, before turning to the case in hand.
Governing Through Lean
Lean production, commonly known simply as 'lean', has been progressively implemented in workplace organisation in industrialised economies over the last three decades. Its 'philosophy' and techniques originated in the global automotive industry, and as some argue (Bhasin, 2015; Womack, 1990), more specifically the Toyota production system (Ohno, 1988). Following Japan's apparent immunity to the late 1970s financial crisis, principles and techniques under the banner of 'lean' were imported into the US and latterly British manufacturing and service sectors (Stewart, 2013). Although lean emerged in the highly regulated economy of post-Second World War Japan, it maintains significant appeal in neoliberal capitalist economies in which market rationalities advocate consumer authority and 'efficiency' to maintain profit, compete, or survive in recessionary conditions (Stewart et al., 2009, 2016). Lean entails the adoption of so-called 'neo-Taylorist' (Crowley, Tope, Chamberlain, & Hodson, 2010) techniques for fragmenting tasks, standardising operating procedures, performance monitoring, and eradicating any work that will not produce a profit or 'customer value'. Advocates profess that it removes 'obstacles' in the flow of production and quality through 'continuous improvement' (kaizen), and through teamworking arrangements stated to 'build up a system that will allow the workers to display their full capabilities by themselves' (Sugimori, Kusunoki, & Uchikawa, 1977, p. 554). Prescriptive texts claim that lean eradicates wasted time (muda) and overburden (muri) through 'just-in-time' (JIT) delivery and inventory systems (kanban) (Bhasin, 2015). As a form of workplace control, it is deployed to streamline work processes and heighten productivity while lowering costs, often involving a reduction in the labour force (Carter et al., 2013; Smith, 2000). Advocates argue that lean is participatory and democratic by 'offering' workers the means to be more involved in response to market pressures (see Bhasin, 2015; Womack, 1990). Workers, it is claimed, have enhanced 'freedom' to control their work through decision-making authority and skills enhancement. The imperative of 'working smarter not harder' proposes that lean is not only less wasteful of resources, but supposedly less burdensome (muri) by involving employees as 'agents of change'.
Nevertheless, research in both manufacturing and service sectors demonstrates that lean involves more intensive and centralised control, deskilling, work intensification, low staff morale, poor worker health, and a strengthening of the functions of capital through performance targets, surveillance and teamworking from above (Carter et al., 2014, 2017; Ezzamel et al., 2001; Stewart et al., 2009, 2016). Delbridge (1995) plainly noted that the adoption of JIT and total quality management (TQM) in automotive manufacturing implied that workers were being coerced into 'surviving rather than resisting their exploitation' (Delbridge, 1995, p. 814). Lean regimes in the automotive industry have involved the speeding up of production and service provision with fewer workers, thus increasing the risk of work overload and poor worker health (Graham, 1995; Stewart et al., 2009, 2016). Claims of a more democratic style of management following the decline of 'Fordism' (Crowley et al., 2010), whereby responsibility is devolved to workers in the efficient design and execution of work (see Sugimori et al., 1977), are discredited in empirical accounts where, for example, kaizen represents the imperative to 'produce more or risk the sack' (Ezzamel et al., 2001, p. 1072). The claim that lean 'empowers' workers through teamwork is invalidated in the analysis of systems such as JIT and TQM (Sewell & Wilkinson, 1992b), which instil higher degrees of surveillance upon the workforce. Lean, then, is representative of a growth in bureaucratic and technical control (Carter et al., 2014) and a reduction in the discretion of middle managers and supervisors through expanding hierarchies (Stewart, 2013).
The last two decades have witnessed efforts to transfer lean from manufacturing to service and administrative sectors, and particularly to the British civil service, health and social care sectors amid the politics of 'austerity' (Carter et al., 2011, 2013; Charlesworth et al., 2015; Radnor & Osborne, 2013; Rees & Gauld, 2017). The 'new public management' regime, with its doctrine of reduced public funding, commercialism, value-for-money and management-by-measurement control (Lapsley, 2008), aligns neatly with the 'better for less' principles of lean (Radnor & Osborne, 2013). Yet, in public service and clerical work, employees have eschewed claims of 'empowerment' by pointing to a reduction in decision-making, increased workload expectations and a narrowing of tasks (Carter et al., 2011, 2013). In social care settings lean interventions have entailed overworking, low pay and stress related to overstretched budgets and resources (Charlesworth et al., 2015). Teamworking is shown not to be the harnessing of employee 'voice', multiskilling or job rotation, but rather a top-down scheme that places more demanding responsibilities and targets upon middle managers and non-managers, diffusing stress throughout the workforce (Carter et al., 2017; Procter & Radnor, 2014).
In healthcare, the deployment of lean has prioritised 'tool-based' applications as stand-alone interventions, such as value stream mapping exercises for identifying and eliminating 'wasteful' activity, and rapid improvement events where staff are required to meet, evaluate and streamline processes (Radnor, Holweg, & Waring, 2012, p. 370). 'Tool-based' approaches are thought to undermine the 'true' value of lean thinking as a broader 'model of cultural change' (Radnor & Osborne, 2013, p. 283), where kaizen denotes a broader programme of professionalisation (Radnor et al., 2012). Such perspectives demonstrate an academic 'cottage industry' (McCann, Hassard, Granter, & Hyde, 2015, p. 1559) on the subject of lean adoption in healthcare, where problems are not correlated with lean itself, but rather its incorrect implementation. While lean may improve processes such as patient turnaround time, information duplication and lengths of stay (Radnor et al., 2012), hopes of resolving overdue workplace frustrations are met with heavy workloads, insufficient resources and the view that lean is ill-suited to complex healthcare settings, where the importance of professional judgement outweighs the importance of process improvement (McCann et al., 2015). Here, a familiar critique is advanced, outlining lean as managerial rhetoric designed to obscure the reality of 'little more than an extension of Taylorism' (McCann et al., 2015, p. 1560), and where 'continuous improvement' is ascribed to almost any kind of commonsense organisational improvement as an example of leaning. Nevertheless, while the efficacy of lean interventions in themselves is dubious, its 'philosophy' and techniques persist and have a long and embedded history, particularly in the automotive industry (Stewart, 2013; Stewart et al., 2009, 2016). The 'continuous rationalization' of work (Stewart et al., 2016) involves not just the control and coordination of labour, but also the inculcation of discourses related to competency and career. The persistence of lean, as Stewart (2013) notes, is in response to the continuing crisis of twentieth- and twenty-first-century capitalism and recurring attempts to implicate workers in the 'strategic struggle' of production. It is therefore a mistake to write off lean as a collection of ill-placed techniques and managerial fantasies, given that its rationalities of efficiency, cost reduction and productivity lie at the heart of efforts to responsibilise workers for the risks and costs of modern capitalism (Stewart et al., 2016).
As a technology of power, it is precisely lean's apparent superficiality and mundanity that warrants further investigation of its effects (Foucault, 1982). Its extension as a 'grey science' (Rose, 1996, p. 54) of efficiency into automotive aftersales and servicing involves the neoliberal imperative that workers should make corporate ambitions their own (Kiff, 2000). Crucially, lean's regime of truth produces effects by encouraging individuals and groups to know themselves differently, not simply through prescriptive interventions, but also in relation to professional expertise, competency and career, extending the reach of capital into life itself (Fleming, 2014), and through the circulation of 'human capital' (Weiskopf & Munro, 2012). Lean production, then, is much more than a series of prescriptive tools and professional know-hows with rules to enforce. As a governmental technology it relies on harnessing the self-governing capacities of workers, willingly or not, for its effect. It therefore constrains subjectivities while producing others among the politics of everyday working life.
The Study and Research Method
This paper draws from a larger study conducted at an independent family-owned business, which we label VehicleCo, employing 1500 people. It is one of the largest private companies in the motor retailing business in the UK, with a chain of 34 franchised dealerships. The study focused, first, on how the company organised and governed its franchised dealerships, management and workers. Second, on how systems and processes influenced by major car manufacturers (under the banner of 'lean') were used to govern VehicleCo, its management and workers. Under the authority of major automotive manufacturers, margins in distribution, aftersales and servicing are small and downturns can have major financial implications for dealerships.
The study investigated a knowledge transfer programme (KTP) set up to implement transformational change. The objective was to improve 'efficiency', which, senior management hoped, would increase the 'capacity' of the company's resources and increase output. The KTP would instil kaizen through process improvements. It also proposed a comprehensive programme to develop the skills and knowledge of workers in line with lean thinking. The training of employees, it was argued, would allow for new skillsets that would aid recruitment practices and align performance criteria with those delimited by major car manufacturers.
A KTP associate was active in the intervention and worked across a variety of dealership sites evaluating existing processes using lean 'tools'. For example, the technique of value stream mapping (Womack & Jones, 2003) was deployed to eradicate task duplication, standardise operating procedures and reduce 'waste' in terms of labour time. The aim was to produce streamlined, convenient (and thus more profitable) customer appointments and services. The KTP associate was regularly interviewed by the researchers on the intervention's reception. We use a pseudonym, 'Work Savvy', to refer to the intervention under investigation.
The empirical research comprised interviews, observations, weekly meetings with VehicleCo's Strategic Director, attendance at management meetings, and analyses of company audit and job specification documents. Data collection occurred between January 2014 and January 2016. Audio-recorded interviews were conducted across four different company franchises. In two franchises, selected for longitudinal access, interviews were conducted in three stages: before, during and after the intervention. Data were transcribed from 74 interviews, five strategic management meetings, weekly team meetings and observations of work practices. Interviews were conducted using semi-structured questions accompanied by follow-up questions to explore in more depth the experiences of Work Savvy. As the researchers progressed, questions were refined to probe more deeply into emerging discourses. The first round of interviews typically lasted one hour, and in stages 2 and 3 between 30 and 45 minutes. For quotations, we use pseudonyms and refer to the participant's job title and stage of the research process.
Data analysis
Data analysis followed a Foucauldian approach to discourse analysis, remaining sensitive to particular 'regimes of truth' (Foucault, 2008) in official company texts, patterns of talk, assumptions and rationalities in the discursive strategies of participants (Alvesson & Karreman, 2011). The methodological commitments of Foucauldian discourse analysis propose the study of relations of power in the historical constitution of individuals and groups as subjects (Foucault, 1980, 1982). The constitution of subjectivity is dependent on norms which are facilitated through structures of recognition. Yet these norms are not deterministic and emerge and fade depending on the operation of power in specific contexts. Our analysis adopted an iterative approach between empirical material and theory to investigate how participants related to their work, others and the self (Alvesson & Karreman, 2011). We address workplace 'culture' as that which arises from particular power/knowledge configurations and problematisations. Such problematisations were evident not only in official statements, but also in the self-governing discourses of managers and workers 'on the ground' (McKinlay et al., 2012). By paying attention to discursive rules, as 'what counts as what', we remained sensitive to linguistic practices as they emerged. Selves are situated in discourse, and what becomes pertinent is not the interpretation but the discursive rule as to which it serves (Potter & Wetherell, 1987).
The two-year study enabled the research team to document changes to processes and working practices, and the responses of workers. Two researchers independently coded data using NVivo software. Each researcher reviewed interviews at stage 1 to identify the discursive strategies that employees drew upon. Four interviews were independently coded by the two researchers to develop participant-derived, first-order codes (Miles, Huberman, & Saldana, 2014). The researchers then compared the similarities and differences between coding to agree consistency (Silverman, 2012). Interviews were then independently coded, and meetings were held to discuss themes. The same procedure was followed for stages 2 and 3. Thereafter, the two researchers developed an iterative dialogue between data and theory to identify relationships between first-order codes and group them into second-order codes. The latter were refined and grouped into more robust categories, and theoretical themes were established. The final phase of coding identified discourses that highlighted distinct relations of power in the context of the study (Foucault, 1991). This analytical process resulted in the identification of three primary categories of discourse outlined in the 'Analysis' section below. First, however, the paper turns to a summary of the 'Work Savvy' intervention.
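The agreement step between the two coders described above was resolved through discussion and consensus rather than a statistic, so the following is purely an illustrative aside and not part of the authors' procedure: one common way inter-coder agreement is quantified elsewhere is Cohen's kappa, sketched here with hypothetical first-order codes assigned to the same interview segments.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders assigning one categorical code per segment."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical first-order codes for five interview segments
coder_1 = ["staff_shortage", "surveillance", "autonomy", "recognition", "surveillance"]
coder_2 = ["staff_shortage", "surveillance", "recognition", "recognition", "surveillance"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # ~0.72 for this toy example
```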
'Work Savvy' - the governmental intervention
VehicleCo's business strategy was 'to be a "world class" retailer delivering the best customer experience in a fun and expert way' (Work Savvy proposal document, August 2013). The aftersales division, dealing in maintenance and repairs, was problematised by senior management as failing to generate income. Aftersales, it was argued, could provide a greater source of profit that would insulate the company against seasonal variations. The strategic director identified 'lean' methods as a favourable approach. By assessing and standardising work in terms of its predictability, 'wasted' time would be eradicated and job turnaround time would be lessened, allowing for the sale of maintenance and repairs to more customers and thereby increasing profits. The expertise to deliver this change was thought not to exist within the company itself, and previous attempts to use consultants had failed (notes from meeting, July 2013).
The intention was to undertake transformational change throughout the whole dealership network. Lean processes were to become 'embedded' (project proposal document, August 2013) among the workforce. The project was branded as 'Work Savvy', officially commencing in January 2014. VehicleCo's senior management sanctioned a review to assess existing lean knowledge and practice, and 'readiness' for change, in order to develop and test Work Savvy. Obtaining 'buy-in' from employees was considered necessary before inculcating lean thinking. After the review, the company sought to redesign the flow of work and remove 'wasted' activity, thereby improving customer turnaround and satisfaction. The aim was to 'involve a wide spread of staff across the project' with a view to 'standardising work and improving efficiency' (Work Savvy briefing document, 2013). Time and motion studies of technicians, including process maps (or 'spaghetti maps'), were conducted, videotaped and reviewed by management and Work Savvy teams. The aim was to identify and eliminate 'waste' and produce standard operating procedures for predictable work. The Work Savvy programme aligned with audits from car assemblers which stipulated times and processes. Teams included those who were 'championing' Work Savvy, typically made up of volunteers. Senior management monitored the progress of changes through the 'continual improvement' plan, do, check, act (PDCA) cycle (see Deming, 1950).
Senior management agreed (project proposal document, August 2013) that the first year of Work Savvy would be spent on 'groundwork' for transformational change. Weekly meetings were held at head office where the KTP associate's activities and outcomes were communicated. In May 2014, the strategic director 'pushed' for the use of a 'pilot site' as a way to 'promote' Work Savvy and demonstrate its benefits across the company (notes from weekly meeting, May 2014). In June 2014, an induction event took place at the pilot site introducing Work Savvy for aftersales. In late 2014, two more pilot sites were identified by the strategic director (notes from weekly meeting, December 2014). Weekly meetings focused on 'efficiency' in the aftersales function, and how effective attempts to achieve employee 'buy in' had been. Additionally, 'Masterclass' training sessions (see Pullin, 1998) were introduced focusing on the structuring of processes for technicians and customer service staff. Furthermore, a project team made up of employees engaged with external lean consultants who put on training sessions inclusive of 'exemplary' lean practices from different industries. In May 2015, drawing on observed 'best practice', the intervention incorporated the rationalisation of work into standard operating procedures, grouped as follows: (i) predictable work (the green lane), (ii) may cause unanticipated issues (the amber lane) and (iii) could become complex (the red lane). In the section below, we discuss three prominent categories of discourse that emerged in our analysis.
Staff shortages and overworking
A consistent discourse running throughout the interviews we conducted at multiple dealership sites centred on staff shortages. Both managers and workers bemoaned that staff shortages not only were producing stress through overworking, but that such conditions were significantly curtailing time available to contribute to Work Savvy. Jamie, a dealership manager, for example, commented that 'resource has been tough, particularly since the 2008 crash'. The industry had become 'scared of having too much resource, and so we find where we might think two administrators would be okay, instead of working one really hard, we're not, we're working one very hard' (Jamie, business centre manager, stage 1, 28/06/2015). Working during days off and during holiday time was considered commonplace, with one account manager, Steve, stating that: If you look downstairs at the moment, there's a chap who's on holiday. He's in sorting cars out. . . You quite often come in on your day off to sort stuff out. . .it is quite stressful, because you have to do everything. (Steve, account manager, stage 1, 22/6/15) As noted elsewhere (Stewart et al., 2016), workplaces subject to lean interventions are characterised by a form of responsibilisation through which workers are made accountable, directly or indirectly, for labour utility problems. Claims of 'empowerment' and participation (Bhasin, 2015;Womack, 1990) are therefore misleading given that, when workers assume more decision-making responsibility, they do so in combination with a heightened responsibility for the effectiveness of their labour and that of their colleagues. As the above accounts show, this can give rise to overworking and work-related stress (Carter et al., 2014;Stewart et al., 2016).
Participants also emphasised long working hours, which, for some, were indicative of a 'sink or swim' working environment, denoting the sense that 'all you're doing is standing there with a hosepipe' (Simon, service advisor, stage 2, 22/12/2015). As Simon, a service advisor, elaborated: 'Well, again, to get support, you need people, and there aren't any people.' Within this context, Work Savvy was frequently understood to be exacerbating, rather than alleviating, stressful working conditions. As Paul, another service advisor, commented, 'Well, to be honest with you, at first, I think that [Work Savvy] was quite frowned upon. . . It is a lot of stress and strains when you take people out of a dealership for any period of time' (Paul, service advisor, stage 2, 29/12/2015). The issue was that 'it's difficult to put something like this in place when there's simply not enough staff. . . to do something smartly, you need time, and nobody has any time at the minute' (Simon, service advisor, stage 2, 22/12/2015). One's obligation to become involved in Work Savvy was, for some, undermining the more immediate responsibilities of fulfilling day-to-day tasks. Working 'smartly', then, seemed beyond reach for some of these workers. Such accounts support claims that lean interventions, by attempting to involve employees in the 'strategic struggle' of production, deepen experiences of work intensification and work-related stress (Carter et al., 2013; Stewart, 2013).
Surveillance, professional autonomy and distancing
Against the background of intensive working conditions participants regularly commented on the nature of surveillance and its effects. Eric, a dealership manager, worried about workforce morale, outlined his concerns in regard to new time and motion studies and the monitoring and accounting of performance to reduce 'wasted' activity: Some of these guys in the aftersales might have been there twenty years and then [for] somebody to say, 'Right we're monitoring on this, this, this and this, it's just changing, you have to.' This account taking is not well received, and they [workers] view it like a negative. (Eric, general manager, stage 1, 24/6/2015) For those subject to monitoring, discursive strategies regularly called upon a discourse of professional autonomy to counter surveillance and the rationalisation of tasks. As Brad, an experienced technician, commented when discussing the initial stages of Work Savvy: It's just. . . I think at first when people were in watching you, I think that's a bit nerve-racking for everybody. . . And everyone was a little bit under pressure. There's nothing worse than being watched by someone, like how you're doing something. In Brad's statement his professional experience ('it's something you've done for years') is destabilised in the process of being observed by others in the early stages of Work Savvy ('there's nothing worse than being watched by someone. . . especially when it's something you've done for years'). Brad, then, is struggling with practices designed to assess and standardise his performance, those which are thought to detract from his autonomy as an established and experienced professional. His comments are not directed at a particular group of professionals or colleagues, but instead the techniques of power by which his performance is to be measured, inscribed and transformed (Rose, 1996). Such accounts are indicative of a lack of professional agency in shaping the manner in which standard operating procedures are to take shape, and where 'teamworking' equates to 'claustrophobic monitoring' (Carter et al., 2017, p. 463).
Elucidations on the theme of professional autonomy were commonplace among technicians, recurring during discussions about Work Savvy. As Marcus, another experienced technician, remarked: Back in the day. . . you know, if you messed up there's no one breathing down your neck. Your manager would take you in the office, explain to you, you'd get shouted at and then you'd come out. But at the moment. . . everyone's got that feeling now of like, 'Am I being watched? Is someone watching us on the CCTV? Is someone checking all my paperwork? Are they out to get us?' (Marcus, technician, stage 2, 4/1/2016) In this account Marcus describes the power effects of a 'vertical' system of surveillance analogous to Sewell and Wilkinson's (1992b) analysis of the JIT/TQM labour process. Nevertheless, rather than a 'superstructure of control' to which docile bodies do not respond other than to submit (Foucault, 1977; Newton, 1998), Marcus is critiquing the effects of disciplinary power, illustrating how the governed continually distance themselves from the practices of governing (Foucault, 1982). The ethical demands of others, in this case, do not align with what Marcus wishes for in actualising his own professional freedom (Weiskopf & Willmott, 2013). Marcus is responding as an inventive and thinking worker with capacities of his own (Barratt, 2003, 2008; McKinlay et al., 2012), and calls upon historical ideals of clear hierarchical relationships ('Your manager would take you in the office, explain to you, you'd get shouted at and then you'd come out') to counter the moral efficacy of workplace surveillance deployed to increase his productivity.
By standardising operational procedures, Work Savvy was, for some, discouraging established professional practices that had been developed over years. Dave, a technician, stated that 'changing the way you've worked for years [is the biggest challenge]. . . If you're in a routine, it's hard to alter that routine without overlooking something. . . Everybody does every job differently' (Dave, stage 2, 29/7/16). Within the discourse of professional autonomy discursive strategies were not only set in opposition to the standardisation of procedures, but also emphasised concerns about deskilling. In these incidences, the effects of workflow standardisation were thought to place limitations on the possibilities for both job satisfaction and professional development. Tom, a technician, stated that: One of the lads had had enough of the green lane and asked to be moved, because it was pretty much servicing all day, every day. So it can get quite mundane. You're repeating stuff over and over and over again. (Tom, technician 2, stage 3, 20/12/2015) For Dave, the monotony of tasks also posed a threat to his ability to learn a sufficiently broad range of skills, given that 'you get people set in a routine because they're doing the same job all the time', and therefore, '[you're] not getting the experience of our trade' (Dave, technician, stage 2, 29/7/16). One's potential standing in the labour market, and therefore one's 'human capital' (Weiskopf & Munro, 2012), were considered to be under threat for these workers. Such accounts contest claims that teamworking through lean provides employees with control over their work, and instead emphasises vulnerability, ambivalence and a lack of agency over the rationalisation of tasks. Indeed, as another technician, Luke, aptly summed up, 'I don't have any control, you get what you're given' (Luke, technician, stage 1, 27/06/2015).
Aside from the notable discourse of professional autonomy, participants also found ways in which to express cynicism in order to distance themselves (Fleming & Spicer, 2003) from prescribed subject positions delineated through Work Savvy. Discursive strategies questioned aspects of the new governmental regime which participants felt detracted from, or ignored, who they were as experienced and knowledgeable workers. In these incidences, participants articulated ambivalence concerning their own role as a vehicle for power through lean discourses (Foucault, 1982). As Dave, a technician, argued: I don't know, they seem to think they want you 100% efficient and they think you've got nothing better to do than fill out these silly little bloody 'be Work Savvy' things. But it's like being back at school. We don't want that. As long as we do the job and we do it correctly and on time. . . they [senior management] think you've nothing better [to do], they think you've got loads of time on your hands. (Dave, technician, stage 1, 27/06/2015) The obligation to take part in practices intended to shape appropriate conduct detracts from Dave's and his colleagues' understanding of organisational matters, and of themselves. 'Investing' in Work Savvy is not considered a means to achieve meaningful engagement with work. Such a perspective not only calls upon a discourse of professional autonomy, but also serves as a critique of the 'conduct of conduct' (Dean, 1999), illustrating that governmentality is determined only insofar as what is wanted from workers matches what they want for themselves. The requirement to inscribe a 'version' of oneself in Work Savvy training materials detracts from being able to recognise oneself as a dependable worker ('As long as we do the job and we do it correctly and on time'). 'Resistance' thus manifests as a kind of 'irresponsibility', expressed in a reluctance to fully recognise oneself as being involved in the 'strategic struggle' of Work Savvy. Being responsible for one's performance is therefore not synonymous with the rationalities and principles of lean (McCann et al., 2015). Rather, lean is understood to undermine the day-to-day operation of running a productive dealership, reducing the time needed to perform well.
'Recognition' and the production of new identities through Work Savvy
In this section we examine the more 'productive' aspects of governmental power exercised through Work Savvy. Here, we ask what kind of subjectivities and experiences it served to 'produce' in this context. We now have a better understanding of the discursive strategies deployed by participants emphasising a 'distancing' (Fleming & Spicer, 2003) from Work Savvy, often construed as a time-consuming encumbrance that overlooked who they were as experienced and autonomous professionals. Nevertheless, in conditions in which the value of one's labour was at stake, Work Savvy was at times both seductive and seemingly necessary for managers and workers. The following section illustrates that the negotiation of subject positions among the artificial 'freedoms' of this governmental intervention is both complex and contradictory; power both produces and constrains (Foucault, 1982). John, for example, discussed the personal benefits of partaking in Work Savvy: I'm respected a lot more by the management. . . I seem to get approached a lot, so I think it has been positive, being part of the Work Savvy team. And that's one of the reasons I did stick by it. . . it could lead to better things. (John, technician, stage 2, 4/1/16) In John's comment subjectivity is a process of becoming as he recognises himself in light of the power relations in which he is involved. Being a part of the Work Savvy team is directly related to a heightened sense of self-respect. The appeal of Work Savvy as an ostensibly 'liberating' technology of governmental power is emphasised, as John addressed it as a means to enhance his individual 'freedom' in relation to his career and the wider labour market ('it could lead to better things'). Rather than discussing the benefits for the workforce, John is turning in on himself as an individual in recognition of his own 'human capital' (Weiskopf & Munro, 2012). Problematising one's identity as a worthy contributor in the eyes of one's superiors gives rise to a responsibility to oneself, and not to one's colleagues. This would also suggest that under conditions of stress and overworking, a preoccupation with identity concerns may support a more concrete sense of one's own significance (Knights & McCabe, 2003). Yet, at the same time, Work Savvy provides 'liberating' potential by highlighting the worth of workers such as Mark, another technician, and his colleagues to others. As Mark commented, 'Work Savvy seems to be pushing the case of people going, "Right, we'll stop and listen to the technician because he knows how to do his job and he's been doing this a lot longer than most people"' (Mark, technician, stage 2, 4/1/16). Rather than addressing his new-found visibility in terms of surveillance and discipline, the Work Savvy intervention appears as a way to demonstrate a particular set of concerns about the value of one's contribution. At the same time, however, this form of 'empowerment', through the production of new 'lean' identities, aligns these workers with the governmental schemes that define them; as those who must produce more 'output' as responsibilised managerial subjects. Recognising the self as a 'lean' subject, then, produces a more concrete sense of self as one's ambitions become more intimately and subtly aligned with corporate objectives (Rose, 1999). As another technician commented: By encouraging contributions from workers, the governmental rationality of lean, in this case, hails a new form of managerial agency for these technicians.
By actively disclosing one's abilities and capabilities to superiors, one becomes more amenable to intervention and evaluation, while simultaneously assuming more responsibility for productive activities. These workers are thus assuming ownership of responsibilities that were previously in the hands of their superiors. Yet at the same time, as a technology of agency and performance (Dean, 1999), lean serves to animate these workers to act on themselves as they become more 'aware' of their new identities within a network of recognition. Discipline and surveillance do not adequately explain these power effects, insofar as work intensification is intimately tied up with a heightened sense of self-control and self-government. Surveillance is thus 'designed in to flows of everyday existence' (Rose, 1999, p. 234) and it does not take shape as exhaustive regulation (see Sewell & Wilkinson, 1992b). Instead, a devolved governmental framework is deployed to frame the ways in which choices and decisions are to be made in the interests of 'efficiency' and in relation to one's involvement in the 'strategic struggle' of production. Productivity and efficiency become personal matters, so much so that these workers actively construct the means by which productivity is to be increased.
Discussion and Conclusion
Our findings make a novel contribution to critical studies of lean regimes (Carter et al., 2013, 2017; Stewart et al., 2016) by illustrating its deleterious and subtle power effects within the analytical frame of governmentality studies. Overworking, stress and the detrimental effects upon worker health were clearly evident within what was dubbed a 'sink or swim' working environment. In addition to a strain on labour, articulated most expressively in relation to staff shortages and a lack of available time, participants also challenged claims of working 'leaner' and 'smarter' by drawing upon a discourse of professional autonomy to emphasise their tacit knowledge and experience. This, in combination with a critique of enhanced surveillance and workflow standardisation, called into question the moral efficacy of this lean intervention, given that for some it detracted from, or ignored, who they were as experienced and knowledgeable professionals. Concerns about deskilling were articulated in relation to standardised operating procedures that were thought not to provide a sufficient range of tasks or skills necessary to maintain a trade or produce job satisfaction. Such findings add a new viewpoint upon critical studies of lean, where overstretched resources produce stress (Charlesworth et al., 2015), professional self-knowledge is seen to be antagonistic to narrowly defined lean process improvements (McCann et al., 2015) and teamworking is often indicative of a lack of professional agency over how work will be organised and executed (Carter et al., 2017). Expressions of ambiguity in our analysis related to struggles over which fields of judgement were in play, be that professional or managerial, illustrating that this governmental intervention did not cultivate its subjects exhaustively (McKinlay et al., 2012). For some, 'investing' in Work Savvy was not considered to be a meaningful way in which to engage with work, demonstrating that lean must align with an individual's or group's delimited 'freedoms' for its legitimation and neutrality (McKinlay & Pezet, 2017). Yet, responses to lean principles of employee involvement and 'respect' (Bhasin, 2015; Sugimori et al., 1977) did not always constitute explicit acceptance or rejection. Rather, employees dis-identified with managerial prescriptions and subject positions and developed a kind of professional 'irresponsibility' towards the 'strategic struggle' of production. Although lean discourses were shunned through these practices of 'dis-identification', they were nonetheless reaffirmed, often begrudgingly, in the actions of these workers (Fleming & Spicer, 2003). The possibility for and indeed the very notion of practical refusal was noticeably absent in the discourse of these participants.
Without downplaying the undoubtedly harmful effects of lean, we suggest that its persistence in work and organisation can be better understood through a more encompassing analysis of its effects. The perspective of governmentality demonstrates how in increasingly demanding circumstances (Stewart et al., 2016), and despite the discontent of many workers, the boundaries between work and life can become less distinct (Fleming, 2014; Munro, 2012). Subtler power effects were evident in the harnessing of the inherently exploitable desire for respect and recognition among workers. Within this frame, acknowledgement of one's labour was facilitated through a common network of perception, where lean principles of efficiency and productivity began to translate into one's own terms, judgements and conduct (Rose & Miller, 2010). For some, then, the effect of the Work Savvy intervention produced 'self-respect', where the issue of work intensification was diluted through networks of mutual recognition (Knights & McCabe, 2003). Lean was surprisingly well received among technicians after it ostensibly enabled them to 'think for themselves' (Knights & McCabe, 2003, p. 1613) as 'worthy' decision-making subjects. This process of identity affirmation gave rise to enthusiasm, not explicitly for lean itself, but instead as a way out of historical identity-related problems as estranged workers. The role of 'recognition' and 'respect' through lean as a form of 'translation' (Rose, 1999), then, points to a subtle technique of subjectivisation in this context.
Moreover, lean was addressed as a form of professional competency that could potentially assist one's prospects in the eyes of superiors and in the wider labour market, possibly leading to 'better things'. The reframing of labour as individualised 'human capital' illustrates an important effect of lean within the broader frame of neoliberal governmentality (Weiskopf & Munro, 2012). Not only were workers targets of power as disenfranchised professionals, they were also active in its operation through neoliberal modes of self-government (McKinlay & Pezet, 2017). Implicitly, such examples show how economic security and risk are at play in the constitution of the 'lean subject', where an individual cost-benefit analysis of gains and losses can permeate into a broader life project (Fleming, 2014) amid increasingly competitive and precarious circumstances (Stewart et al., 2016).
The perspective of governmentality shifts beyond 'discipline' not only by querying how lean places restraint on the freedom of skilled workers (Carter et al., 2011;Sewell & Wilkinson, 1992a), but also by querying the conditions by which specific forms of agency are animated, and at what cost (McKinlay et al., 2012). Our analysis has sought to go beyond reductive dichotomies such as power and freedom, compliance and resistance (Raffnsoe et al., 2016), pointing to a more intimate form of power than the genealogy of discipline can explain. Power did not explicitly dominate subjects, but instead presupposed the autonomy of agents in attempting to align them with centralised corporate objectives. A contribution of this paper has thus been to show that the perspective of governmentality can be adapted to address strategic interventions on the one hand, and the divergent responses of 'real' actors on the other; those who are resourceful subjects with histories and capacities of their own (Barratt, 2003).
Lean, as a technology of governmental power, is one way for capital to endeavour to increase its hold on labour through a more encompassing and competitive framework of reasoning (Dean, 1999;du Gay, 1996;Rose, 1999). Nevertheless, as our analysis of an automotive dealership demonstrates, the virtuous claims of upskilling, job rotation and employee voice (Womack, 1990) are met with accounts of work intensification (Stewart, 2013), claustrophobic monitoring (Carter et al., 2017) and discourses of professional autonomy deployed to critique a lack of agency over the pace and standardisation of work. However, we suggest that the study of responses to prevailing economic rationalities in particular socio-historical contexts can provide a more nuanced perspective upon their effects and costs (McKinlay et al., 2012). In doing so we have highlighted some of the more subtle power effects of lean at work today, remaining alert to post-disciplinary forms of power in a neoliberal society (Fleming, 2014).
Although the incidences of identity affirmation we observed may be understood and felt individually as 'empowering', the costs for collective labour and the quality of work are unmistakably deleterious. Security, in this case, comes as a matter of aligning with representative truths, those that solidify individualised material and symbolic possibilities in increasingly competitive and precarious circumstances (Foucault, 2007). If what is understood as intrusive surveillance, for some, becomes identity-giving for others, then there is a pressing need for a more radical critique of the persistence and politics of lean interventions, and post-disciplinary work, in work and life today. One way in which this project could take shape is by exploring how discourses of professional autonomy may evolve into a broader transformative politics of the workplace. As Foucault (1996, p. 448) noted, 'the concept of governmentality makes it possible to bring out the freedom of the subject and its relationship to others, which constitutes the very stuff of ethics'. Governmentality on the ground, as we have observed, implies that social relations are a site of struggle between the ethical demands of others, and who we might wish to become in actualising our freedom (Weiskopf & Willmott, 2013). Following Foucault, we suggest that this freedom is fundamentally political. To that end, it must involve a care of the self that goes beyond prescriptive neoliberal imperatives, and where 'being free means not being a slave to oneself' (Foucault, 1996, p. 437).
Chris Hicks is Professor of Operations Management at Newcastle University Business School in the UK. Chris has a strong interest in lean manufacturing and has participated in European Regions for Innovative Manufacturing, an EU-funded project that has helped transfer lean expertise into small companies throughout the North Sea region of Europe. He also undertook a project that evaluated transformational change in the National Health Service in the northeast of England.
Tracy Scurry is a Senior Lecturer in human resource management at Newcastle University Business School in the UK. Her research focuses on careers from the perspectives of the individual and the organisation. She has worked with a number of organisations, in the public and private sectors, evaluating the processes and impact of organisational change. Work to date has adopted a multi-stakeholder perspective, examining both the organisational and individual implications of the practices for all those involved. Current research interests include graduate careers, underemployment, career resilience, dual careers, mobility and organisational change.
The Assessment of the Quality of Existing Parks along Phewa Lake, Pokhara, Nepal
This paper aims to identify the quality criteria to assess the quality of parks and examines the quality of urban parks along Phewa Lake in Lakeside, Pokhara. After a comprehensive study of relevant literature, six criteria are selected for the quality assessment of the urban parks under study; Accessibility, Distinctive Characteristics, Activities, Condition of landscape elements, Condition of utilities and services, and Cleanliness. The criteria are further divided into sub-criteria for the study. Field visits, observation, and key informants’ interviews were conducted to collect data and triangulate results. The findings revealed the existing quality of the urban parks in terms of the selected criteria of assessment. The quality condition of a park differs from one criterion to another and also differs from one park to another. The results from the study will be beneficial in improving the quality of the parks by focusing on the criteria where the park is lagging.
Introduction
Parks are public spaces that encourage both social and individual well-being. They serve as venues for a variety of social and athletic activities and provide diverse options for people regardless of age, gender, class, or religion. People can actively participate in leisure activities and passively develop experiences in the park (Pokharel & Khanal, 2018). Parks, as one of the major community features, provide various physical, psychological, social, economic, environmental, ecosystem-service, sustainability, and many other benefits. The significance of urban parks is related to the well-being of the urban population, and parks play a vital function in the urban system. Parks provide a backdrop for various physical activities, which helps boost mood and a sense of well-being. They help to break the monotony of buildings in urban areas by offering green and attractive glimpses of nature. A park is also a place of social value, as it facilitates social interactions, enhancement of social capital, and better social integration, and attracts people. The areas around a park typically have better economic values, and parks also provide environmental benefits such as environmental preservation and purification, pollution reduction, and temperature moderation (Bedimo-Rung, Mowen, & Cohen, 2005). Empirical evidence shows that the availability of parks and other natural assets like forests and greenery in urban areas helps to enhance the quality of life and improves the livability of places through environmental, social, economic, aesthetic, and psychological benefits (Chiesura, 2004).
To capture these various benefits, parks first need to maintain their quality. Various research on quality assessment has been conducted, but a unified set of assessment criteria has yet to be determined due to the varying nature and scale of the environment. Furthermore, each study has its own definition of environmental quality. In this context, the Neighbourhood Green Space Tool (NGST) was developed as a quality assessment tool for quality assurance. The tool is divided into six categories: accessibility, recreation, convenience, natural features, incivilities, and usability. It is simple and can be used by an independent observer to assess quality. The assessment is based on visual quality, maintenance, and the presence and quality of various features (Chu, Li, & Chang, 2021).
The quality of a park is counted as one of its major assets and is described in terms of different features and characteristics, among which maintenance and cleanliness are prominent.
Likewise, quality descriptions cover facilities such as the provision of playgrounds and dog parks. Similarly, amenities include parking, restrooms, pathways, landscape furniture, etc. The aesthetic features of the place also count toward park quality. In addition, incivility, which captures problematic aspects (such as litter, noise, vandalism, and safety issues), also defines the quality of a park (Chen, 2020). Similarly, the Urban Land Institute has prepared a report based on interviews with related professionals and experts to provide a framework for park quality. The objective of the report is to provide guidelines to park developers and managers to effectively assess the quality of parks and help them make proper decisions on investment and on different strategies to improve quality. Five features are used to identify the overall quality of parks: physical condition, accessibility, user experience, community relevance, and adaptability (Urban Land Institute, 2021). Accordingly, high-quality parks are identified as follows:
1. High-quality parks are in excellent physical condition.
2. High-quality parks are accessible to all potential users.
3. High-quality parks provide positive experiences for park users.
4. High-quality parks are relevant to the communities they serve.
5. High-quality parks are flexible and adaptable to changing circumstances.
Park quality can also be assessed in terms of the relationship between users and the environment, based on sensorial, emotional, and mental relationships. The table below shows that this park quality assessment is divided into five broad categories, and each category is further elaborated through different points. It covers the diverse aspects of quality assessment, from physical aspects to user perception and behavior (Ter, 2011). Likewise, Kasyanov and Silin (2019) developed an assessment tool consisting of multiple criteria for the monitoring and management of parks; Table 2 shows that park function and security have been taken as the criteria to evaluate the integrated quality of the urban park (Kasyanov & Silin, 2019).
Pokhara, the study area, is considered the country's tourist capital (Pokharel & Khanal, 2018), with the spectacular natural beauty of the Himalayan range, lakes, hills, caves, gorges, and other interesting natural and cultural features. Among these, Phewa Lake is one of the most attractive tourist destinations and the most popular and most visited lake in Nepal. To enhance the beauty of the lake, various initiatives have been taken. In this context, some parts of the open spaces along the lake have been used as parks. These parks are not only important from the tourism point of view but also provide city dwellers with various recreational, social, cultural, economic, health, and other opportunities. The parks directly contribute to the conservation of the natural environment around the lake and also discourage encroachment activities.
Management is essential to the long-term viability of public areas. Unfortunately, only 44% of the open spaces in PLMC (Pokhara-Lekhnath Metropolitan City, the former local body) are managed, and the remaining 56% are not properly maintained. Therefore, for the improvement of quality of life and better livelihoods of local people, priority should be given to the protection, conservation, and development of open spaces (Pokharel & Khanal, 2018). Acknowledging the importance of quality assessment of the parks, the general objective of this paper is to examine the quality of selected parks along Phewa Lake. The specific objectives of the study are:
• To find out the relevant criteria for the assessment of the quality of parks under study
• To assess the quality of parks based on the derived criteria
• To identify the strong and weak aspects of the selected parks
The results of the study are intended to be valuable in terms of generating ideas for deriving tools for the quality assessment of parks. The assessment of park quality provides information on areas that need to be improved, allowing required measures and planning to be taken to improve the situation. The evaluation results are crucial not just for improving existing parks, but they may also be used to develop new open spaces. Thus, the rest of the paper is outlined as follows: Section 2 includes materials and methods with an overview of the study area, data collection, and methodology; six parks along Phewa Lake have been chosen for the study, and the quality criteria for the research are based on the literature review. Section 3 consists of results and a discussion, where the parks under study were assessed through the collection and analysis of both primary and secondary data. Finally, Section 4 concludes the research with key findings, practical implications of the paper, research limitations, and recommendations for future studies.
Methodology
A deductive research methodology with objective measures was adopted in this study. The study was conducted in different phases, viz. literature review, data collection, data analysis, and finally results and discussion. Firstly, literature on quality criteria for parks and urban spaces from various academic publications was compiled and studied. After a careful review of previous studies, the criteria for the assessment of park quality were identified in terms of 6 main criteria and 28 sub-criteria, and these criteria and sub-criteria have been used to assess the quality of the parks under study. To collect information on the parks as per the criteria, site visits were made. Data were collected through observation and key informant interviews. Both primary data and secondary sources were used to collect relevant information. The collected data were compared and analyzed against the criteria and sub-criteria. Finally, the results were presented and discussed.
Introduction to Study Area
Besides the sceneries, the park is historically important, as this is the place where the dam of Phewa Lake was constructed in 1961 AD. Even though the dam has been replaced with a new one, the inauguration manuscript can still be found and is now used as one of the landscape features. The park space is planned with an emphasis on providing a setting to view the scenery, so spaces adjacent to the lake are relatively better planned and maintained than other parts. Despite the park's aesthetic and historical value, the majority of locals and tourists are not aware of its existence. This is partly because it shares the same entrance as the Ministry complex, and most people think it is a part of the complex rather than a public park. The park is located to the north of Machhapuchhre Pratibimba Park, across Phewa Lake. From the park, beautiful sceneries of Phewa Lake, Rani Ban (forest), and the mountain range can be seen. The park is planned with seating spaces, greeneries, and a boating access point. Here visitors mainly come to enjoy the sceneries, boating, relaxing, walking, etc. Although the park is mostly in use, one can notice its dilapidated condition at a single glance.
Komagane Park (KP)
This park is more popular among locals than tourists. Religious events, sports, walking, gathering, and picnicking are major activities inside the park. Ban (forest), it provides space for a wider view of the lake. International Yoga Day is celebrated as a major event in the park, along with other occasional religious events.
Accessibility
Accessibility is one of the major components influencing park usage. The existence of urban parks with good accessibility contributes to the quality of life of the urban population (Błaszczyk, Suchocka, Wojnowska-Heciak, & Muszyńska, 2020). Thus, in terms of accessibility, Basundhara Park and Gaurighat Park are found more favorable than the others, as they fulfill all the sub-criteria under this heading. By the same measure, Machhapuchhre Pratibimba Park was found least favorable. In this park, universal accessibility is limited to some parts only, and there is difficulty in finding the proper entry: most first-time visitors hesitate to enter, as it looks like the premises of the Ministry complex rather than a public park. Along with this, people are not allowed to enter the park after it gets dark. The chart shows that accessibility in Gaurighat Park and Basundhara Park is better than in the other parks, as these parks fulfill all the sub-criteria supporting quality, whereas Machhapuchhre Pratibimba Park is found to be the weakest, as it fulfills only 4 sub-criteria.
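The criterion-by-criterion tallying used throughout the assessment can be illustrated with a minimal sketch. The Yes/Partial/No rating scheme follows the abbreviations used in the study's tables, but the specific ratings and the number of sub-criteria shown below are illustrative placeholders, not the study's recorded data.

```python
# Minimal sketch (not the authors' actual scoring sheet): tally how many
# sub-criteria each park fulfils, mirroring the Y / PA / N / NA ratings
# used in this assessment. The ratings below are illustrative only.

RATING_SCORE = {"Y": 1.0, "PA": 0.5, "N": 0.0, "NA": 0.0}

def criterion_score(ratings):
    """Sum Yes/Partial/No ratings across one criterion's sub-criteria."""
    return sum(RATING_SCORE[r] for r in ratings)

# Hypothetical accessibility ratings for eight sub-criteria per park.
accessibility = {
    "Gaurighat Park":                ["Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y"],
    "Basundhara Park":               ["Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y"],
    "Machhapuchhre Pratibimba Park": ["Y", "Y", "N", "PA", "N", "Y", "N", "N"],
}

for park, ratings in accessibility.items():
    print(f"{park}: {criterion_score(ratings):.1f} / {len(ratings)}")
```

The same tally can be repeated for the remaining criteria (distinctive characteristics, activities, landscape condition, utilities, cleanliness) to reproduce the per-criterion comparisons reported below.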
Distinctive characteristics
All of the parks lie along Phewa Lake with beautiful views of the lake, mountain range, hills, and forests, making them naturally beautiful. In addition to this, some parks are socio-culturally distinctive, as they provide the setting for various socio-cultural activities.
Activities
Regular park activities like sitting, eating, chatting, watching, and walking are noted in all of the parks, whereas spaces to lie down comfortably are missing in all of them. Special celebrations are held in the parks, except in Machhapuchhre Pratibimba and Damside Park. A boating facility on Phewa Lake is provided only in Damside Park, Komagane Park, and Basundhara Park.
Here, diverse activities can be found in Komagane Park and Basundhara Park, whereas the least number of activities under the criteria is noted in Machhapuchhre Pratibimba Park.
Condition of Landscape Elements
Landscape elements of the parks are assessed in terms of their functional condition. Footpaths are in good condition in Basundhara Park, Yog Park, and Gaurighat Park, but in the other parks the footpaths need maintenance. Except in Yog Park and Gaurighat Park, even though seating spaces are provided, some of them are not in good serving condition. Sheds in Komagane Park and Gaurighat Park are in good condition, but in Basundhara Park one of them is in a dilapidated state. There is no shed structure in the other parks.
Except for Damside Park, sculpture structures can be found in parks and are in good condition.
The boundary fence is in good condition only in Yog Park and Gaurighat Park, whereas in the other parks maintenance is needed. Seating furniture was found insufficient in the two big parks: Komagane Park and Basundhara Park.
Condition of utilities and services
Only Yog Park and Gaurighat Park are provided with full lighting facilities, whereas there is no lighting service in Machhapuchhre Pratibimba Park and only partial provision in the other parks. In all cases, the drainage system is not properly planned; rather, it follows the natural gradient and finally flows to the lake. The washroom facility is not available in Damside Park, but in Basundhara Park it is available within the park premises; in the other cases, visitors can access nearby washroom services (Machhapuchhre Pratibimba Park: the Ministry; Komagane Park: Kedareshwor Mahadevmani Temple; Yog Park and Gaurighat Park: Gaurighat).
4. Conclusions
The importance of quality parks is immeasurable in the context of rapid urbanization. In this research, we have investigated the quality of the existing parks along Phewa Lake by deriving quality assessment criteria from a literature review. In the study area, all the parks are rich in the natural views they offer visitors. However, analyzing the parks in terms of the selected quality parameters reveals that their condition is satisfactory on some but not all criteria. The assessment shows that Gaurighat Park is in a better quality state than the other parks under study, though it too needs improvement on other criteria of assessment. These findings contribute to determining park quality from a variety of perspectives. Results of the quality assessment provide the basis for prioritization, planning, and implementation of resource allocation and improvement initiatives more effectively and efficiently. Finally, this will help to improve overall park quality, and thus the quality of life and the environment.
Although there are many factors to consider when evaluating a park's quality, the study uses only six of them because of time and resource constraints. Besides these assessment criteria, there are other parameters to consider within the physical and sociocultural aspects, and consideration of these other aspects is also crucial for a complete picture. Thus, it is recommended to carry out a detailed study focusing on a single criterion for a particular park or for all the parks along the lake. Similarly, research with additional criteria is suggested to assess the quality of the parks from diverse perspectives.
Figure 1: Location of Parks under study
Figure 8: View from the Park
Figure 9: Newly constructed footpath
(Abbreviations used in the tables: MP-Machhapuchhre Pratibimba Park, DP-Damside Park, KP-Komagane Park, BP-Basundhara Park, YP-Yog Park, GP-Gaurighat Park; Y-Yes, N-No, NA-Not available, PA-Partial.)
Comparison of overall accessibility in parks: All the parks under study are well connected to public transportation, with access roads in good condition, and they are easily walkable from nearby neighborhoods. On the contrary, the provision of parking and of a distinctive entry/exit is missing in Damside Park and Machhapuchhre Pratibimba Park respectively. Both Yog Park and Basundhara Park are universally accessible, universal accessibility is found only partially in Machhapuchhre Pratibimba Park and Komagane Park, and Damside Park is devoid of universal accessibility.
In the case of Komagane and Basundhara Park, new year programs and mass social gatherings during special events can be found. Similarly, International Yoga Day celebrations and other yoga activities can be found in Yog Park. Likewise, communal water taps and temporary exhibitions make Gaurighat Park socio-culturally valuable. On the other hand, no such special socio-cultural activities were found in Machhapuchhre Pratibimba and Damside Park. The presence of religious structures and associated activities in the parks shows their religious characteristics: Nagthan (a place to worship the snake god) in Machhapuchhre Pratibimba Park, the Kedareshwor Mahadevmani Temple accessed through Komagane Park, the Sateshwor Shiva Temple, a statue of Lord Hanuman, and a worship place for Lord Sita-Ram in Yog Park, and Maithan in Gaurighat Park demonstrate their religious identity. Historic characteristics are associated with three parks: Machhapuchhre Pratibimba Park has the history of the dam structure built in 1961 AD; Komagane Park is the result of international cooperation and friendship between the Japanese town of Komagane and Pokhara (2001 AD); and Yog Park is historically important due to the history of its location and the relocation of the Sateshwor Shiva Temple. So, in addition to their natural value, Komagane Park and Yog Park have distinctive characteristics in terms of socio-cultural, religious, and historical value; thus, all the parks have a distinctive identity in one or more ways.
Range of activities offered in parks: The two bigger parks, Komagane and Basundhara Park, provide a wider range of activities than the other parks. Machhapuchhre Pratibimba Park has the fewest activity options, and the remaining parks offer an equal number of activities.
Summary of the condition of hardscape elements: Overall, Gaurighat Park has the best condition of hardscape elements, followed by Yog Park, while the condition in Damside Park is the weakest among all, as that park fulfills only one sub-criterion supporting hardscape quality.
Summary of the condition of softscape: Trees are in good condition in all the parks. Except in Yog Park and Gaurighat Park, shrubs are not in a well-maintained condition. Similarly, flowers are very few and not properly taken care of, except in Gaurighat Park and Yog Park. Ground cover is well maintained in Gaurighat Park and Yog Park, partially maintained in Machhapuchhre Pratibimba Park, and left wild with weed growth in the other cases. Here, the landscape elements of Yog Park and Gaurighat Park are in proper condition, while the condition is relatively poor in Basundhara Park. Both Yog Park and Gaurighat Park have the highest frequency of data supporting quality in terms of the proper condition of the softscape, whereas the condition is weakest in Damside Park, Komagane Park, and Basundhara Park, as these parks meet only one supporting sub-criterion fully and one other only partially.
Summary of the condition of utilities and services: In the case of Yog and Gaurighat Park, there is a drinking water facility, but it is not available in Damside Park; in the case of Machhapuchhre Pratibimba Park and Damside Park, visitors can use nearby drinking water facilities (Machhapuchhre Pratibimba Park-Ministry, Komagane Park-Kedareshwor Mahadevmani Temple, Basundhara Park-facility on its northern boundary). In terms of the provision of proper utilities and services, Yog Park and Gaurighat Park are in better condition than the others, and Damside Park is the least facilitated: the frequency of data supporting the quality of utilities and services is highest for Yog and Gaurighat Park, whereas for Damside Park only one item supports the quality, and only partially, so its condition is the poorest among the parks under study.
Summary of cleanliness: Only Yog Park and Gaurighat Park are found in better cleanliness. In Machhapuchhre Pratibimba Park, cleaning is focused only on the lake side, whereas in Komagane Park it is focused only on the access way used by the temple, where regular cleaning is done by the temple. Damside Park and Basundhara Park are in poor condition regarding cleanliness. Bins are in sufficient numbers only in Machhapuchhre Pratibimba Park, Damside Park, Yog Park, and Gaurighat Park. Overall, Yog Park and Gaurighat Park show better cleanliness than the others, whereas Damside Park and Basundhara Park are on the relatively poorer side.
Overall, Gaurighat Park is found to be of the highest quality, followed by Yog Park, Basundhara and Komagane Park, Machhapuchhre Pratibimba Park, and Damside Park respectively. The park with the highest quality (Gaurighat Park) has good accessibility, properly maintained landscape elements, properly functioning utilities and services, and better cleanliness, whereas the park with the lowest quality (Damside Park) has major problems regarding proper maintenance of the landscape and overall cleanliness.
Table 1: Quality criteria of urban parks
Table 3: Accessibility condition of the parks
Table 4: Distinctive characteristics of the parks
Table 5: Activities inside the parks
Table 6: Condition of landscape elements
Table 7: The proper condition of utilities and services in the parks
Table 8: Cleanliness condition of the parks
Prediction of anemia using facial images and deep learning technology in the emergency department
Background According to the WHO, anemia is a highly prevalent disease, especially among patients in the emergency department. Anemia affects facial characteristics, such as mucous membrane pallor, through its pathophysiological mechanisms, and such changes have been shown to allow anemia detection with the help of deep learning technology. A quick prediction method for patients in the emergency department is important for screening the anemic state and judging the necessity of blood transfusion treatment. Method We trained a deep learning system to predict anemia using videos of 316 patients. All the videos were taken with the same portable pad in the ambient environment of the emergency department. The video extraction and face recognition methods were used to highlight the facial area for analysis. Accuracy and area under the curve were used to assess the performance of the machine learning system at the image level and the patient level. Results Three tasks were applied for performance evaluation. The objective of Task 1 was to predict patients' anemic states [hemoglobin (Hb) <13 g/dl in men and Hb <12 g/dl in women]. The accuracy of the image level was 82.37%, the area under the curve (AUC) of the image level was 0.84, the accuracy of the patient level was 84.02%, the sensitivity of the patient level was 92.59%, and the specificity of the patient level was 69.23%. The objective of Task 2 was to predict mild anemia (Hb <9 g/dl). The accuracy of the image level was 68.37%, the AUC of the image level was 0.69, the accuracy of the patient level was 70.58%, the sensitivity was 73.52%, and the specificity was 67.64%. The aim of Task 3 was to predict severe anemia (Hb <7 g/dl). The accuracy of the image level was 74.01%, the AUC of the image level was 0.82, the accuracy of the patient level was 68.42%, the sensitivity was 61.53%, and the specificity was 83.33%. Conclusion The machine learning system could quickly and accurately predict the anemia of patients in the emergency department and aid in the treatment decision for urgent blood transfusion. It offers great clinical value and practical significance in expediting diagnosis, improving medical resource allocation, and providing appropriate treatment in the future.
Introduction
Anemia is characterized by a hemoglobin concentration below a specified cut-off point; this cut-off point depends on the age, gender, physiological status, smoking habits, and altitude at which the population being assessed lives. Current hemoglobin cut-off recommendations range from 13 to 14.2 g/dl in men and 11.6 to 12.3 g/dl in women (1). Severe anemia is often a sequela of malnutrition, parasitic infections, or underlying disease (2) and is also caused by trauma or other medical conditions such as gastrointestinal hemorrhage. In emergency departments, diseases involving acute blood loss, such as trauma and gastrointestinal hemorrhage, often cause severe anemia and require quick identification and prompt restoration of the circulating volume to save the patient. Without immediate attention, patients will bleed to death from hemorrhagic shock (3). The classic symptoms of anemia are fatigue, shortness of breath, paleness of the mucous membranes, and resting tachycardia (4). Interestingly, several reports have shown that anemia can be qualitatively associated with subjective assessment of the pallor in various parts of the body, such as the conjunctiva, face, lips, fingernails, and palmar creases (5-11). Previous studies have demonstrated that hemoglobin absorbs green light and reflects red light (12); hemoglobin concentration can therefore affect tissue color.
Nowadays, a complete blood count (CBC) is a common way to diagnose anemia. However, blood samples are obtained via invasive venipuncture, which necessitates the presence of professional medical staff and equipment (13-15). In the emergency department or ICU, obtaining information about blood hemoglobin levels is essential to determine whether a patient needs an immediate blood transfusion to save their life (16). Thus, the CBC may not be adequate or fast enough to meet doctors' demands when screening for anemic patients quickly and accurately, especially in mass casualty incidents such as war settings. With the rapid development of technology, noninvasive facial recognition technology has been widely used in medicine, for example in the diagnosis of genetic disorders (17, 18), dermatological diseases (19, 20), and nervous system diseases (21, 22). In recent years, researchers have been studying mucous membrane color changes as a potential biomarker for rapid and reliable anemia diagnosis using facial recognition technology (23-25).
Deep learning (DL), a subfield of artificial intelligence (AI), passes input through a large number of layers of interconnected nonlinear processing units to represent complicated and abstract concepts (26). Deep learning has had numerous important breakthroughs in fields as diverse as speech recognition, image recognition, natural language processing, translation, etc. (27-30). In this study, we will use the deep learning method to extract features from facial images and establish a correlation with the anemic state through layers of training.
Our study aims to determine whether facial images taken under specific circumstances correlate with the anemic state. Researchers such as Dr. Suner and Dr. Collings have developed models to detect hemoglobin concentration from analysis of the conjunctiva (23, 24). Our goal is to develop a model that predicts patients' anemic states from analysis of the entire face in images taken with a portable pad, so that it can promote fast and accurate screening of the anemic state of emergent and severe patients.
Video collection
This was an observational prospective sample study. From October 1, 2021, to April 13, 2022, all videos were collected from patients in the critical care area of the emergency department of Chinese PLA General Hospital First Central Division for any chief complaint. The inclusion criterion was as follows: (1) age 18 years or older. The exclusion criteria were as follows: (1) patients or guardians unwilling to provide written consent, (2) patients suffering from diseases affecting the color of the face other than blood loss (such as jaundice or skin diseases that affect skin color), (3) known hypoxia (SpO2 <90%), and (4) receiving or due to receive a blood transfusion before video collection and blood sample measurement. All patients who participated provided their informed written consent.
All patients were asked to lie supine as comfortably as possible. Videos were made under the ambient indoor stable light with the camera of a pad (AIM75-WIFI). The pad was positioned 40 cm directly in front of the patients' faces to ensure that the whole face was captured on the screen. We shot a 5-s video with the pad placed in front of the faces; then, we rotated the pad 45 degrees to the left and right of the faces, with the distance unchanged, and shot another two 5-s videos. This way, we made a 15-s video of each patient who agreed to participate. The automatic focus was used throughout, and the flash was forbidden. Videos were captured in High Frame Rate 60 format and stored in Mp4 format. The resolution of the videos was 1,280 × 720. The blood sample of each participant was acquired immediately after the video, and hemoglobin measurement was carried out within 30 min of sample acquisition, ensuring that hemoglobin results matched the video analysis. Demographic information for patients included gender, age, admitting diagnosis, and hospital laboratory-reported hemoglobin results (Table 1). All videos and demographic information were collected by a single operator to reduce the variability. Research approval was granted by the Institutional Review Board (IRB) of the Medical Ethics Committee of Chinese PLA General Hospital.
Procedure
This section presents our proposed framework for anemia prediction, as shown in Figure 1. The framework consists of three major modules: the video-image module, face detection network, and anemia prediction network. Patient videos were first fed into a video-image module and converted to images. Then, a face recognition algorithm was performed to produce the detected faces of patients, which were the inputs of the anemia prediction network.
Data pre-processing
As anemia prediction relied on an image classification strategy, video frames were retrieved and saved as images. During shooting, patients could assume different positions. Face correction was performed to correct the face direction. Considering that there were hundreds of frames in a video, we chose to utilize some of them. Images were extracted and stored every 50 frames from videos of all patients, which finally built the dataset used in our experiments. Data augmentation, such as horizontal flipping, zooming, and rotation, was used to reduce overfitting and improve model performance.
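The frame-sampling and augmentation steps just described can be sketched as follows. The file paths, frame step, and augmentation parameter values are assumptions for illustration; the paper specifies only that one frame was kept every 50 frames and that horizontal flipping, zooming, and rotation were used.

```python
# A minimal sketch of the pre-processing step: sample one frame every
# `step` frames from a patient video and save it as an image.
import os
import cv2  # OpenCV

def extract_frames(video_path, out_dir, step=50):
    """Save every `step`-th frame of `video_path` into `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:05d}.png"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Augmentation applied during training (parameter values are assumptions):
from tensorflow.keras.preprocessing.image import ImageDataGenerator
augmenter = ImageDataGenerator(horizontal_flip=True,
                               zoom_range=0.1,
                               rotation_range=15)
```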
In the study, we labeled the data at the patient level. All the patients in the research received a complete blood count test shortly after the video collection to determine whether they were anemic or not. Although one patient might have multiple videos, images extracted from videos have the same label if they belong to the same person. Specifically, if a patient was diagnosed with anemia, all images extracted from that patient's videos were labeled as anemic; otherwise, they were labeled as non-anemic.
Face detection
We used the service offered by Megvii Co., Ltd., known as Face++, as the face recognition and detection solution.
Megvii is a Chinese technology company that mainly focuses on developing image recognition and deep learning software. Megvii manages one of the world's largest research institutes specializing in computer vision, and it is the largest provider of third-party authentication software in the world. Its product, Face++, is the world's largest open-source computer vision platform. The service released on their artificial intelligence open platform can detect and analyze human faces with the provided images. Even patients with different postures or expressions can produce results, which saves us quite a lot of time in annotating and training a face detector from scratch.
We used the Face++ detector to detect faces within images and got back face bounding boxes for each detected face. The values of the bounding boxes were used to crop and extract face regions from the original images. The cropped images were down-sampled and resized to 224 × 224 pixels for subsequent anemia prediction.
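The crop-and-resize step can be sketched as below. The bounding-box dictionary format (top, left, width, height) is an assumption standing in for whatever values the face-detection service returns; it is not the service's documented schema.

```python
# Sketch of cropping a detected face region and resizing it for the
# classifier. `image` is an H x W x 3 array; `box` is a hypothetical
# bounding-box dict, not the detection service's exact response format.
import cv2

def crop_and_resize(image, box, size=224):
    """Crop the face region given by `box` and resize it to size x size."""
    top, left = box["top"], box["left"]
    width, height = box["width"], box["height"]
    face = image[top:top + height, left:left + width]
    return cv2.resize(face, (size, size))
```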
Anemia prediction
While most traditional machine learning methods depend largely on hand-crafted features, deep learning techniques have received considerable attention for various computer science tasks, including classification, object detection, and segmentation. In this study, we compared the classification results of five convolutional neural networks, ResNet50, MobileNet, InceptionV3, EfficientNetB0, and DenseNet121 (Tables 2, 3), before finally choosing InceptionV3 as the proposed model. In addition to data augmentation, applying pre-trained models learned from large-scale datasets, such as ImageNet, was another way to reduce overfitting (31). In this study, we initialized the models for anemia prediction with pre-trained ImageNet weights and fine-tuned them on our own training dataset. Specifically, taking a 224 × 224-pixel image as input, the anemia prediction network was initialized with pre-trained ImageNet weights. We froze the pre-trained layers and trained only the top layers until convergence; for this step, a relatively large learning rate (1e−3) was used. Then, we unfroze all the layers and fine-tuned the entire model.
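A minimal transfer-learning sketch of the workflow described above is shown below: InceptionV3 initialised with ImageNet weights, a new classification head, the base frozen while the head is trained with a learning rate of 1e-3, and then all layers unfrozen for fine-tuning. The head architecture and the fine-tuning learning rate are assumptions, not the authors' exact configuration.

```python
# Sketch, assuming a single-output sigmoid head for the binary
# anemic / non-anemic decision.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pre-trained layers first

output = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
model = tf.keras.Model(base.input, output)

# Step 1: train only the new head with a relatively large learning rate.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
# model.fit(train_ds, validation_data=val_ds, epochs=..., class_weight=...)

# Step 2: unfreeze everything and fine-tune with a smaller learning rate
# (the exact value here is an assumption).
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
```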
Experimental settings
We conducted five-fold cross-validation with different seeds to evaluate our prediction method and reported the mean accuracy and AUC over the five runs of combined test folds. For each experiment, we split the dataset into training and validation sets with a ratio of 7:3 at the patient level, which meant that images extracted from one person did not appear in both the training and the validation set. We set class weights to address the class imbalance problem that emerged in our research. Only the best model was saved, based on monitoring validation accuracy. The model was validated on a holdout set, and the performance was assessed by sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC). All code was implemented in TensorFlow and run on an NVIDIA GeForce RTX 2080 Ti GPU.
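The patient-level split and the class weighting can be sketched as follows. The variable names (`image_paths`, `labels`, `patient_ids`) are assumed to be parallel lists built during pre-processing, and grouping by patient with scikit-learn is one straightforward way to enforce the constraint that no patient's images span both sets; the authors' exact splitting code is not described.

```python
# Sketch of a 7:3 patient-level split plus balanced class weights.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.utils.class_weight import compute_class_weight

def patient_level_split(image_paths, labels, patient_ids, seed=0):
    """Split so that images of one patient never span train and validation."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=seed)
    train_idx, val_idx = next(splitter.split(image_paths, labels, groups=patient_ids))
    return train_idx, val_idx

def balanced_class_weights(labels):
    """Class weights that up-weight the minority class for model.fit()."""
    classes = np.unique(labels)
    weights = compute_class_weight("balanced", classes=classes, y=labels)
    return dict(zip(classes, weights))
```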
Clinician assessment
Two senior emergency department doctors were invited to subjectively assess the validation videos. Doctors were first shown three example videos of anemic patients and three videos of non-anemic patients. Then, all the videos in the validation set were shown randomly to the doctors. Doctors were asked to rank each video as "anemic" or "not anemic." Each doctor's accuracy, sensitivity, and specificity were assessed to show the clinical performance in detecting anemia patients.
Results
Our research recruited 362 patients, and 362 videos of faces were taken. Two experienced physicians were invited to assess the quality of the videos, and 45 videos were removed for a low definition or failure to display a complete face image. Thus, 316 face videos of 316 patients were used in the research, of which 217 patients were diagnosed with anemia based on the results of a complete blood count, and the average hemoglobin concentration was 10.55 g/dl. One hundred ninety-eight male and 118 female patients were included in the research, and the average age was 63.88. Demographic information of the patients is shown in Table 1. Three tasks were performed in the research. Task 1 aimed to predict the anemia of patients (Hb <13 g/dl in men and Hb <12 g/dl in women), while Task 2 aimed to predict the mild anemia of patients (Hb <9 g/dl). Finally, Task 3 aimed to predict the severe anemia of patients (Hb <7 g/dl).
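The three task labels follow directly from the hemoglobin cut-offs stated above (Task 1: Hb <13 g/dl in men and <12 g/dl in women; Task 2: Hb <9 g/dl; Task 3: Hb <7 g/dl). The helper below is an illustrative sketch; the function name and return format are not from the paper.

```python
# Sketch: derive the three binary task labels from Hb (g/dl) and sex.
def task_labels(hb_g_dl, sex):
    """Return binary labels for the three prediction tasks."""
    anemia_cutoff = 13.0 if sex == "male" else 12.0
    return {
        "task1_anemia": hb_g_dl < anemia_cutoff,
        "task2_hb_below_9": hb_g_dl < 9.0,
        "task3_hb_below_7": hb_g_dl < 7.0,
    }

print(task_labels(10.55, "male"))  # e.g. the cohort's mean Hb value
```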
Prediction results were evaluated at two levels for each task: the image level and the patient level. With the help of data augmentation technology, 6,993 images of patients were extracted from the 316 videos and split into training and validation data sets with a ratio of 7:3. At the image level, the prediction results of the model were assessed by accuracy and AUC. Table 2 shows the results of the three tasks at the image level. Figures 2-4 demonstrate the accuracy results for Task 1, Task 2, and Task 3 at the image level, and Figures 5-7 demonstrate the corresponding AUC results. The patient-level results were obtained by aggregating the image-level results and directly reflect the model's prediction ability in the clinical environment. Accuracy, sensitivity, and specificity were used to evaluate the prediction ability at the patient level, as shown in Table 3.
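The paper states that patient-level results aggregate the image-level results but does not specify the aggregation rule; averaging the per-image probabilities for each patient and thresholding at 0.5 is one plausible scheme, assumed here purely for illustration.

```python
# Sketch of one possible image-to-patient aggregation (an assumption,
# not the authors' documented rule): mean probability per patient,
# thresholded to a binary decision.
import numpy as np
from collections import defaultdict

def aggregate_by_patient(patient_ids, image_probs, threshold=0.5):
    """Average image-level probabilities per patient and apply a threshold."""
    grouped = defaultdict(list)
    for pid, prob in zip(patient_ids, image_probs):
        grouped[pid].append(prob)
    return {pid: int(np.mean(probs) >= threshold) for pid, probs in grouped.items()}
```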
To compare the performance of the prediction model and assess its clinical acceptability, we invited two senior doctors from the emergency department to assess each video in the validation set as anemic or not. The performance of each doctor was assessed by accuracy, sensitivity, and specificity, and the comparison between the prediction model and the doctors' predictions is shown in Table 4.
Discussion
Our research used facial images of patients combined with deep learning technology to build a prediction model for the detection of anemia. The research performed three tasks to examine the model's ability to detect patients with varying degrees of anemia. Two comparisons were made to evaluate the prediction model. The first comparison was between our model and Collings's model (23) and Hermoza's model (32). Collings's prediction was based on the conjunctiva and showed 76% accuracy in predicting anemia; Hermoza's model was based on the fingernail and showed 68% accuracy. Meanwhile, our prediction model reached 84.02% accuracy, indicating that the facial-image model showed better prediction ability than these other models. The second comparison was between our model and senior doctors. Both doctors had lower accuracy than the prediction model, scoring 55.23 and 51.46% accuracy, respectively, indicating the promising clinical utility of our model. All the results showed that the model could accurately predict anemic patients from facial images and had relatively good performance for predicting mild and severe anemia. In particular, for severe anemia prediction, our results showed the promising performance of the model as an aid to clinical treatment. The strengths and limitations of each aspect will be discussed in the following paragraphs.
According to the WHO, anemia is a highly prevalent disease, affecting over two billion people worldwide (33). In recent years, rapid screening and diagnosis of anemic patients has become a hot topic. The theoretical support for the research is obvious: anemia may correlate with pallor in various regions of the patient's body, such as fingernail beds, conjunctiva, palmar creases, and so on (7, 9, 10). The rapid development of new devices (34), combined with this theory, has led to great achievements in the noninvasive, quick detection of anemia. Judging from reports on the detection of anemia in recent years, fingernail beds and the conjunctiva are two important regions for detecting anemia: they are short of melanocytes, which can otherwise affect the red light reflected by hemoglobin, so they are of great use in anemia detection. However, it is worth noting that those approaches require good patient compliance, and patients need to perform movements to expose the conjunctiva or their fingernail beds. Although this movement is quite easy for healthy people or patients with mild disease, it is difficult for severely anemic patients or those in a coma, compromising the research findings due to insufficient exposure of the characteristic areas. It can also be learned from previous reports that after the collected data were filtered, it was necessary to eliminate some data whose feature areas were not fully exposed, leading to a reduction of the data (23). In this research, we collected whole facial images for analysis instead of focusing on a specific organ or region of the face and used a deep learning model to automatically extract facial features to make a prediction. To explore the technical details related to the facial features used by the proposed deep learning model, we used a technique called Gradient-weighted Class Activation Mapping (Grad-CAM) (35) to produce visual explanations for decisions of a convolutional neural network (CNN)-based model. Grad-CAM uses the gradients of any target concept flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. The Grad-CAM results in Figures 8-10 demonstrate that the features extracted from the eyes and lips contribute more to the prediction. Multiple facial features extracted from the videos assist the model in achieving better prediction performance compared with models in previous reports. At the same time, the acquisition of facial images often requires only consent from patients or guardians, without specific body positions or movements, which is convenient, reduces the potential harm to the patients, and is more in line with the requirements of medical ethics.
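A minimal Grad-CAM sketch in TensorFlow/Keras, following the standard recipe described above, is shown below: gradients of the class score with respect to the last convolutional feature maps are globally averaged and used to weight those maps. The layer name "mixed10" (InceptionV3's final mixed block) and the assumption that `image` is a preprocessed float32 array are illustrative choices, not details stated in the paper.

```python
# Sketch of Grad-CAM for a Keras model built on InceptionV3.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="mixed10"):
    """Return a [0, 1] heatmap highlighting regions driving the prediction.

    `image` is assumed to be a preprocessed float32 array of shape (224, 224, 3).
    """
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, 0]                        # predicted anemia probability
    grads = tape.gradient(score, conv_out)         # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pool the gradients

    cam = tf.reduce_sum(conv_out * weights[:, tf.newaxis, tf.newaxis, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                       # keep only positive influence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

The resulting low-resolution map is typically upsampled to the input size and overlaid on the face image to visualise which regions, such as the eyes and lips reported above, drive the prediction.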
In this research, the only equipment used was a pad of the kind commonly used by the public, without additional auxiliary equipment. At present, a lot of auxiliary equipment is often used for image acquisition and analysis, such as a flash lamp, calibration card, etc. (36, 37). The use of such auxiliary equipment hinders the practical application of experimental results, because each piece of additional equipment adds an uncontrolled influencing factor to the research. Our research used only a pad to complete all video acquisition work. The videos were analyzed by artificial intelligence, realizing rapid, convenient, and accurate detection of the anemic state, and the comparison results also showed promising clinical utility for the future. Promoting this deep learning-based anemia detection technology, especially in areas short of resources, will enable medical staff to quickly screen and detect the anemic state of patients with fewer resources. Another strength of our research is that all the participants were patients in the critically ill area of the emergency department. Compared to previous research, in which the included patients had clear and simple diagnoses or were even diagnosed only with anemia, the diagnoses of the patients included in our research were more complicated and their conditions more critical, including gastrointestinal hemorrhage, severe trauma, acute myocardial infarction, and other emergencies. Some patients had several diseases in combination; some were even diagnosed with multiple organ dysfunction syndrome (MODS), and anemia was often diagnosed only after a routine blood test. Therefore, the facial changes of these patients were more complicated. In addition to the facial pallor and indifference caused by anemia, the facial changes caused by other diseases, such as the pained face caused by trauma and the face of comatose patients, would impact this research. The facial videos were analyzed by deep learning technology to screen for the features most relevant to anemia, contributing to the facial analysis model and establishing a prediction model of anemia from facial images with high accuracy.
In reality, detecting anemia via facial images is a part of our facial recognition research. In the emergency environment, patients' conditions are often complex and difficult to diagnose.
The core point of triage is how to quickly judge the patient's condition and deploy the most appropriate treatment without wasting medical resources. Currently, the commonly used triage methods include 'Simple Triage And Rapid Treatment' (START), the 'Abbreviated Injury Scale' (AIS), the 'Injury Severity Score' (ISS), and so on (38). Reasonably and accurately applying these triage methods often requires professional and experienced medical staff, a demand that many medical institutions, especially in areas with limited resources, fail to meet (39). Our research has realized the noninvasive, rapid, accurate detection of anemia for urgent cases. In the future, the application of our triage research will favor the quick judgment of patients' conditions to make reasonable triage decisions. Emergency and severely ill patients often exhibit distinct emergency faces, such as the cyanosis and dyspnea of acute airway obstruction, the breathing pattern of dying patients, etc. We can make full use of these characteristics to establish a connection between facial images and the degree of criticality of patients so that we can find a new, rapid, and simple triage method. This will be a very complex but meaningful challenge for us.
Besides diagnosis and triage, our prediction model also has the potential to aid treatment. Many severely injured patients appear on large-scale battlefields or in mass casualty incidents, where they may suffer from traumatic hemorrhagic shock, and timely blood transfusion significantly influences their prognosis (40). However, with limited blood resources, providing transfusion treatment to every patient is impossible. Many patients suffer severe trauma without obvious bleeding, such as closed abdominal or pelvic trauma; when laboratory devices are insufficient, it is difficult to determine whether these patients are anemic and whether they need an urgent blood transfusion. Our research achieved high accuracy in detecting severe anemia (Hb < 70 g/L), which is also the threshold for blood transfusion (41), with the help of a portable device; it could therefore help doctors decide quickly and accurately whether or not to transfuse.
One limitation of this research is that a single pad was used throughout; using different devices or different deep learning technology might change the results and bias the detection of anemia, which will need to be verified in subsequent research. The validation set came from the same hospital, so multicenter validation is an important task for future studies. Our participants were all Chinese, meaning the results might not apply to people of other ethnicities. Analyzing the imperfect results of our research, especially for mild and severe anemia, we attribute the limitations to two factors: the small facial changes in mild anemia and insufficient data. Nevertheless, the promising results of this research convince us that further research with more data will yield better results.
Conclusion
Anemia in ED patients might be diagnosed quickly and accurately by the machine learning prediction model, which would help physicians decide whether or not to administer a blood transfusion. The model offers clinical value and practical significance, expediting diagnosis, improving medical resource allocation, and supporting appropriate treatment in the future.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
AZ: research design and writing. JLo: analysis and model design. ZP: data review and supervision. JLu and HZ: data collection. XZ: data collection and validation. JLi: data review and methodology. LW: data review. XC: data validation. BJ: model design and supervision. LC: research supervision and article review. All authors contributed to the article and approved the submitted version.
H3K4me3, H3K9ac, H3K27ac, H3K27me3 and H3K9me3 Histone Tags Suggest Distinct Regulatory Evolution of Open and Condensed Chromatin Landmarks
Background: Transposons are selfish genetic elements that self-reproduce in host DNA. They were active during evolutionary history and now occupy almost half of mammalian genomes. Close insertions of transposons reshaped structure and regulation of many genes considerably. Co-evolution of transposons and host DNA frequently results in the formation of new regulatory regions. Previously we published a concept that the proportion of functional features held by transposons positively correlates with the rate of regulatory evolution of the respective genes. Methods: We ranked human genes and molecular pathways according to their regulatory evolution rates based on high throughput genome-wide data on five histone modifications (H3K4me3, H3K9ac, H3K27ac, H3K27me3, H3K9me3) linked with transposons for five human cell lines. Results: Based on the total of approximately 1.5 million histone tags, we ranked regulatory evolution rates for 25075 human genes and 3121 molecular pathways and identified groups of molecular processes that showed signs of either fast or slow regulatory evolution. However, histone tags showed different regulatory patterns and formed two distinct clusters: promoter/active chromatin tags (H3K4me3, H3K9ac, H3K27ac) vs. heterochromatin tags (H3K27me3, H3K9me3). Conclusion: In humans, transposon-linked histone marks evolved in a coordinated way depending on their functional roles.
Introduction
Transposons are endogenous mobile components of a genome that can replicate themselves into new genomic locations [1]. Retroelements (REs) form a specific group of transposons that proliferate via an RNA intermediate. H3K4me3, H3K9ac and H3K27ac are histone marks associated with promoters and active chromatin, whereas H3K9me3 and H3K27me3 are heterochromatin-associated histone marks specific for constitutive and facultative heterochromatin, respectively [29]. The presence of H3K27me3 and H3K9me3, therefore, indicates repressed transcriptional activity in neighboring genome regions. Thus, the abovementioned five epigenetic marks form two functional groups: one associated with transcriptional activation (H3K4me3, H3K9ac and H3K27ac) and one associated with transcriptional suppression (H3K27me3 and H3K9me3).
We ranked regulatory evolution rates of human genes and molecular pathways by using high throughput data on five histone marks from the ENCODE project [30] for five human cell lines. Based on the H3K4me3, H3K9ac, H3K27ac, H3K27me3 and H3K9me3 histone tags, we investigated 25,075 human genes and 3121 molecular pathways. As in the previous TFBS analysis, the processes of "amino acids and polyamines metabolism", "lipid metabolism", "detoxication and catabolism of xenobiotics", "sensory perception and neurotransmission" and "immunity linked pathways" showed signs of the fastest regulatory evolution, while the processes "DNA metabolism and chromatin structure linked pathways", "nucleic base metabolism" and "translation and protein maturation" demonstrated the slowest regulatory evolution. However, histone tags showed different regulatory patterns and formed two distinct clusters: promoter/active chromatin tags (H3K4me3, H3K9ac, H3K27ac) vs. heterochromatin tags (H3K27me3, H3K9me3). Our findings suggest that in humans, distributions of transposon-linked histone tags evolved in a coordinated manner according to their functional roles.
Quantitative Scores of Genes and Pathways Regulatory Evolution
To evaluate the RE-associated regulatory impact on individual genes, we introduced a quantitative score (Equation (1)): the gene RE-linked enrichment score (GRE score) of an individual gene is the number of RE-specific hits (histone modification marks) found in the 10 kb neighborhood of its TSS, normalized by the average number of RE-specific hits across all examined genes. For every gene, the GRE enables us to measure its regulation by RE-linked hits. However, this score is limited in that different genes with the same GRE value can have substantially different total numbers of hits (both RE-linked and not) in their TSS neighborhoods, so the GRE score cannot be directly used to compare genes. Therefore, another normalized score is required that shows how strongly the regulatory region of a gene is enriched by RE-linked hits with respect to total hits. To this end, the normalized gene RE-linked enrichment score (NGRE) was proposed. For an individual gene, the NGRE is equal to the ratio of the GRE score to the balanced total number of hits (not only RE-linked) for the gene [13].
Similar scores were proposed to evaluate the impact of hits at the level of molecular pathways [13], termed the pathway involvement index (PII) and the normalized pathway involvement index (NPII). PII reflects the total impact of RE-linked hits on the regulation of an individual molecular pathway: the bigger the PII, the stronger the impact of RE-linked hits on the overall regulation of the pathway, and vice versa. However, PII is not informative enough to estimate RE-linked regulation of a pathway in the context of its total regulation. We, therefore, introduced another score termed NPII (normalized PII), which is needed to estimate the relative RE-linked impact on the regulation of a whole molecular pathway. A higher NPII indicates a stronger relative impact of RE-linked hits on the general regulation of a molecular pathway, and vice versa [13].
GRE and NGRE Scores
Let us assume that the positions of both RE segments and target histone marks within the chromosomal region associated with a gene g are known. Then, for any particular gene g, the GRE score is calculated according to the formula (Equation (1)):

GRE_g = \frac{HES_g}{\frac{1}{n}\sum_{i=1}^{n} HES_i}    (1)

where GRE_g is the GRE score for a gene g; HES_g is the number of RE-linked hits for a gene g; i is the gene index, and HES_i is the number of RE-linked hits for gene i; n is the total number of genes under investigation. The normalized gene RE-linked enrichment score (NGRE score) for a gene g is calculated as follows (Equation (2)):

NGRE_g = \frac{GRE_g}{GHE_g}    (2)

Here the GHE (gene hits enrichment) score characterizes gene-specific hits distribution trends, expressed by the formula (Equation (3)):

GHE_g = \frac{THS_g}{\frac{1}{n}\sum_{i=1}^{n} THS_i}    (3)

where THS_g is the total number of hits mapped in the 10 kbp neighborhood of gene g, i is the gene index, and n is the total number of genes.
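As a minimal illustration, the gene-level scores above can be computed directly from per-gene hit counts. The following Python sketch implements Equations (1)–(3) as reconstructed here; the input arrays (re_hits for HES, total_hits for THS) and the toy values are hypothetical.

```python
import numpy as np

def gene_scores(re_hits, total_hits):
    """Compute GRE, GHE and NGRE scores for every gene.

    re_hits[i]    -- RE-linked histone-mark hits (HES) in the 10 kb TSS neighborhood of gene i
    total_hits[i] -- all hits (THS), RE-linked or not, in the same neighborhood
    """
    re_hits = np.asarray(re_hits, dtype=float)
    total_hits = np.asarray(total_hits, dtype=float)

    gre = re_hits / re_hits.mean()        # Equation (1)
    ghe = total_hits / total_hits.mean()  # Equation (3)
    ngre = gre / ghe                      # Equation (2)
    return gre, ghe, ngre

# hypothetical toy example with five genes
gre, ghe, ngre = gene_scores([3, 0, 7, 2, 4], [10, 5, 12, 8, 9])
print(np.round(ngre, 3))
```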
PII and NPII Scores
Pathway involvement index (PII) is expressed by the formula (Equation (4)):

PII_p = \sum_{i=1}^{n} GRE_i    (4)

where PII_p is the PII score for a pathway p; GRE_i is the GRE score for gene i; n is the total number of genes in pathway p. The normalized pathway involvement index (NPII) is calculated as follows (Equation (5)):

NPII_p = \frac{PII_p}{PGI_p}    (5)

where PII_p is the PII for pathway p; PGI_p is the pathway gene-based index for a pathway p, introduced to assess the impact of total hits (not only RE-linked) on the regulation of molecular pathways. PGI for pathway p is expressed by the formula (Equation (6)):

PGI_p = \sum_{i=1}^{n} GHE_i    (6)

where GHE_i is the GHE score for gene i; n is the number of genes in pathway p.
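Continuing the sketch above, the pathway-level scores aggregate the per-gene values over pathway members. The code below assumes the sum-based forms of Equations (4) and (6) as reconstructed here; the pathway membership indices and toy scores are hypothetical.

```python
import numpy as np

def pathway_scores(gre, ghe, member_idx):
    """PII, PGI and NPII for one pathway, given per-gene GRE and GHE scores."""
    gre = np.asarray(gre, dtype=float)
    ghe = np.asarray(ghe, dtype=float)
    idx = np.asarray(member_idx, dtype=int)

    pii = gre[idx].sum()   # Equation (4): aggregate RE-linked impact
    pgi = ghe[idx].sum()   # Equation (6): aggregate total-hit impact
    npii = pii / pgi       # Equation (5): normalized pathway score
    return pii, pgi, npii

# hypothetical per-gene scores (e.g., the output of the gene-level sketch above)
gre = [0.94, 0.0, 2.19, 0.63, 1.25]
ghe = [1.14, 0.57, 1.36, 0.91, 1.02]

# hypothetical pathway made of genes 0, 2 and 4
print(pathway_scores(gre, ghe, [0, 2, 4]))
```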
The reference human genome assembly (hg19, 2009) was indexed by the Burrows-Wheeler algorithm using BWA software, version 0.7.10 [33]. Concatenation of fastq files with single-end or paired-end reads, alignment to the reference genome and filtering were performed with the BWA, Samtools (version 1.0), Picard (version 1.92), Bedtools (version 2.17.0) and Phantompeakqualtools (version 1.1) software packages. Aligned reads for histone modification marks for every cell line were mapped on the RE sequences annotated by RepeatMasker (version 3.2.7) and downloaded from the UCSC Browser (RepeatMasker table) [31].
After the calculation of GRE, NGRE, PII and NPII statistics for each gene/pathway and cell line, we calculated the average value of each statistic for genes and pathways across all cell lines under analysis.
Gene Expression Data
From the ENCODE database [34] we obtained RNA sequencing gene expression profiles for human cell lines using the following set of filters: "Transcription", "total RNA-Seq", "gene quantifications". For three out of five cell lines of interest, we found 19 experiments containing gene expression data in two technical replicates: 11 experiments for K562 cell line, five for HepG2 and three for GM12878. Accession numbers are shown in Supplementary Table S1.
Enrichment Analysis for Groups of Differential Genes
We performed gene ontology analysis of genes that were either enriched or deficient in epigenetic regulatory marks linked with REs (RRE-enriched and RRE-deficient genes, respectively) by applying DAVID software (version 6.8) [35], using human gene IDs extracted from the UCSC Genome Browser [31]; RRE stands for "RE-linked regulation". As the background parameter for the annotation, we took the entire list of genes in the analysis. Our target functional categories were GO direct terms of biological processes, molecular functions and cellular components derived from the "Functional Annotation Chart" in DAVID. All the direct terms were extracted with the corresponding enrichment score values. To merge DAVID annotation terms into more general categories, we developed a semi-automatic supervised method. The list of sixteen categories was defined in [13] and contained example terms from each category. To attribute a new term to one of the categories, we found the number of shared genes between the term and each of the example terms. The largest intersection size allowed us to determine the preliminary category of the term. When all terms were split into the categories, we manually checked and fixed misclassifications (less than 5% of all cases). As a result, each GO term was assigned to a single category.
To quantify the enrichment of a category for RE-linked regulation, we considered all GO terms in the category and calculated the aggregated p-value by Fisher's method. The obtained p-values reflect the enrichment level; however, the use of any significance threshold is not appropriate in this case. Our approach, like other types of gene set enrichment analysis, has the limitation that the categories are not entirely independent: some genes can correspond to several GO terms. Therefore, p-values cannot be adequately corrected for multiple testing. Considering this fact, we analyzed p-values guided by the empirical rule that the lower the p-value, the more enriched the category.
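As a sketch of this aggregation step, combining the GO-term p-values within one category by Fisher's method can be done with SciPy; the term-level p-values below are hypothetical.

```python
from scipy.stats import combine_pvalues

# hypothetical p-values of the GO terms that fall into one functional category
term_pvalues = [0.03, 0.20, 0.0008, 0.45, 0.06]

# Fisher's method: statistic = -2 * sum(ln p), chi-squared with 2k degrees of freedom
stat, category_p = combine_pvalues(term_pvalues, method="fisher")
print(f"aggregated category p-value: {category_p:.3g}")
```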
Measuring Pathway Enrichment by RE-Linked Hits
Gene architecture data of the molecular pathways were extracted from the following databases: BioCarta [36] (downloaded in March 2015), KEGG [37] (downloaded in June 2015), NCI (https://cactus.nci.nih.gov/ncicadd/about.htm) (downloaded in March 2015), Reactome [38] (downloaded in March 2015) and Pathway Central (Pathway Central 2019) (downloaded in March 2015 from http://www.sabiosciences.com/pathwaycentral.php). Data on molecular pathway structure were downloaded in .xml and .biopax formats from these databases and used in our computational algorithm [19]. We calculated PII and NPII scores for 3121 pathways for every cell line. To attribute pathways to the sixteen functional categories predefined in [13], we applied the same algorithm as for the classification of GO terms. After the primary classification, we manually checked and corrected the categories; each pathway was assigned to a single category.
Enrichment of the categories after the pathway analysis was calculated in the following way. We calculated the EASE score, which is a modification of Fisher's exact test [35]. In our case, the EASE score was calculated according to the following contingency table (Table 1). The EASE method provides a p-value for each category; however, we considered these values in a ranking-like way, because categories were unlikely to be completely independent.

Table 1. Contingency table for the EASE score. X is the number of enriched/deficient pathways in the category, Y is the number of all enriched/deficient pathways, Z is the number of all pathways in the category, and K is the number of all pathways analyzed.

                                                 In the Category    All Pathways
Number of enriched/deficient pathways            X                  Y
Number of not enriched/deficient pathways        Z - X              K - Y
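A minimal sketch of this enrichment test is given below, assuming the EASE convention of reducing the in-category count by one before applying Fisher's exact test, and using the standard exclusive 2×2 layout derived from the same X, Y, Z and K quantities as in Table 1; the counts in the example are hypothetical.

```python
from scipy.stats import fisher_exact

def ease_score(x, y, z, k):
    """EASE-style enrichment p-value for one functional category.

    x -- enriched/deficient pathways in the category
    y -- all enriched/deficient pathways
    z -- all pathways in the category
    k -- all pathways analyzed
    """
    # Exclusive 2x2 layout derived from X, Y, Z, K; the in-category count is
    # reduced by one (EASE convention) to make the estimate more conservative.
    a = max(x - 1, 0)        # enriched/deficient, in the category (penalized)
    b = y - x                # enriched/deficient, outside the category
    c = z - x                # not enriched/deficient, in the category
    d = (k - z) - (y - x)    # not enriched/deficient, outside the category
    _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
    return p

# hypothetical counts for one category
print(ease_score(x=12, y=150, z=40, k=3121))
```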
Combination of Gene- and Pathway-Based Enrichment Scores
To combine the results obtained in the two previous pipelines (Sections 2.3 and 2.4), we applied Fisher's method to aggregate the obtained p-values. We analyzed the results in a qualitative manner rather than the quantitative manner used in previous works [9,13], because of the inevitable drawbacks of enrichment analyses.
For every gene, its GRE score can be used as a measure of the enrichment level by RE-linked hits. In turn, the NGRE score is the GRE normalized by the proportion of total hits (not only RE-linked) overlapping the gene neighborhood. A high NGRE value means a stronger impact of RE-specific regulation on the overall regulation of a particular gene, and vice versa. The GRE and NGRE scores of 25,075 human genes were calculated in this study.
Similarly, PII score reflects enrichment of the molecular pathways by the RE-linked hits, whereas NPII score was designed to estimate normalized impacts of REs in the regulation of molecular pathways. Higher NPII suggests stronger RE-linked regulatory impact for an individual molecular pathway and, consequently, faster evolution of the corresponding pathway regulatory network [13]. The PII and NPII scores of 3121 molecular pathways were calculated.
After computing the GRE, NGRE, PII and NPII statistics for each gene/pathway and every cell line, we averaged the values of each statistic for genes and pathways across all cell lines in the study, to obtain scores that represent several human tissues simultaneously. All cell line-specific and averaged GRE, NGRE, PII and NPII scores are given in Supplementary Table S2.
Correlation between Histone Tags Based on GRE/NGRE and PII/NPII Scores
First, we analyzed how the profiles of histone tags vary across cell lines. For this purpose, we obtained 25 profiles from the ENCODE database (5 tags for 5 cell lines), calculated correlations for each pair of profiles and applied biclustering to the square symmetric matrix of correlations (Figure 1). For each histone tag, we found that the obtained profiles were highly congruent across cell lines, and the tissue-specific component had only a minor impact on the profiles. Therefore, in further analysis, we averaged the values of the GRE, NGRE, PII, and NPII scores across the five cell lines of interest.
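A minimal sketch of this profile-comparison step is shown below, assuming the per-gene scores are stored as columns of a table (one column per tag/cell-line combination, read here from a hypothetical file gre_profiles.csv); plain hierarchical clustering of the correlation matrix is used in place of the biclustering procedure mentioned in the text.

```python
import numpy as np
import pandas as pd
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

# hypothetical input: rows = genes, columns = "tag.cell_line" score profiles
profiles = pd.read_csv("gre_profiles.csv", index_col=0)

corr = profiles.corr()            # pairwise Pearson correlations
dist = 1.0 - corr.to_numpy()      # turn similarity into a distance
np.fill_diagonal(dist, 0.0)       # guard against rounding noise on the diagonal
tree = linkage(squareform(dist, checks=False), method="average")

# cut the dendrogram into two groups; this is expected to separate the
# active-chromatin tags (H3K4me3, H3K9ac, H3K27ac) from the heterochromatin
# tags (H3K9me3, H3K27me3), as reported in the text
groups = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(corr.columns, groups)))
```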
In addition, we observed that the histone marks formed two clear-cut groups with strongly correlated gene distribution profiles for their members (Figure 1). One group contained promoter and active/open chromatin marks (H3K4me3, H3K9ac, H3K27ac), whereas the other group contained inactive (constitutive and facultative heterochromatin, respectively) marks H3K9me3 and H3K27me3. The different functionalities of those two groups were clearly illustrated by correlations of their histone profiles with the gene expression data obtained for the same cell lines (Figure 2). To this end, we extracted the available RNA sequencing profiles from the ENCODE project repository. In total, six profiles were available for the GM12878 cell line, ten for HepG2 and twenty-two for K562. For the MCF-7 and HeLa-S3 cell lines, there were no RNA sequencing data available. In this study, we did not consider microarray hybridization data because RNA sequencing is thought to provide more accurate data and is currently the gold standard approach in high throughput transcriptomic research [40,41]. We observed a clear trend that the profiles for active/open chromatin marks positively correlated with gene expression. In contrast, the inactive (constitutive/facultative) heterochromatin marks showed negative correlations with the expression profiles (Figure 2). This confirmed that the different histone modification tags presented in the ENCODE project database were related to their expected molecular functions. We then analyzed the GRE, NGRE, PII and NPII metrics for the different histone modifications. We visualized the scatterplots for each pair of histone marks and rediscovered two clear-cut groups that display strongly correlated scores (Figure 3). As previously, the histone modifications were divided into two groups: the active/open chromatin marks (H3K4me3, H3K9ac, H3K27ac) and the constitutive/facultative heterochromatin marks (H3K9me3, H3K27me3).
Genes and Molecular Pathways Enriched or Deficient in RE-Linked Regulation
Then, the dependencies in the pairs GRE/NGRE and PII/NPII were analyzed to identify genes and pathways, respectively, whose regulation is enriched or deficient in RE-linked histone tags (RRE-enriched and RRE-deficient genes/pathways). For that purpose, the 25,075 human genes and 3121 pathways in the analysis were examined on scatterplots, with the X axis showing the GRE score for genes or PII for pathways and the Y axis showing the NGRE score for genes or NPII for pathways. These scatterplots allow us to identify genes/pathways that have either high or low RRE impact (Figure 4). Within each scatterplot we fitted a one-parameter linear regression (y = ax) and took the outlier points: 5% of points above the trend line and 5% of points below the trend line, selected by their Euclidean distance to the regression line. We denoted the bottom and top outliers as RRE-enriched and RRE-deficient, respectively (Figure 4).
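A sketch of this outlier-selection step: fit the one-parameter regression y = ax by least squares, compute each point's signed distance to the line, and take the 5% of points lying farthest below and farthest above it (per the convention in the text, the bottom and top outliers correspond to RRE-enriched and RRE-deficient genes, respectively). The toy score values are hypothetical.

```python
import numpy as np

def rre_outliers(gre, ngre, frac=0.05):
    """Split genes into bottom (RRE-enriched) and top (RRE-deficient) outliers."""
    x = np.asarray(gre, dtype=float)
    y = np.asarray(ngre, dtype=float)

    a = (x * y).sum() / (x * x).sum()          # least-squares slope of y = a*x
    signed = (y - a * x) / np.sqrt(a * a + 1)  # signed Euclidean distance to the line

    n_out = max(1, int(frac * len(x)))
    order = np.argsort(signed)                 # ascending: farthest below first
    below = order[:n_out]                      # bottom outliers (RRE-enriched)
    above = order[-n_out:]                     # top outliers (RRE-deficient)
    return below, above

# hypothetical scores for a handful of genes
below_idx, above_idx = rre_outliers(
    gre=[0.2, 1.0, 2.5, 0.8, 3.1, 1.7],
    ngre=[0.1, 1.2, 1.9, 0.9, 4.0, 1.0],
)
print(below_idx, above_idx)
```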
Gene Ontology (GO) Annotation of Top RRE-Enriched and Deficient Genes
We performed Gene Ontology (GO) annotation using DAVID software for ten groups of genes (top and bottom genes for the five human histone tags) and obtained p-values of the EASE enrichment score for each of 826 GO direct terms. Then, we aggregated the GO terms into sixteen broader functional categories formulated according to our previous article [13]. For this purpose, we combined p-values for the terms corresponding to a particular category by Fisher's method. The heatmap representation of the obtained aggregated p-values for all functional categories is shown in Figure 5.
Figure 5. The p-values heatmap for association with RRE (regulatory marks linked with REs) up- or downregulation for each category and each histone tag.
The two previously identified groups of histone marks corresponding to active or inactive chromatin were notably distinguishable in the data for the functional categories. Analysis of the p-values revealed the common enriched categories within each group. For example, for the group of "active chromatin" histone modifications, the categories "lipid metabolism", "electron transfer chain reactions" and "catabolism of xenobiotics" show RRE-upregulation (a high number of RRE-enriched genes). In the same group, the categories "cytoskeleton organization and cell adhesion linked pathways", "DNA metabolism and chromatin structure linked pathways", "nucleic base metabolism", "translation and protein maturation" and "RNA synthesis" show RRE-downregulation (a high number of RRE-deficient genes). The categories "mitochondria linked pathways" and "infection" show ambiguous trends, and no significant up- or downregulation could be detected for the six remaining categories.
In the "inactive chromatin" group, categories "RNA synthesis", "translation and protein maturation", "nucleic base metabolism", "intracellular signaling", "mitochondria linked pathways", "electron transfer Chain Reactions", "DNA metabolism and chromatin structure linked pathways", "cell cycle regulation and apoptosis" and "amino acids and polyamines metabolism" were enriched for the RRE-enriched genes. Other categories showed either ambiguous trends or no significant up-or downregulation Figure 5).
Remarkably, we also found a sort of anti-phase manner in enrichment between the two groups of histone tags ( Figure 5). For example, categories "infection", "DNA metabolism and chromatin structure linked pathways", "translation and protein maturation", "nucleic base metabolism", "mitochondria linked pathways", and "RNA synthesis" were enriched among the RRE-enriched genes in group 2 but at the same time among the RRE-deficient genes of group 1. In contrast, the categories "electron transfer chain reactions", "infection", "catabolism of xenobiotics" and "cytoskeleton organization and cell adhesion linked pathways" were enriched among RRE-enriched genes of both functional groups of histone modifications.
Top RRE-Enriched and Deficient Molecular Pathways
RRE-enriched and -deficient molecular pathways were aggregated into the same sixteen categories for all types of histone modifications investigated. As before, the two functional groups of modifications showed specific and clearly distinct enrichment trends ( Figure 6). However, five categories showed similar enrichment trends in both groups: "carbohydrates metabolism", "electron transfer chain reactions", "catabolism of xenobiotics", "cell cycle regulation and apoptosis" and "cytoskeleton organization and cell adhesion". In contrast, there were six oppositely regulated categories: "perception and neurotransmission", "RNA synthesis", "translation and protein maturation", "lipid metabolism", "immunity linked pathways", "DNA metabolism and chromatin structure linked pathways". For the remaining five categories, the trends were ambiguous or vague for the two compared groups ( Figure 6).
Combined Analysis of Gene and Pathway Level Trends
For the majority of the functional categories investigated, the gene- and pathway-based analytic pipelines yielded concordant results. To combine both types of analyses (gene GRE/NGRE statistics and pathway PII/NPII statistics), we applied Fisher's method to the obtained p-values of gene/pathway enrichment in the categories (Figures 5 and 6) and visualized the resulting p-values in the same manner (Figure 7). A significance threshold was not applicable for this comparison due to the high complexity of the multi-step statistical analysis used here. We focused on the trends that could be observed for the different histone groups (Figure 5). The two previously defined groups of histone tags consistently showed largely different, internally coordinated RRE profiles.
We found two out of the 16 molecular categories enriched by RE regulation in the first group, associated with active/open chromatin, while not affected in the second group, linked with condensed/inactive chromatin. Specifically, these were the "lipid metabolism" and "carbohydrates metabolism" categories. The category "cytoskeleton organization and cell adhesion" was enriched in the first group but showed a contradictory trend in the second group. Three categories were enriched in both groups of histone tags: "electron transfer chain reactions", "catabolism of xenobiotics" and "amino acids and polyamines metabolism". Two categories were enriched in the first group but deficient in the second: "perception and neurotransmission" and "immunity linked pathways". Four categories were deficient in the first group but enriched in the second group: "RNA synthesis", "translation and protein maturation", "DNA metabolism and chromatin structure linked pathways" and "cell cycle regulation and apoptosis". Two categories showed baseline RRE in the first group but were enriched in the second group: "infection" and "intracellular signaling". One category ("mitochondria linked pathways") was oppositely regulated in the first group and upregulated in the second group. Finally, one category showed unclear trends in both groups: "nucleic base metabolism".
As a result, our data showed three similarly and six oppositely regulated categories between the two groups, thus suggesting significant differences between RRE of active and inactive chromatin marks. Notably, we found three categories that were RRE enriched in both groups, but not a single category that would be deficient in both groups. Overall, these results are consistent with our previous findings made on TFBS data that the above sixteen functional categories are strongly regulated by REs [9].
Discussion
In this study, we examined high throughput RE-linked features of gene regulation by histone modifications. One of the major functions of epigenetic regulation of gene expression is thought to be the control of transposable elements, most frequently their repression [42]. Here we performed a systematic analysis of the reciprocal influence of transposable elements on function and evolution of human epigenetic mechanisms. In many previous reports, influence of transposable elements on the development of epigenetic regulatory networks has been documented. For example, as learned from the analysis of evolution of duplicated genes in human DNA, the key factor influencing the regulatory epigenetic landscape is the presence of transposable elements [43].
In this study, we analyzed quantitative characteristics of gene regulation associated with RE-linked histone marks. Another class of transposable elements, DNA transposons, constitutes a significantly smaller fraction of the human genome than REs (only up to 3%) and was most likely not active after the mammalian radiation [4]. However, DNA transposons also impact the regulation of gene expression. For example, the human genome contains several thousand copies of the HsMar1 element TIRs that influence the expression of neighboring genes through epigenetic regulation [44][45][46]. Moreover, these elements also contain a functional silencer influencing the expression of a human gene located nearby [47]. However, the analytic value of DNA transposons in the RetroSpect pipeline for the human genome is limited, as they are not numerous and most known human genes lack their inserts near the TSS. This makes relative measures of regulatory enrichment problematic for both individual genes and molecular pathways. We, therefore, focused on the RE-linked regulation of gene expression, as REs are more abundant and mostly represent more recent inserts than DNA transposons. However, in applications of the RetroSpect pipeline to non-mammalian species, DNA transposons may become useful tags in the case of high content and recent insertional activity in the genomes of interest.
We used molecular data obtained from the ENCODE project for five human cell lines; 1,556,182 histone tags of all types were investigated in total. Five types of histone marks of open/active or condensed/inactive chromatin were studied: H3K4me3, H3K9ac, H3K27ac and H3K27me3, H3K9me3, respectively. The gene-based characteristics were further aggregated into quantitative scores for the molecular pathways. This allowed us to identify top differential genes and molecular pathways enriched or deficient in regulation by RE-associated histone tags.
We worked with human cell lines instead of normal tissues due to the public availability of high-throughput profiles for the target histone marks and of RNA sequencing data for the former. The five cell lines selected for our analysis represent different tissues of the human body. However, we found that the histone tag profiles were highly correlated between the different cell lines, thus suggesting only a minor impact of a tissue-specific component on the data. Nevertheless, the availability of novel epigenetic datasets corresponding to normal human tissues would be extremely desirable for further re-analysis of the data with the RetroSpect pipeline.
For each histone mark, the gene-level analyses yielded two sets of genes (RRE-enriched and RRE-deficient), which were functionally annotated with GO direct terms. We developed a semi-automatic annotation algorithm and obtained a list of 826 GO terms, which were further attributed to sixteen functional categories. The same procedure was applied to the molecular pathway analysis, and 3121 pathways were attributed to the same sixteen categories.
These data were used to interrogate RRE enrichment or deficiency in the sixteen functional categories previously identified as strongly differential in RE regulation according to transcription factor binding site data [13]. Interestingly, the two functionally different groups of histone marks showed markedly different RRE patterns across the above sixteen functional categories. This result confirms the coordinated behavior of histone modifications associated with gene expression variation [48] and further specifies that RE-linked histone modifications are correlated within the "active" and "repressive" groups of marks.
The first group representatives (H3K4me3, H3K9ac, H3K27ac) showed RRE patterns highly congruent with previous observations of TFBS data, e.g., they were enriched in the categories "amino acids and polyamines metabolism", "lipid metabolism", "detoxication and catabolism of xenobiotics", "sensory perception and neurotransmission" and "immunity linked pathways"; at the same time, they were deficient in "DNA metabolism and chromatin structure linked pathways", "nucleic base metabolism" and "translation and protein maturation".
The second group of histone marks showed RRE trends clearly more distinct from the previous TFBS data. Therefore, we hypothesize that RE-linked histone marks with the same functional meaning (activation or inactivation of chromatin) possibly have their evolution coordinated, whereas marks of different functionality probably display opposite evolutionary trends in many functional categories. However, three functional categories that were enriched in both groups of histones here were also RRE-enriched according to previous TFBS data investigations [9,13]: "electron transfer chain reactions", "catabolism of xenobiotics" and "amino acids and polyamines metabolism".
Our data suggest that histone modifications demonstrate more complex trends of regulation by REs than TFBS. Therefore, further investigation of functional genomic marks and their direct comparisons are needed to uncover the high-throughput impact of REs on human gene regulation.
Research on the Comparative Law of Enterprise Criminal Compliance Incentive System
In recent years, in order to deal with the legal risks that enterprises face, the enterprise compliance management system has received more and more attention. Practice in many countries has accumulated a certain amount of experience. In the process of establishing compliance management systems, many countries have worked out a set of incentive mechanisms. From the perspective of comparative criminal law, enterprise compliance incentives are mainly manifested as grounds for reduced sentencing and grounds for non-prosecution. Compared with other countries, China's current enterprise criminal compliance incentive mechanism is still in its start-up stage. Along with the reform of the rule of law and the construction of law-based government, judicial organs are also making various attempts and practical explorations within their respective scopes of authority. In establishing a criminal compliance incentive mechanism, China can draw on the experience of other countries and choose appropriate ways to learn from and transplant it, in order to establish and improve its own enterprise criminal compliance incentive mechanism.
Introduction
As the saying goes, "no interest, no power"; this captures the original driving force behind Western enterprises' early adoption of criminal compliance systems. An effective compliance program makes it possible for an enterprise facing criminal liability to obtain more protection, to reduce penalties, and even to secure deferred prosecution or non-prosecution, thereby gaining more room for development and greatly reducing the risk of corporate collapse. The larger the enterprise, the more it benefits from compliance construction. In recent years, more and more multinational enterprises have made compliance management an important part of their corporate culture. To adapt to the international economic situation, international organizations have begun to formulate targeted "soft law" standards of corporate compliance and to guide member states in adopting and promoting them. In China, financial regulatory authorities and the State-owned Assets Supervision and Administration Commission have likewise begun to issue compliance guidelines of a "soft law" nature, tailored to China's national conditions, to guide enterprises in compliance construction. Foreign enterprises' attention to criminal compliance, together with the experience and lessons they have accumulated, also provides ample space for China to carry out research on criminal compliance.
Enterprise criminal compliance as the basis for deferred prosecution/non-prosecution
In essence, deferred prosecution is a contract between the procuratorial organ and the enterprise. Specifically, the procuratorate gives the enterprise involved a certain amount of time and space, requiring it to pay a certain amount of fines, build a compliance management system, and report regularly. After the period expires, the procuratorate decides whether to file a prosecution based on the enterprise's performance. A deferred prosecution agreement involving criminal compliance first appeared in the American Amur Corporation case in 1993 [1]. In that case, the prosecutor entered into a deferred prosecution agreement with the company and included the compliance management system in the agreement, which was creative at the time. This was followed by a series of memoranda issued by the Justice Department, from which the compliance-based deferred prosecution model gradually emerged and was then widely applied in judicial practice [2]. The deferred prosecution agreement system based on compliance management originated in the United States, and many civil law and common law countries later recognized and established it in different ways [3]. The Supreme People's Procuratorate defined compliance-based non-prosecution in its pilot work on enterprise compliance reform: if the enterprise involved is willing to establish a compliance management system, the procuratorate may order it to correct its criminal behavior, comprehensively improve its compliance management system, and establish sound mechanisms for risk identification and response as well as for investigating violations and holding those responsible accountable. Then, based on the construction of the enterprise's compliance management system, a decision on whether or not to prosecute is made. The pilot work of the Supreme People's Procuratorate has also been reflected in concrete practice. In the "environmental pollution case of a company and Mr. Zhang," the procuratorate took the initiative to investigate the operating conditions of the company, screened whether it met the applicable conditions for enterprise compliance, and actively asked the enterprise involved about its willingness to carry out compliance rectification, laying a good foundation for building the enterprise's compliance management system. Although compliance-based non-prosecution has been used in practice, it still lacks a basis for legal application in China; most directly, it has not been incorporated into the Criminal Procedure Law. Among the kinds of non-prosecution available in China, the procuratorates have adopted the "conditional non-prosecution" mode. In fact, conditional non-prosecution can currently be applied only to juvenile crimes, so its legal application needs to be extended through legislation [4]. Establishing this point is particularly important in civil law countries.
Enterprise criminal compliance as the basis for conviction and sentence
If an enterprise had an effective compliance management system in place before the case arose, it can prove that it has fulfilled its management obligations and plead not guilty on that basis. Such a provision already exists in the British Anti-Bribery Act: when an enterprise is suspected of bribery offenses, it can show that it has set up an anti-bribery mechanism and fulfilled the corresponding obligations, which can serve as a defense [5]. In this way, the enterprise can demonstrate that its compliance management system has been operating effectively and plead not guilty on that ground [6]. In effect, such a practice separates the company's conduct from the employee's conduct. Since the company has an independent will, once it has fulfilled its reasonable obligation to prevent the criminal behavior of the employee, it can be considered to have discharged its duty of care; that is, at the corporate level, the company itself has no criminal intent. Under this premise, requiring the company to pay unconditionally for the employee's behavior would inevitably be unfair to the company and would unbalance rights and obligations at the corporate level. Even if the company is ultimately found to have been involved in criminal acts, the judicial authorities can still reduce its liability on the ground that it has fulfilled certain obligations [7]. This is one way to incentivize companies. Although the enterprise cannot be completely exonerated, mitigation under criminal law may be the decisive factor between survival and collapse for the enterprise, which also shows that the compliance system plays an incentive role in criminal-law mitigation.
The United States Federal Sentencing Guidelines make clear that an enterprise's establishment of an effective compliance system can be treated as a mitigating circumstance. Such groundbreaking provisions have increased enterprises' incentive to establish compliance management systems to prevent criminal behavior: where an effective system has been established in advance and is functioning properly, the punishment can be reduced accordingly [8]. In 2020, the Lanzhou Intermediate People's Court heard the first enterprise criminal compliance defense case, the "Nestle Employee Infringement of Personal Information Case." The court held that a unit crime must be committed for the collective interests of the unit and must be carried out, or decided upon, by the unit's decision-making level. Nestle had expressly prohibited employees from infringing on citizens' personal information, and the perpetrator intentionally violated the company's regulations and committed the criminal act for the sake of his own work performance, which was obviously his personal behavior and should not be attributed to the company. In this case, Nestle proved through its compliance obligations that there was no supervisory fault on the part of the company and that the company did not need to take responsibility for individual employee behavior beyond its control. The court finally found that the company bore no supervisory fault and ruled that the conduct did not constitute a unit crime. Before this case, courts usually inferred the subjective will of the unit from the subjective will of the actor [9]. In this case, the court separated the will of the actor from the will of the company and excluded the unit's subjective intent, which has positive significance.
Thus, in practice, compliance has been considered a sentencing circumstance by the judicial authorities; however, owing to the lag of legislation, it has not been explicitly stipulated. Enterprise compliance can therefore be regarded, at most, as a discretionary sentencing circumstance rather than a statutory one, which limits its impact on the reduction of penalties.
Conclusion
Although corporate compliance is a mode of corporate governance guided by new circumstances and new values, few enterprises will pay attention to it, let alone spend money and time implementing it, if they cannot benefit after implementing criminal compliance. The experience of the United States shows that the implementation of compliance programs under the Foreign Corrupt Practices Act developed in step with criminal-law incentive mechanisms. Other countries have learned from the American experience and have even carried out direct legal transplants. The criminal-law incentive mechanism does not rest on some inscrutable philosophical foundation; it is mainly based on a utilitarian consideration, namely to "let the enterprise off while severely punishing the culpable executives and employees," so as to strike a balance between punishing corporate wrongdoing and avoiding significant collateral losses. Only by separating the behavior of the enterprise from the behavior of its employees in a timely manner can a balance be found between punishing corporate violations and ensuring the survival of the enterprise. In practice, most countries in Europe and the United States have adopted the approach of punishing culpable employees severely while treating leniently enterprises that have implemented an effective compliance system, in order to avoid the consequence that a convicted enterprise can hardly survive, to protect the interests of the enterprise's shareholders and investors, and to safeguard employment as well as the security and stability of transactions. Such a practice is not only reasonable and scientific, but also avoids local economic shocks to the greatest extent and prevents negative impacts on economic development.
In terms of substantive law, we can learn from the legislative and judicial practice of the UK. When judging whether an enterprise constitutes a unit crime, the "independent corporate will" theory can be adopted: the enterprise's will is reflected in documents such as the company's articles of association and internal rules, and in measures such as risk control and crime prevention taken by the company. In addition to considering whether individual actions are carried out in the name of the company and for its collective benefit, courts should also consider whether the company has fulfilled its reasonable management obligations. Extending conditional non-prosecution to corporate crime is a sound approach. Such provisions could cover content of a general character, such as compliance with laws and regulations, industry norms, and business ethics; the establishment of investigation and accountability mechanisms for violations; and regular reporting on the construction of the enterprise compliance management system.
Based on the experience of other countries and the present state of the Chinese legal system, we should not be completely passive and pragmatic if the enterprise compliance system is to become a benign part of our judicial system. As an important aspect of enterprise compliance, the incentive system should be studied and localized on the basis of China's legal system and legal practice. The scientific attitude toward enterprise compliance is to transplant it into China's legal system on the basis of a comprehensive understanding of its basic principles and mode of operation, so that it can take root and germinate in China's institutional soil and gradually become a "living organism" that effectively performs its intended function.
Prehospital triage tools across the world: a scoping review of the published literature
Background Accurate triage of the undifferentiated patient is a critical task in prehospital emergency care. However, there is a paucity of literature synthesizing currently available prehospital triage tools. This scoping review aims to identify published tools used for prehospital triage globally and describe their performance characteristics. Methods A comprehensive search was performed of primary literature in English-language journals from 2009 to 2019. Papers included focused on emergency medical services (EMS) triage of single patients. Two blinded reviewers and a third adjudicator performed independent title and abstract screening and subsequent full-text reviews. Results Of 1521 unique articles, 55 (3.6%) were included in the final synthesis. The majority of prehospital triage tools focused on stroke (n = 19; 35%), trauma (19; 35%), and general undifferentiated patients (15; 27%). All studies were performed in high income countries, with the majority in North America (23, 42%) and Europe (22, 40%). 4 (7%) articles focused on the pediatric population. General triage tools aggregate prehospital vital signs, mental status assessments, history, exam, and anticipated resource need, to categorize patients by level of acuity. Studies assessed the tools’ ability to accurately predict emergency department triage assignment, hospitalization and short-term mortality. Stroke triage tools promote rapid identification of patients with acute large vessel occlusion ischemic stroke to trigger timely transport to diagnostically- and therapeutically-capable hospitals. Studies evaluated tools’ diagnostic performance, impact on tissue plasminogen activator administration rates, and correlation with in-hospital stroke scales. Trauma triage tools identify patients that require immediate transport to trauma centers with emergency surgery capability. Studies evaluated tools’ prediction of trauma center need, under-triage and over-triage rates for major trauma, and survival to discharge. Conclusions The published literature on prehospital triage tools predominantly derive from high-income health systems and mostly focus on adult stroke and trauma populations. Most studies sought to further simplify existing triage tools without sacrificing triage accuracy, or assessed the predictive capability of the triage tool. There was no clear ‘gold-standard’ singular prehospital triage tool for acute undifferentiated patients. Trial registration Not applicable. Supplementary Information The online version contains supplementary material available at 10.1186/s13049-022-01019-z.
The prehospital triage assessment helps dictate the ensuing treatment and/or transportation plan. Triage has a demonstrated mortality benefit, for example, in the setting of ST-segment elevation myocardial infarction (STEMI), stroke and trauma [2][3][4]. Triage is employed repeatedly and used across the spectrum of emergency care delivery: at the time of resource dispatch, upon prehospital personnel arrival at the scene of the patient, and again by staff of the receiving facility [5].
In low-and middle-income countries (LMICs) in particular, prehospital triage may play an even more critical role. It is estimated that more than half of deaths in LMICs are caused by conditions that benefit from prehospital and emergency care. Examples include infectious diseases, complications of pregnancy, cardiovascular disease, road traffic accidents and interpersonal violence [6]. Many LMICs have nascent EMS systems that would benefit from effective triage tools [7][8][9].
As EMS systems develop globally, regardless of country or setting, there is a general paucity of review literature appraising prehospital triage tools currently in use across the world. The only comprehensive search of prehospital triage tools to date is a 2013 systematic review which assessed patient-level outcomes attributable to validated prehospital triage systems [10]. Despite screening over 11,000 unique titles and abstracts, and performing 120 full text reviews, the authors found no studies that met their inclusion criteria, which required the triage tool to undergo direct comparison to an alternate tool or to a no triage arm. Hence, more exploratory studies are needed to better understand the state of prehospital triage across the globe in an effort to inform future EMS research and development.
The primary aim of this scoping review was to identify the breadth and diversity of published prehospital triage tools in use across the world and to understand reasons why these studies were performed. Secondarily, we sought to describe the performance characteristics of these tools to provide recommendations on which tools, if any, may be suitable for adoption in new and developing EMS systems.
Methods
A scoping review was done to systematically map the research on prehospital triage tool development, and to identify any existing gaps in knowledge [11]. The following research questions were formulated: What is known from the literature about triage tools that are being used by EMS providers at the time of single patient care in the prehospital setting globally? How are these tools being studied, and what are their performance characteristics?
Considering our focus on triage in routine EMS care, we did not include mass casualty triage tools given their unique mode of application to sorting patients in the specific circumstance of multi-casualty events. We defined prehospital triage as the algorithmic process undertaken by an EMS provider to sort the undifferentiated patient into an appropriate category based on suspected pathology and level of acuity. Clinical treatment protocols (e.g., step-by-step prehospital asthma treatment), clinical guidelines, and singular technology-dependent triage tools (e.g., electrocardiogram (EKG) for prehospital STEMI triage) are excluded from our definition of triage.
Results were limited to English-language articles published between 2009 and 2019 to include more contemporaneous papers that are more likely to study tools in current use. We reviewed scientifically peer-reviewed published literature; publication types were limited to the primary literature, including observational cohort and interventional studies. Since we sought to perform a direct review of the most robust primary literature studying these tools, we excluded case reports, reviews, systematic reviews, meta-analyses, comments, editorials, letters, and conference proceedings [12]. All results were exported to, and deduplicated in, EndNote X9 (Clarivate Analytics, Philadelphia, PA). Covidence systematic review software (Veritas Health Innovation, Melbourne, Australia) was used for screening and full text review. See Additional file 1 for a list of all database search strategies.
Retrieved articles were independently screened by two trained reviewers (A1, A2), blinded to each other's reviews. During screening, each reviewer read article titles and abstracts to determine if they satisfied inclusion criteria, and to ensure they did not meet any exclusion criteria (see Table 1). Articles were scored as 'yes', 'no', or 'maybe'. Discrepant reviews, or any reviews marked as 'maybe', were adjudicated by a third reviewer (A3).
The final list of included ('yes') articles was divided between the two reviewers (A1, A2) for a full text review and critical synthesis. The full manuscript of each article was reviewed in detail, and if an article was deemed to meet one or more exclusion criteria, then it was excluded with reason(s) provided. Full text review articles were summarized in prose in a paragraph format (see Additional file 2), noting findings of most relevance to the research objectives. Data from articles, including tool name, country, population, primary research question, sample size, and major findings, were also coded in a data charting summary table (see Table 2 for a summary of the top one-third highest quality studies and Additional file 3 for a summary of all studies). The investigators independently appraised, then collectively discussed, all major findings through independent review of the summary paragraphs and tables to reach consensus regarding major themes, key conclusions and recommendations.
The studies included in the final synthesis were assigned a four-tier quality rating (very low, low, moderate, or high) assessed by a customized scale based on the GRADE criteria, which included the study design, number of centers, and sample size (small < 300, moderate 300-1000, or large > 1000) [13]. For example, a very low quality rating was assigned to retrospective observational studies that were single center or with small sample size, and a high quality rating was reserved for interventional, controlled, multi-center studies with large sample sizes. The review protocol is available upon request from the corresponding author.
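To make the rating scheme concrete, the sketch below shows one way such a tier assignment could be implemented. Only the two extreme cases are specified above, so the intermediate cut-offs and scoring weights here are our own illustrative assumptions rather than the authors' actual scale.

```python
def rate_study_quality(design: str, n_centers: int, sample_size: int) -> str:
    """Hypothetical four-tier rating inspired by the customized GRADE-based
    scale described above; the intermediate cut-offs are illustrative
    assumptions, not the authors' published criteria."""
    size_points = 0 if sample_size < 300 else (1 if sample_size <= 1000 else 2)
    design_points = {"retrospective": 0, "prospective": 1, "interventional": 2}.get(design, 0)
    center_points = 0 if n_centers <= 1 else 1
    score = size_points + design_points + center_points  # 0 (weakest) to 5 (strongest)
    if score >= 5:
        return "high"
    if score >= 3:
        return "moderate"
    if score >= 1:
        return "low"
    return "very low"

# A single-center retrospective study of 150 patients -> "very low"
print(rate_study_quality("retrospective", 1, 150))
# A multi-center interventional trial with 5000 patients -> "high"
print(rate_study_quality("interventional", 12, 5000))
```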
Results
A total of 1521 unique articles were retrieved from the database query (Fig. 1). After title and abstract screening, 72 (4.7%) met inclusion criteria, and 1449 (95.3%) were excluded. Of the 72 articles that underwent full-text review, 55 (3.6% of 1521 unique articles) were deemed relevant and included in the full-text qualitative synthesis. Seventeen articles were excluded during full-text review, with reasons cited in Fig. 1. The prose-format synopsis of all 55 articles can be found in Additional file 2, and a summary table of the top third highest quality studies can be found in Table 2. The summary table of all included studies can be found in Additional file 3.
Medical conditions
Of the 55 studies included in our final analysis, 19 (35%) focused on stroke triage, 19 (35%) on trauma triage, and 15 (27%) on triage of general undifferentiated patients. Of the remaining 2 (3%) studies, one addressed infectious disease triage [14] and the other addressed triage of patients with only non-traumatic chief complaints [15].
Location and design of studies
All studies were performed in World Bank designated high income countries, with 23 (42%) in North America, 22 (40%) in Europe, 6 (11%) in East Asia and 4 (7%) in Australia. All studies were prospective or retrospective observational cohort studies with the exception of one small randomized controlled clinical trial on stroke triage by Helwig et al. [16].
Pediatric populations
Twelve studies within the general undifferentiated triage and trauma triage categories included the pediatric population, and four (7%) focused on pediatric patients exclusively. The Rapid Emergency Triage and Treatment System-pediatrics (RETTS-p) tool is used for general undifferentiated pediatric triage in Sweden [17,18]. Magnusson et al. found RETTS-p performance to be moderate, with a sensitivity of 66.7% and specificity of 67.0% for detecting pediatric patients with emergency care need. Two thirds of the children triaged as life threatening or potentially life threatening by RETTS-p were later identified as non-emergent by hospital providers [18]. Pediatric trauma triage studies focused on the Field Triage Decision Scheme (FTDS) for pediatric trauma in the United States [19], and several regional pediatric trauma tools employed in England [20]. Lerner et al. found that the 2011 Field Triage Guidelines for pediatric trauma had an under-triage rate of 34.8% and an over-triage rate of 28.0%, concluding that the current guidelines have an unacceptably high rate of under-triage [19].

Table 1 Article inclusion and exclusion criteria

Inclusion criteria: Prehospital/EMS focused*; In-depth description of triage tool included#; Triage tool/process must be a main focus$
Exclusion criteria: In-hospital focus only; Hypothetical triage tool@; Systematic review/meta-analysis

* The study had to specifically include patient-level prehospital data
# The triage tool must be fully described within the article or through a provided reference. The tool should help the provider arrive at a specific, often binary, triage decision (e.g., transport patient to trauma center or not; label patient as low or high acuity)
$ Assessment of triage outcomes or process must be a stated primary or secondary objective of the study
@ The triage tool is not actively used in prehospital clinical practice, is used for research purposes only, or is in development
Excludes EMS agency prehospital algorithms or protocols used for clinical management en route (e.g., "asthma protocol") or those that rely on a single diagnostic tool such as a fingerstick glucose or EKG to make a triage decision (e.g., "chest pain protocol")
General undifferentiated triage tools
The tools for general undifferentiated triage focused on standardized communication of level-of-acuity assignments between prehospital and emergency department providers. Frequently studied examples include the United Kingdom National Early Warning Score (NEWS), the Canadian Triage and Acuity Scale (CTAS) and the American Emergency Severity Index (ESI), which were originally developed for accurate triage by emergency department providers [21][22][23]. Tools like NEWS incorporate prehospital vital signs and level of consciousness assessments [24,25]; other tools such as CTAS and ESI also include chief complaint, exam findings, and anticipated resource needs as part of the algorithm [26,27]. Most studies on general triage focused on the ability of prehospital providers to use these existing tools. Commonly studied clinical end points included the need for lifesaving interventions within hours of ED arrival and the need for intensive care unit (ICU) admission. There was significant heterogeneity of clinical end points in the articles reporting all-comer triage tools. Consequently, a single triage tool in this group with the best performance metrics could not be identified.
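As an illustration of the general pattern these aggregate tools follow, the sketch below scores a few prehospital observations and maps the total to an acuity level. The thresholds and point values are invented for illustration and are not the published NEWS, CTAS, or ESI criteria.

```python
def acuity_from_vitals(resp_rate, spo2, sys_bp, heart_rate, alert: bool) -> str:
    """Toy early-warning-style aggregation of prehospital observations.
    All thresholds and point values are hypothetical and do NOT reproduce
    NEWS, CTAS, or ESI."""
    points = 0
    points += 2 if resp_rate < 10 or resp_rate > 24 else 0
    points += 2 if spo2 < 92 else 0
    points += 2 if sys_bp < 90 else 0
    points += 1 if heart_rate < 50 or heart_rate > 110 else 0
    points += 3 if not alert else 0
    if points >= 5:
        return "high acuity"
    if points >= 2:
        return "intermediate acuity"
    return "low acuity"

# Example: tachypnoeic and mildly hypoxic but alert patient -> "intermediate acuity"
print(acuity_from_vitals(resp_rate=28, spo2=90, sys_bp=118, heart_rate=102, alert=True))
```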
Stroke triage
From 19 (35%) articles, we found 18 different stroke prehospital triage tools designed to aid with the recognition of acute stroke. The most commonly studied stroke triage tools were the Rapid Arterial Occlusion Evaluation (RACE, n = 5 studies, 26%), Cincinnati Prehospital Stroke Scale (CPSS; n = 4, 21%), Field Assessment for Stroke Triage for Emergency Destination (FAST-ED, n = 4, 21%), and Los Angeles Motor Scale (LAMS, n = 3, 16%). The authors stated that the ultimate aim of prehospital stroke triage is to ensure timely transport of patients with acute ischemic stroke to designated stroke centers that have capabilities for neuroimaging, administration of thrombolytic agents and/or endovascular intervention. According to the authors, these tools also aim to channel patients presenting with stroke mimics, such as hypoglycemia or seizure, away from the major stroke centers to optimize health system resource utilization. Finally, several articles argued that stroke triage tools aim to be easy to use and efficient to administer in the prehospital setting, and to correlate well with the gold standard tools used by in-hospital providers, such as the National Institutes of Health Stroke Scale (NIHSS) [28]. Of the 19 articles assessed, 10 (53%) used the NIHSS as the reference standard of comparison, or as a model from which scales were derived. Clinical end points for the stroke triage studies were diverse and included: detection of large vessel occlusion (LVO), diagnosis of stroke/transient ischemic attack, tissue plasminogen activator (tPA) administration rate, stroke team activation, accurate destination triage decision, and inter-rater reliability. The highest quality studies for stroke triage evaluated the RACE scale. RACE evaluates five items: facial palsy, upper extremity paresis, lower extremity paresis, head and gaze deviation, and aphasia/agnosia, with a total score of 0 to 9. For example, in a large prospective study in Spain, Carrera et al. validated RACE among a cohort of 1822 patients and found a sensitivity of 84% and specificity of 60% for detecting LVO for a RACE score ≥ 5; 35% of the patients with a RACE ≥ 5 had an LVO, compared with 6% of those with a RACE < 5 (p < 0.001) [29]. Jumaa et al. found that RACE ≥ 5 had a sensitivity of 77% and specificity of 75% for LVO eligible for mechanical thrombectomy among a cohort of 1147 patients in the United States [30]. Additional scales that have undergone head-to-head comparisons with RACE, with comparable performance, include the FAST-ED and the CPSS tools [31,32]. The performance characteristics of FAST-ED and CPSS were assessed in 5 (26%) articles. Overall, both have comparable sensitivity (56-83%) and specificity (60-89%) for LVO prediction [31,32].
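For illustration, the sketch below sums the five RACE items into a 0-9 score and applies the RACE ≥ 5 cut-off used in the cited validation studies. The assumed per-item ranges are our reading of the published scale and should be checked against the original reference before any real use.

```python
def race_score(facial_palsy, arm_paresis, leg_paresis, head_gaze_deviation, aphasia_or_agnosia):
    """Sum the five RACE items into a 0-9 score. The per-item maxima
    (2, 2, 2, 1, 2) are assumptions about the published scale."""
    items = [
        min(max(facial_palsy, 0), 2),
        min(max(arm_paresis, 0), 2),
        min(max(leg_paresis, 0), 2),
        min(max(head_gaze_deviation, 0), 1),
        min(max(aphasia_or_agnosia, 0), 2),
    ]
    return sum(items)

def suggests_lvo(score: int, threshold: int = 5) -> bool:
    """RACE >= 5 was the LVO cut-off used in the validation studies cited above."""
    return score >= threshold

print(suggests_lvo(race_score(2, 2, 1, 1, 0)))  # score 6 -> True
```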
Trauma triage: ground EMS
A common objective of included trauma triage articles was to accurately identify injured patients that require emergent transport to designated trauma centers. The studies on trauma triage more consistently used similar end points, including trauma center need, under-triage and over-triage rates, and survival to discharge. Trauma center need was uniformly defined as Injury Severity Score (ISS) > 15, need for urgent surgical intervention, or need for intensive care unit level care.
The majority of trauma triage tools identified are based on the Field Triage Decision Scheme (FTDS) [33], which appears to be the de facto standard in studies originating from the USA. Since its initial publication in 1986, the FTDS has been revised five times: in 1990, 1993, 1999, 2006 and 2011. According to the articles, the FTDS uses stepwise identification of four aspects of clinical presentation involving physiologic criteria, anatomic criteria, mechanism of injury criteria, and special considerations criteria to identify patients requiring transport to a trauma center. Physiologic criteria focus on vital signs and Glasgow Coma Scale (GCS); anatomic criteria include specific severe injury patterns such as penetrating trauma, flail chest and crush injury; mechanism-of-injury criteria focus on high-energy mechanisms such as falls from a specified height, high-speed vehicular crashes, and motorcycle accidents; and special considerations include extremes of age, high-risk comorbidities, burns, pregnancy, and anticoagulated status [33].
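A minimal sketch of the stepwise FTDS logic described above follows. The boolean inputs stand in for the detailed physiologic, anatomic, mechanism-of-injury, and special-considerations criteria, which are collapsed here for illustration; the real scheme attaches different destination recommendations to the later steps.

```python
def ftds_transport_to_trauma_center(physiologic_positive: bool,
                                    anatomic_positive: bool,
                                    mechanism_positive: bool,
                                    special_considerations_positive: bool) -> bool:
    """Simplified stepwise structure of the Field Triage Decision Scheme:
    the patient is checked against each group of criteria in turn, and
    meeting any step triggers transport to (or consideration of) a trauma
    center. Inputs summarize the detailed criteria (vital signs, GCS,
    injury patterns, mechanism, comorbidities, etc.)."""
    for step_positive in (physiologic_positive, anatomic_positive,
                          mechanism_positive, special_considerations_positive):
        if step_positive:
            return True
    return False

# Example: normal physiology but a flail chest (anatomic criterion) -> transport
print(ftds_transport_to_trauma_center(False, True, False, False))
```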
The highest quality study of FTDS performance within this scoping review was conducted by Newgard et al. in 2011; it evaluated the performance characteristics of the 2006 version of FTDS, with a cohort of 122,345 injured patients evaluated and transported by EMS over a 3-year period [34]. Major trauma was defined as ISS > 15, and the overall sensitivity and specificity of the FTDS criteria for identifying major trauma patients were 86% (95% CI 85-87) and 69% (95% CI 68-69), respectively. Triage sensitivity and specificity, respectively, differed by age: 84% and 66% (0 to 17 years); 90% and 64% (18 to 54 years); and 80% and 75% (≥ 55 years). Overall, FTDS appears to have comparatively reduced sensitivity and increased specificity in detection of trauma in elderly patients. Other frequently studied ground EMS tools included the Vittel criteria (France) [35,36], Dutch Field Triage Protocol (Netherlands) [37,38], and Prehospital Index (Canada) [39].
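The performance figures reported in these studies can be reproduced from simple counts. The sketch below computes sensitivity, specificity, and under-/over-triage rates from a two-by-two table; note that the exact definitions of under- and over-triage vary between studies, so the formulas here reflect one common convention rather than a universal standard.

```python
def triage_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Metrics from counts of: major-trauma patients (e.g., ISS > 15) sent to a
    trauma center (tp), minor-trauma patients sent to a trauma center (fp),
    major-trauma patients not sent (fn), and minor-trauma patients not sent (tn).
    Under-triage is taken here as the share of major trauma missed; over-triage
    as the share of trauma-center transports without major trauma."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    under_triage = fn / (tp + fn)   # equals 1 - sensitivity under this convention
    over_triage = fp / (tp + fp)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "under_triage": under_triage, "over_triage": over_triage}

# Hypothetical counts roughly mirroring 86% sensitivity and 69% specificity
print(triage_performance(tp=860, fp=3100, fn=140, tn=6900))
```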
Trauma triage: aeromedical EMS
Three studies focused on the use of trauma triage tools to decide on the utility of helicopter transport [40][41][42]. Brown et al. conducted a US retrospective cohort study of 258,387 trauma patients (16% transported by helicopter, remainder by ground) and found increased odds of survival to discharge for patients transported by helicopter in the following FTDS conditions: GCS < 14 (adjusted odds ratio [aOR] 1.22), respiratory rate < 10 or > 29 (aOR 1.32), penetrating injury (aOR 1.40), or age > 55 (aOR 1.15) [41]. In 2017, Brown et al. investigated the Air Medical Prehospital Triage (AMPT) score, which awards points for low GCS, abnormal respiratory rate, unstable chest wall injury patterns, paralysis, multisystem trauma, or fulfillment of any physiologic plus anatomic criterion from FTDS. The authors found that helicopter EMS increases odds of in-hospital survival by 6.7% for patients with AMPT score ≥ 2 (aOR 1.067; 95% CI 1.040-1.083, p < 0.001, n = 222,827) [42].
Trauma triage: traumatic brain injury
Two studies by Fuller et al. focused on predictive tools for triaging severe traumatic brain injury (TBI) in the field [43,44]. The authors studied the Head Injury Transportation Straight to Neurosurgery study (HITS-NS) triage tool and the London Ambulance Service major trauma triage tool and found that both had poor sensitivity (< 45%) for the detection of severe TBI, raising concern that EMS providers may miss these injuries [43,44].
Simplifying triage tools
While most trauma triage studies investigated performance characteristics of established tools, a subset attempted to identify ways to further simplify tools for EMS providers [35,45]. These studies emphasized the challenges of designing the ideal triage tool: the design must optimize over- and under-triage rates while remaining streamlined and user-friendly to promote widespread adoption.
Discussion
Our scoping review found 55 studies on prehospital triage tools published within the past decade. These tools focused on general undifferentiated, trauma, and stroke populations and all included studies originated from high-income countries. Studies predominantly sought to assess predictive accuracy of the triage tools compared to in-hospital clinical outcomes, and many studied accuracy in simplified versions of existing tools. These published triage tools are generally designed to help prehospital providers determine destination of transport, means of transport and level of acuity. These tools also appear to provide a shared language for prehospital personnel to communicate with other emergency personnel, and assist in identifying vital sign derangements and exam findings across a spectrum of age ranges to differentiate 'acute' and 'non-acute' patients.
Trauma and stroke tools comprised over two-thirds of the included articles, perhaps because of their clinical and health systems significance [46][47][48][49][50][51]. Outcomes for trauma and stroke depend on timely field recognition and are influenced by highly time sensitive interventions that are destination-dependent. Further, trauma and stroke care are regionalized in many high-income countries, therefore correct patient destination decision-making is important to study for trauma and stroke system optimization. Last, both stroke and trauma outcomes are used to drive 'benchmarking' for health system accreditation and funding, which may also drive their importance as a research topic.
In trauma, the US FTDS appears to be the "industry standard" triage tool used, likely reflecting that the majority of our studies were from North America, specifically, the USA [33,52]. As the majority of tools within the trauma triage literature derive from the FTDS, this well-researched tool is a promising starting point for further simplified trauma triage tool development, such as identifying individual components that may predict clinically relevant trauma outcomes [34]. The trauma literature was relatively cohesive in that most studies used common clinical end points, which facilitates comparisons across studies.
In stroke care, while no single tool emerged as the prehospital triage 'gold' standard, the RACE, FAST-ED and Cincinnati Prehospital Stroke scales appear to have the highest quality data supporting their use [29][30][31][32]. The National Institutes of Health Stroke Scale was presented in multiple studies as the gold standard in-hospital tool which was used for comparison [28].
The all-comer triage literature includes a myriad of tools with varying complexity, from those that incorporate vital signs alone (e.g., NEWS), to those with complex diagnostic algorithms incorporating history and exam findings to arrive at a level of acuity designation (e.g., CTAS). No one tool emerged as a clear gold standard, and the authors' use of a wide variety of clinical end points makes cross-comparisons challenging.
Research themes common to these studies include simplifying existing tools so that EMS providers can derive an accurate triage decision efficiently, and identifying the most accurate tool out of the large cadre of tools currently available. Standardized reporting of clinical end points would facilitate this endeavor in future research. Additionally, we noted a paucity of articles researching implementation or assessing end user perspectives [29,51,53], and no studies examined costs associated with triage decisions. Qualitative studies assessing EMS provider perceptions of usefulness of prehospital triage tools, cost analyses, and implementation studies would be helpful to further our understanding of the value provided by prehospital triage tools.
Lastly, all the studies included in this scoping review were performed in a few high-income settings, and the tools may not translate well to other high-income settings or LMICs with a different healthcare configuration, infrastructure and cadres of prehospital providers [54]. Destination decision making would need to be locally determined, especially in LMICs where specialty diagnostic (e.g., computed tomography scanners) and therapeutic resources (e.g., tPA) may be even more scarce. Further, triage tools may need to be tailored based upon regional injury and illness patterns. For example, prehospital triage of obstetric emergencies was notably missing from our review. Jenson et al. performed a systematic review of emergency department (i.e., in-hospital) triage tools in LMICs and identified the South Africa Triage Scale (SATS), the modified Early Warning Score and the Australasian Triage Scale as promising tools that had been validated across multiple studies in LMIC settings [8]. SATS has been implemented in the prehospital setting in South Africa, and studies analyzing real-world performance characteristics, while ongoing, are yet to be published [55]. In 2021, Mould-Millman et al. published a theoretical assessment-based validation study of SATS among EMS providers in South Africa. Among 102 EMS providers who performed triage using clinical vignettes, the final SATS triage color was accurately determined in 56.5% of cases, under-triaged in 29.5%, and over-triaged in 13.1%, demonstrating good inter-rater reliability but poor validity [56].
In recent years, prehospital care has received increased recognition in international health policy. Data extrapolated from the Global Burden of Disease study show that 24 million lives are lost each year in LMICs due to conditions sensitive to prehospital and emergency care. Ischemic heart disease, cerebrovascular accidents, and unintentional injuries are the largest contributors to morbidity and mortality in these settings [6,57]. In 2019, delegates to the 72nd World Health Assembly adopted a resolution to strengthen emergency and trauma care systems and prehospital care was highlighted as an essential component [58]. Prehospital triage tools are a key building block for quality and safety assurance in the development of novel EMS systems [59]. It is our hope that this scoping review has provided a valuable framework for what is known thus far, and that further research will be done to advance the field.
The authors acknowledge the following limitations of this scoping review. First, the review was limited to English-language publications. This may have excluded triage tools published in non-English journals. The review was limited to only peer-reviewed published literature; it is likely that white papers and other non-peer-reviewed papers discuss additional triage tools currently in use. The review protocol was not pre-registered but otherwise followed the PRISMA-ScR recommendations [60]. We included articles with sample sizes of 50 or more cases; this threshold was arbitrary but was intended to select larger studies from which more compelling conclusions could potentially be drawn. Lastly, inherent to this study's design as a scoping review, the authors were unable to draw quantitative conclusions about the performance characteristics of the tools presented.
Conclusions
This scoping review found that the majority of literature on prehospital triage focused on trauma and stroke specifically, with a few reports on triage tools for general undifferentiated patients. Much of this body of work originates from high-income countries. The Field Triage Decision Scheme for trauma, and the Rapid Arterial Occlusion Evaluation for stroke, are especially well-studied tools that may be suitable for emerging EMS systems or serve as good starting points for simplified adaptations in established EMS systems. We found no single universally accepted 'standard' prehospital triage tool. Future research should focus on implementation analysis and real-world application of these tools. Additionally, research efforts should focus on the development of a single universal triage tool that can be adapted for a variety of contexts.
Anti-Müllerian Hormone Inhibits FSH-Induced Cumulus Oocyte Complex In Vitro Maturation and Cumulus Expansion in Mice
Simple Summary: Anti-Müllerian hormone (AMH) is a homodimeric glycoprotein composed of two identical subunits, which inhibits the recruitment of primordial follicles and the development of antral follicles in females. Anti-Müllerian hormone can be used as a diagnostic and prognostic marker for ovarian reserve, superovulation, embryo quality, and conception rate. However, few studies have focused on the effect of AMH on oocyte maturation. In the present study, we found that anti-Müllerian hormone has no effect on the nuclear maturation and cumulus expansion of cumulus oocyte complexes (COCs), whereas it has an inhibitory effect on follicle-stimulating hormone (FSH)-stimulated COCs nuclear maturation and cumulus expansion. These findings expand our knowledge of the functional role of AMH in modulating folliculogenesis.

Abstract: Anti-Müllerian hormone (AMH) is secreted by the ovaries of female animals and exerts its biological effects through the type II receptor (AMHR2). AMH regulates follicular growth by inhibiting the recruitment of primordial follicles and reducing the sensitivity of antral follicles to FSH. Despite the considerable research on the actions of AMH in granulosa cells, the effect of AMH on the in vitro maturation of oocytes remains largely unknown. In the current study, we showed that AMH is only expressed in cumulus cells, while AMHR2 is produced in both cumulus cells and oocytes. AMH had no significant effect on COCs nuclear maturation, whereas it inhibited the stimulatory effects of FSH on COCs maturation and cumulus expansion. Moreover, AMH treatment effectively inhibited the positive effect of FSH on the mRNA expressions of Hyaluronan synthase 2 (Has2), Pentraxin 3 (Ptx3), and TNF-alpha-induced protein 6 (Tnfaip6) genes in COCs. In addition, AMH significantly decreased the FSH-stimulated progesterone production, but did not change estradiol levels. Taken together, our results suggest that AMH may inhibit FSH-induced COCs in vitro maturation and cumulus expansion. These findings increase our knowledge of the functional role of AMH in regulating folliculogenesis.
The clinical application of the studies cited above mostly depends on the expression pattern and biological function of AMH in the ovary, even though little is known about the factors that regulate AMH expression. In females, AMH has been reported to be highly expressed in granulosa cells of preantral and small antral follicles rather than primordial or atretic follicles [20,21], indicating that AMH may play a crucial role in folliculogenesis. The function of AMH has been revealed through studies of AMH transgenic mice [22] and AMH-deficient mice [23]. Despite the lack of an obvious ovarian phenotype in AMH-deficient females, studies demonstrated that AMH could be involved in inhibiting primordial follicle recruitment [24]. These findings were subsequently confirmed by using an in vitro follicle culture system, showing that AMH inhibits the growth of preantral and antral follicles through regulating their sensitivity to FSH [25]. The AMH-induced inhibitory action on follicle growth was mainly the result of reduced granulosa cell proliferation and decreased aromatase activity and estradiol production [26,27]. Additionally, AMH inhibits follicle activation in response to insulin in ovarian cortical fragments from bovine fetal ovaries in late gestation [28].
Besides the expression pattern mentioned above, AMH is expressed in cumulus cells of large and pre-ovulatory follicles [29]. This specific expression pattern suggests that AMH, acting in an autocrine or paracrine manner, may regulate the maturation of the oocyte. However, few studies are available, and they report contradictory findings regarding the regulatory effect of AMH on oocytes. Takahashi et al. [30] observed that AMH inhibits the development of oocyte meiosis in rats, whereas another study indicated that AMH has no effect on oocyte meiosis [31]. Recently, Zhang et al. [32] revealed that AMH has no effect on the in vitro COCs maturation rate in mice, but can improve the blastocyst rate. The reasons for these different observations are unknown but are likely due to differences in species and in vitro culture systems. The objective of the current study is to characterize the expression of AMH and its specific receptor (AMHR2), particularly in oocytes, and to elucidate the regulatory effects of AMH on oocyte maturation, cumulus expansion, and steroidogenesis.
Ethics Statement
All animal experiments in this study were approved by the Scientific Ethic Committee of Huazhong Agricultural University (HZAUMO-2017-052) and were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals of the Research Ethics Committee, Huazhong Agricultural University.
Animals
Immature female Kunming mice, aged 21 days, were purchased from Hubei Disease Control and Prevention Center (Wuhan, China). Animals were housed in an air-conditioned room at a constant temperature of 25 ± 2 °C with 12 h light/dark cycles, with water and food provided ad libitum. In order to obtain more immature oocytes at the germinal vesicle (GV) phase, the female mice were primed with an intraperitoneal injection of 7.5 IU Pregnant Mare Serum Gonadotropin (PMSG; Sansheng Pharmaceutical Corporation, Ningbo, China) and then sacrificed 44 h later by cervical dislocation.
RNA Isolation and Quantitative Reverse-Transcription PCR Analysis
Total RNA in COCs, cumulus cells, and cumulus-free oocytes was extracted according to RNeasy Micro Kit (QIAGEN, Dusseldorf, Germany) instructions. cDNA was synthesized from 1 µg RNA of each sample by QuantiTect Reverse Transcription Kit (QIAGEN, Dusseldorf, Germany). Specific primers were designed using Primer 5.0 and listed in Table 1. Normal PCR was performed to analyze the expression of Amh and Amhr2 in COCs, cumulus cells, and cumulus-free oocytes, and the PCR products were run on 1.2% agarose (Biowest, Nuaillé, France) gel and stained with GelRed (Vazyme, Nanjing, China). To assess the mRNA expression of Fshr, Amhr2, Bmp15, Gdf9, Ptgs2, Has2, Ptx3, and Tnfaip6 genes in COCs, quantitative reverse-transcription PCR (RT-qPCR) was performed following the instructions of the Quantinova SYBR Green PCR Kit (QIAGEN, Dusseldorf, Germany) on a CFX96 real-time PCR detection system (Bio-Rad, Hercules, CA, USA). The reaction conditions were as follows: 95 °C for 1 min, followed by 40 cycles of amplification (95 °C for 10 s, 60 °C for 30 s, and 72 °C for 15 s). Melting curve analysis was performed in the range of 65 °C to 95 °C, in 0.5 °C increments every 5 s. Each sample was run along with a no-template control (NTC). The amplification efficiency of all primers was between 90 and 110%. The relative expression of genes was calculated by the 2^(−ΔΔCT) method [33], and β-actin was used as the internal reference gene.
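As a minimal illustration of the 2^(−ΔΔCT) calculation used here, the sketch below computes relative expression from mean Ct values, taking β-actin as the reference gene; the example Ct values are hypothetical.

```python
def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative mRNA expression by the 2^(-ddCT) method, with the reference
    gene (here, beta-actin) used for normalization. Inputs are mean Ct values."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize treated sample to reference gene
    d_ct_control = ct_target_control - ct_ref_control   # normalize control sample to reference gene
    dd_ct = d_ct_sample - d_ct_control                  # normalize to the control group
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target gene is ~2-fold upregulated vs control
print(relative_expression(24.0, 17.0, 25.0, 17.0))  # -> 2.0
```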
Detection of MPF and cAMP Contents in Oocytes
After culturing COCs in droplets with different treatments for 16 h, COCs were transferred into 200 µL of α-MEM with 0.1% hyaluronidase (Sigma-Aldrich, St. Louis, MO, USA). After brief centrifugation, about 40 oocytes were picked up with a thin glass tube and transferred into 20 µL of PBS with a pH of 7.2-7.4. The samples were disrupted by freezing at −80 °C, and supernatants were collected after centrifugation at 2500 rpm for 20 min. Maturation-promoting factor (MPF) and cyclic adenosine 3',5'-monophosphate (cAMP) contents were detected according to the manufacturer's instructions (Mlbio, Shanghai, China). The intra-assay and inter-assay coefficients of variation for MPF and cAMP were less than 10% and 15%, respectively.
Evaluation of Cumulus Expansion
The ability of AMH to regulate cumulus expansion was analyzed by adding 100 ng/mL rhAMH and 100 ng/mL rmFSH alone or in combination to the COCs in vitro culture system. After 16 h culturing of COCs, the cumulus expansion index (CEI) was calculated according to the previously reported method [34]. Briefly, cumulus expansion can be divided into five levels: grade 0, no cumulus expansion, with oocytes attached to the bottom of the dish; grade 1, only the outermost 1-2 layers of cumulus granulosa cells expanded; grade 2, the outer cumulus granulosa cells expanded radially, and the whole COCs appeared fluffy; grade 3, all layers expanded except the corona radiata; grade 4, all cumulus granulosa cells expanded. CEI = [(number of grade 0 oocytes × 0) + (number of grade 1 oocytes × 1) + (number of grade 2 oocytes × 2) + (number of grade 3 oocytes × 3) + (number of grade 4 oocytes × 4)]/total number of oocytes.
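The CEI formula above translates directly into code; the sketch below computes it from the number of COCs scored at each expansion grade, using made-up example counts.

```python
def cumulus_expansion_index(counts_by_grade):
    """Cumulus expansion index (CEI) exactly as defined above:
    counts_by_grade[g] is the number of COCs scored at expansion grade g (0-4)."""
    total = sum(counts_by_grade.values())
    if total == 0:
        raise ValueError("no oocytes scored")
    return sum(grade * n for grade, n in counts_by_grade.items()) / total

# Example: 2 COCs at grade 1, 5 at grade 3, 3 at grade 4 -> CEI = 2.9
print(cumulus_expansion_index({0: 0, 1: 2, 2: 0, 3: 5, 4: 3}))
```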
Measurement of Estrogen and Progesterone
After 16 h culturing of COCs, culture supernatant was collected for hormone detection. Estradiol and progesterone were measured using the mouse estradiol (E2) ELISA kit (CUSABIO, Wuhan, China) and the mouse progesterone (PROG) ELISA kit (CUSABIO, Wuhan, China) according to the instructions. The intra- and inter-assay coefficients of variation were each less than 15.0% for both estradiol and progesterone.
Statistical Analysis
All data were presented as mean ± SEM (standard error of the mean), and each experiment was conducted at least in triplicate. The cumulus expansion index (CEI) was analyzed with the Kruskal-Wallis test followed by Holm adjustment in SPSS, and all other data were analyzed in the SPSS software package using one-way ANOVA followed by the least significant difference (LSD) test. Differences were considered statistically significant when p < 0.05.
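A minimal sketch of this analysis pipeline in Python/SciPy is shown below; the group measurements are invented for illustration, and the post hoc LSD and Holm-adjusted pairwise comparisons used in the paper are not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical measurements (e.g., MPF content) for the four treatment groups
groups = {
    "control":  np.array([1.02, 0.95, 1.08]),
    "AMH":      np.array([0.99, 1.01, 0.97]),
    "FSH":      np.array([1.45, 1.52, 1.38]),
    "AMH+FSH":  np.array([1.12, 1.08, 1.15]),
}

# One-way ANOVA across the four treatment groups (omnibus test)
f_stat, p_anova = stats.f_oneway(*groups.values())

# Non-parametric Kruskal-Wallis test, as applied to the cumulus expansion index
h_stat, p_kw = stats.kruskal(*groups.values())

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
```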
Expression of Amh and Amhr2 in COCs, CCs and Cumulus-Free Oocytes
As shown in Figure 1, Amh and Amhr2 transcripts were detected in both mouse COCs and CCs. In contrast, Amhr2 but not Amh transcripts were observed in cumulus-free oocytes (Figure 1A,B, Lane 3), suggesting that AMH may exert paracrine effects on oocyte maturation through binding to AMHR2.
Effect of AMH on In Vitro Maturation of Cumulus Oocyte Complexes
The effect of AMH on COCs in vitro maturation was analyzed by treatment with 100 ng/mL rhAMH and 100 ng/mL rmFSH alone or in combination. As shown in Figure 2A, treatment with FSH resulted in the highest oocyte maturation rate, with 93% reaching MII. Treatment with 100 ng/mL rhAMH had no significant effect on COCs maturation when compared to the control group. The maturation rate of COCs in the combined AMH and FSH group was decreased (p < 0.05) when compared to the FSH-alone treatment group, suggesting that AMH could inhibit the promoting effect of FSH on COCs in vitro maturation. Likewise, FSH induced higher mRNA expression of Fshr (Figure 2B, p < 0.05); AMH attenuated this effect, although there was no significant difference between the FSH-alone and FSH-plus-AMH groups. The expression of the Amhr2 transcript was upregulated in all experimental groups (Figure 2C, p < 0.05). Unexpectedly, there were no differences among groups in the mRNA expression of either Bmp15 or Gdf9 genes (Figure 2D,E). Compared with the AMH-alone treatment group, the combination of AMH and FSH reduced the cAMP content (Figure 2F, p < 0.05), while neither AMH alone nor FSH alone significantly changed the cAMP content compared to the control group. In addition, FSH treatment increased the MPF content of oocytes (Figure 2G, p < 0.05), whereas the combined AMH and FSH treatment significantly reduced the FSH-stimulated MPF content (Figure 2G, p < 0.05).
Effect of AMH on Cumulus Expansion of Cumulus Oocyte Complexes
The results showed that the AMH treatment group had no effect on cumulus expansion (Figure 3A-C), while FSH increased cumulus expansion (Figure 3B,C, p < 0.05). Compared with FSH alone, the combination of AMH and FSH resulted in a decrease in the cumulus expansion index (Figure 3B,C, p < 0.05), suggesting that AMH may inhibit the stimulatory effect of FSH on cumulus expansion. RT-qPCR was used to further detect transcripts associated with cumulus expansion in each treatment group. As shown in Figure 4, AMH treatment increased the mRNA expression of Ptgs2 in cumulus cells, whereas the expressions of Has2, Ptx3, and Tnfaip6 genes were unchanged (p > 0.05). In contrast, FSH upregulated the mRNA expressions of Has2, Ptx3, and Tnfaip6 transcripts (p < 0.05). Furthermore, AMH inhibited the stimulatory effects of FSH on Has2, Ptx3, and Tnfaip6 expressions.
The Role of AMH in Regulation of Estradiol and Progesterone
After 16 h culturing of COCs, estradiol and progesterone in the culture supernatant were measured to detect the effect of AMH on steroidogenesis in COCs. The results showed that there were no significant differences in estradiol content among the AMH, FSH, AMH plus FSH and control groups (Figure 5A, p > 0.05). Compared with the control group, there was no significant difference in progesterone content in the AMH-alone treatment group (Figure 5B, p > 0.05), while FSH treatment significantly increased progesterone production (Figure 5B, p < 0.05). Importantly, AMH significantly inhibited the promoting effect of FSH on progesterone levels (Figure 5B, p < 0.05).
Discussion
Several studies have shown that AMH and its receptor are mainly expressed in granulosa cells of non-atretic, preantral, and small antral follicles [20,35]. In the present study, we investigated the expression levels of Amh and Amhr2 in cumulus cells and cumulus oocyte complexes (COCs) as well as cumulus-free oocytes. We found that Amh is exclusively expressed in murine cumulus cells. In contrast, Amhr2 is expressed in both oocytes and cumulus cells. These results are in agreement with a recent study [32], although previous studies reported that oocytes expressed little or no Amh and Amhr2 mRNA [20]. On the other hand, the present results confirmed that Amh remains highly expressed in cumulus cells, which is supported by previous reports in humans, indicating AMH is predominantly expressed in cumulus cells of large antral and pre-ovulatory follicles [29,36].
Considering our observation that Amhr2 is expressed in oocytes, in the current study we investigated whether AMH influences the in vitro maturation of cumulus oocyte complexes (COCs). Notably, contradictory results have been reported concerning the direct effects of AMH on COCs maturation. An early study on the actions of AMH in the ovary indicated that bovine AMH inhibited oocyte meiosis in rats [30], whereas other studies indicated that AMH had no effect on oocyte meiosis in rats [31] and mice [32]. Our result confirmed that AMH has no significant effects on in vitro COCs maturation. It is generally appreciated that FSH supplementation in the maturation medium can enhance in vitro oocyte maturation [37,38]. In this study, we observed that FSH at a concentration of 100 ng/mL improved COCs quality compared to the control group. Interestingly, we also found that AMH suppressed the nuclear maturation of COCs induced by FSH, accompanied by reductions in MPF content and Fshr expression. There are many reports clearly showing that AMH exerts an inhibitory role on follicular sensitivity to FSH [24,25,39] and on FSH receptor expression [39]. Our findings here extend this negative effect of AMH to FSH-induced COCs in vitro maturation.
Cumulus expansion of the cumulus oocyte complexes (COCs) is necessary for meiotic maturation of oocytes and is regulated by endocrine and paracrine factors including FSH [38,40], epidermal growth factor (EGF) [41,42], and insulin-like growth factor 1 (IGF-1) [43]. In this study, when COCs were treated with the combination of AMH and FSH, the stimulatory effect of FSH was significantly inhibited, which is consistent with the COCs maturation results. Ptgs2, Has2, Ptx3, and Tnfaip6 are key genes involved in cumulus expansion; therefore, we further determined the mRNA expression of those genes. Similar to recently published results [38], we found that the stimulatory effect of FSH on cumulus expansion was associated with a marked upregulation of Has2, Ptx3, and Tnfaip6 expression, responses believed to be mediated mainly through protein kinase A (PKA) and EGF pathways [41] as well as an estrogen-signaling pathway mediated by G-protein coupled receptor 30 (GPR30) [38]. However, AMH blocked the stimulatory effect of FSH on the mRNA expression of Has2, Ptx3, and Tnfaip6. These results demonstrate that AMH negatively regulates the biological function of FSH by limiting cumulus expansion.
Some reports demonstrated that AMH inhibits FSH-stimulated estradiol production in human granulosa-lutein cells by decreasing FSH-stimulated aromatase expression [26,27]. Here, we observed that AMH had no effect on basal and FSH-induced estradiol levels in the supernatant of the COCs culture medium. The discrepant observations regarding the effect of AMH on estradiol production may be due to differences in culture materials. Notably, we found that AMH reduced FSH-stimulated progesterone production, which is similar to a previous report showing that AMH inhibited EGF-stimulated progesterone production in human granulosa-luteal cells [44].
Conclusions
The results of this study indicate that AMH alone has no effect on COCs nuclear maturation and cumulus expansion, whereas it inhibits FSH-stimulated COCs maturation, cumulus expansion, and progesterone production. Our findings broaden our understanding of the actions of AMH on the oocyte and of the inhibitory effects of AMH on the biological activity of FSH during follicular development.
Quantum key distribution with non-ideal heterodyne detection: composable security of discrete-modulation continuous-variable protocols
Continuous-variable quantum key distribution exploits coherent measurements of the electromagnetic field, i.e., homodyne or heterodyne detection. The most advanced security proofs developed so far relied on idealised mathematical models for such measurements, which assume that the measurement outcomes are continuous and unbounded variables. As physical measurement devices have finite range and precision, these mathematical models only serve as an approximation. It is expected that, under suitable conditions, the predictions obtained using these simplified models are in good agreement with the actual experimental implementations. However, a quantitative analysis of the error introduced by this approximation, and of its impact on composable security, have been lacking so far. Here we present a theory to rigorously account for the experimental limitations of realistic heterodyne detection. We focus on collective attacks, and present security proofs for the asymptotic and finite-size regimes, the latter within the framework of composable security. In doing this, we establish for the first time the composable security of discrete-modulation continuous-variable quantum key distribution in the finite-size regime. Tight bounds on the key rates are obtained through semi-definite programming and do not rely on a truncation of the Hilbert space.
I. INTRODUCTION
Quantum key distribution (QKD) is the art of exploiting quantum optics to distribute a secret key between distant authenticated users. Such a secret key can then be used as a one-time pad to achieve unconditionally secure communication. First introduced in the 1980s by Bennett and Brassard [1], QKD is now at the forefront of quantum science and technology. By encoding information into the quantum electromagnetic field, QKD enables provably secure communication through an insecure communication channel, a task known to be impossible in classical physics. This contrasts with standard and post-quantum cryptography, which are based on computational assumptions and do not guarantee long-term security. In fact, future advancements in theoretical computer science or computational power (including quantum computing) may jeopardize the security of these schemes.
To travel the route from fundamental physics to future technologies, we need to account for the trade-off between the rate of key generation of the protocol, its security, and the feasibility and robustness to experimental imperfection. The highest standards of security and robustness are those of device-independent QKD, but are achieved at the cost of a reduced key rate. Here we focus on continuous-variable (CV) QKD, within the device-dependent approach, which allows for feasible implementations with much higher key rates. Our goal is to improve the robustness of CV QKD to experimental imperfections and practical limitations. For a recent review of device-independent QKD and CV QKD we refer to Ref. [2].
CV QKD denotes a family of protocols where information is carried by the phase and quadrature of the quantum electromagnetic field. A variety of protocols exist that differ in how the quadratures encode this information [3][4][5][6][7]. However, when it comes to decoding, all CV QKD protocols exploit coherent measurements of the field, i.e., either homodyne or heterodyne detection [8]. The strategic importance of CV QKD indeed relies on this choice of measurement, as homodyne and heterodyne detection are mature, scalable, and noise-resilient technologies. This is in contrast with discrete-variable architectures, which require bulky, high-efficiency, and low-noise single-photon detectors [9].
When modeling a CV QKD protocol, it is customary to describe its measurement outcomes as continuous and unbounded variables. In these models, homodyne detection measures one quadrature of the field, and heterodyne detection provides a joint measurement of both quadrature and phase [8]. These simplified models are powerful mathematical tools due to their continuous symmetry. Two fundamental theoretical results rely on this symmetry: the optimality of Gaussian attacks [10][11][12] and the Gaussian de Finetti reduction [13]. However, this symmetry is not exact and is broken by real-world physical devices. In fact, in actual experimental implementations, homodyne and heterodyne detection yield digital outcomes and have a finite range [14,15]. While it is expected that, in some limit, the idealised measurement models describe actual physical devices well, up to now a quantitative analysis of this approximation was lacking. In particular, it was not known how to quantify the impact of these non-idealities on the secret key rate.
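A simple way to picture these non-idealities is to model the detector as reporting the ideal continuous outcome clipped to a finite range and rounded to a finite set of values. In the sketch below, the range and number of bins are arbitrary illustrative choices, not parameters taken from this work.

```python
import numpy as np

def digitize_heterodyne(q, p, max_amplitude=5.0, n_bins=2**8):
    """Map ideal continuous heterodyne outcomes (q, p) to the discrete, bounded
    outcomes of a realistic detector: values are clipped to the interval
    [-max_amplitude, max_amplitude] and assigned to one of n_bins levels per
    quadrature. Range and bin number are illustrative assumptions."""
    step = 2 * max_amplitude / n_bins
    def quantize(x):
        # clip to the detector range, then return the center of the containing bin
        x_clipped = np.clip(x, -max_amplitude, max_amplitude - step)
        return np.floor((x_clipped + max_amplitude) / step) * step - max_amplitude + step / 2
    return quantize(q), quantize(p)

# Example: an ideal outcome outside the detector range is saturated and binned
print(digitize_heterodyne(7.3, -0.02))
```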
In this work we finally fill this important conceptual gap and present a theory to quantify the secret key rates obtained in actual QKD protocols that exploit actual measurement devices. Up to now, only a handful of results were available in this direction. Furrer et al. considered digitalised homodyne for a protocol based on distribution of entangled states [6], and Matsuura et al. considered a binary encoding using coherent states, homodyne detection, and a test phase exploiting heterodyne [7]. However, in both cases the key rates do not converge to the asymptotic bounds obtained in Refs. [13,[16][17][18], which are believed to be optimal for ideal detection. In contrast, our results converge to these optimal bounds when the non-idealities are sufficiently small.
We focus on discrete-modulation (DM) protocols, where the sender prepares coherent states whose amplitudes are sampled from a discrete ensemble. We establish the security against collective attacks in both the asymptotic and non-asymptotic regime, the latter within the framework of composable security [19]. This contrasts with previous works on DM CV QKD [16][17][18][20], which only considered the asymptotic limit of infinite channel uses. Our composable security proof allows us to quantify the security of QKD in the practical scenario where the number of signal exchanges is finite, and QKD is used as a subroutine of an overarching cryptography protocol. Although collective attacks are not the most general attacks, they are known to be optimal, up to some finite-size corrections, through de Finetti reduction [13,21,22]. While we focus on heterodyne detection, the same approach may also be applied, with some modifications, to homodyne detection.
II. STRUCTURE OF THE PAPER AND SUMMARY OF RESULTS
We introduce DM CV QKD with non-ideal heterodyne detection in Section III and review its asymptotic security in Section IV. We discuss using a data-driven approach to approximate infinite-dimensional states with ones with finite-dimensional support in Section V, and in Section VI calculate corresponding corrections to our secret key rate by using a continuity argument.
We bound the secret key rates in three different settings with increasing complexity, where in each setting we find the optimal values using linear semi-definite programming. In the first setting (Section VII), the semi-definite programs are still over infinite-dimensional quantum states, and knowledge of their optimal values would allow one to determine the secret key rate in the asymptotic limit. In the second setting (Section VIII), we map the infinite-dimensional semi-definite programs of Section VII into finite-dimensional ones, the latter of which can be solved numerically without truncating the Hilbert space. This gives us a way to exactly numerically evaluate the secret key rate in the asymptotic limit. In the third setting (Section IX), within a composable security framework, we include the finite-size corrections that arise when only a finite number of signals is exchanged, which yields non-asymptotic key rates. In Section X we apply the theory to QPSK encoding and present examples of the resulting secret key rates. These examples suggest that, in the limit of vanishing non-idealities in heterodyne measurement and growing number of channel uses, the secret key rate of DM CV QKD approaches the highest rate possible. Conclusions and potential future developments are discussed in Section XI.

TABLE I: Comparison with previous works that also presented a security analysis of CV QKD.
Reference   Encoding  Composable  Heterodyne  Key rate
Ref. [13]   CM        Yes         Ideal       Exact
Ref. [17]   DM        No          Ideal       Approx.
Ref. [18]   DM        No          Ideal       Approx.
Ref. [20]   DM        No          Ideal       Exact
Ref. [16]   DM        No          Ideal       Exact
This work   DM        Yes         Realistic   Exact

Table I compares our results with previous works that also presented security analysis of CV QKD protocols. We only consider works that obtained a tight estimation of the key rate. The encoding of classical information in quantum signals may happen through either a continuous modulation (CM) or a discrete modulation (DM). In this work we consider DM, which reflects what is actually done in experiments. We obtain our security proof within the framework of composable security, which is the gold standard in cryptography; composable security permits a quantitative assessment of the security of QKD, including when the QKD protocol is a subroutine of an overarching communication protocol. We consider a realistic model of actual heterodyne detection, instead of the ideal model used in previous works. Our numerical calculation of the lower bound on the secret key rate is exact, as we do not need to impose an arbitrary cutoff of the Hilbert space.
III. THE MODEL
We consider one-way QKD where one user (conventionally called Alice) prepares quantum states and sends them to the other user (called Bob), who measures them by heterodyne detection. The transmission is through an insecure quantum channel that may be controlled by an adversary (called Eve). This general scheme defines a prepare & measure (PM) protocol. In this work we focus on DM CV QKD protocols where, on each channel use, Alice prepares a coherent state |α_x⟩ whose amplitude is sampled from an M-ary set {α_x}_{x=0,...,M−1} with probabilities P_x. This defines Alice's M-ary random variable X. An example is quadrature phase shift keying (QPSK), obtained for M = 4 and setting α_x = α i^x, P_x = 1/4.
In order to prove the security of these protocols, we need to consider a different, though formally equivalent, scenario where a bipartite quantum state ρ AB is distributed to Alice and Bob, of which Eve holds a purification. This kind of setting defines an entanglement-based (EB) protocol. It is sufficient to prove the security of the EB protocol, from which the security of the PM protocol follows. In the EB protocol, the state ρ is a two-mode state, where a, a † and b, b † are the annihilation and creation operators for Alice and Bob, respectively. The EB representation of DM CV QKD protocols is discussed in detail in Ref. [16]. In this work we focus on collective attacks, which are identified by the assumption that, over n uses of the quantum channel, the state factorises and has the form ρ ⊗n AB . In the following, we indicate as ρ B = Tr A (ρ AB ) the reduced state on Bob side. To make the notation lighter, we will sometimes drop the subscripts AB or B when the the meaning is clear from the context.
On the receiver's side, Bob measures by applying heterodyne detection. Ideally, heterodyne detection is a joint measurement of the field's quadrature (q) and phase (p), whose output can be described as a complex variable β = (q + ip)/√2. Ideal heterodyne detection, applied on a state ρ, would yield a continuous and unbounded output, with probability density (1/π)⟨β|ρ|β⟩, where |β⟩ is the coherent state of amplitude β. In contrast, actual experimental realisations of heterodyne detection have measurement outcomes that are confined to a finite region in phase space, β ∈ R(R), and hence have finite range. Here we assume that the region R(R) is defined by the condition q, p ∈ [−R, R], for some R > 0. Furthermore, the measurement outputs are digital, such that each quadrature takes d values, with each value corresponding to a unique log d-bit string. This is obtained by binning the values of q ∈ [−R, R] into d non-overlapping intervals. For simplicity, we consider intervals of equal size, I_j = [−R + 2R(j−1)/d, −R + 2Rj/d), for j = 1, ..., d. The output j is then associated to the event q ∈ I_j, which, in turn, we identify by the central value q_j of the interval. The same digitisation, when applied to both q and p, yields a description of actual heterodyne detection as a measurement with d² possible outputs. This defines Bob's variable Y, which is a discrete random variable and assumes d² values. These discrete values can be conveniently labeled using the central points of each interval, i.e., β_jk = (q_j + i p_k)/√2. If Bob obtains the average state ρ_B, then the probability of measuring β_jk is P_jk = (1/π) ∫_{β ∈ I_jk} d²β ⟨β|ρ_B|β⟩, where the complex interval I_jk is defined in such a way that β ∈ I_jk if and only if q ∈ I_j and p ∈ I_k, and d²β = (1/2) dq dp. Finally, there is a non-zero probability of an inconclusive measurement, when the amplitude lies outside the measurement range.
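As an illustration of the binned measurement statistics just described, the following Python sketch (our own, not code from the paper) computes P(j,k|x) for a coherent state sent through a Gaussian loss channel with excess noise, the channel later used in the examples. It assumes that each quadrature outcome of heterodyne detection on such a state is Gaussian with variance 1 + u; the function names are ours.

```python
import numpy as np
from scipy.stats import norm

def quadrature_bin_probs(mean, var, R, d):
    """P(j|x) for one quadrature: a Gaussian with the given mean and variance,
    integrated over each of the d equal-size intervals of [-R, R]."""
    edges = np.linspace(-R, R, d + 1)
    return np.diff(norm.cdf(edges, loc=mean, scale=np.sqrt(var)))

def heterodyne_distribution(alpha_x, eta, u, R, d):
    """P(j,k|x) for realistic heterodyne detection with range R and d bins per
    quadrature; the leftover probability corresponds to inconclusive events."""
    q_mean = np.sqrt(2 * eta) * alpha_x.real   # q = sqrt(2) Re(beta)
    p_mean = np.sqrt(2 * eta) * alpha_x.imag
    pq = quadrature_bin_probs(q_mean, 1.0 + u, R, d)
    pp = quadrature_bin_probs(p_mean, 1.0 + u, R, d)
    P = np.outer(pq, pp)                       # product form P(j|x) P(k|x)
    return P, 1.0 - P.sum()                    # conclusive bins, out-of-range weight

# QPSK symbol x = 0 with |alpha| = 0.5, 3 dB of loss, small excess noise.
P, P_out = heterodyne_distribution(0.5 * np.exp(1j * np.pi / 4), 0.5, 0.001, R=6.0, d=16)
print(P.shape, P.sum(), P_out)
```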
IV. ASYMPTOTIC SECURITY OF CV QKD
In the limit that n → ∞, the secret key rate (i.e., the number of secret bits that can be distilled per transmission of the signal) is given by the Devetak-Winter formula [23]: r = ξ I(X;Y) − χ(Y;E)_ρ, where I(X;Y) is the mutual information between Alice and Bob, and χ(Y;E)_ρ is the Holevo information (quantum mutual information) between Bob and Eve (here we assume reverse reconciliation on Bob's data, which is optimal for long-distance communication). The factor ξ ∈ (0, 1) accounts for the sub-unit efficiency of error correction. While I(X;Y) only depends on X and Y, χ(Y;E)_ρ also depends on the quantum information held by Eve, which in general cannot be estimated directly. Fortunately, the property of extremality of Gaussian states [11,12] allows us to write the upper bound χ(Y;E)_ρ ≤ f_χ(γ_A, γ_B, γ_AB), where f_χ is a known function of the covariance matrix (CM) elements (see Appendix A). In conclusion, estimating the CM suffices to obtain a universal upper bound on the Holevo information, which holds for collective attacks in the limit of n → ∞. The asymptotic key rate is thus bounded as r ≥ ξ I(X;Y) − f_χ(γ_A, γ_B, γ_AB). Since f_χ is an increasing function of γ_A and γ_B, and a decreasing function of γ_AB [24], estimating upper bounds on γ_A, γ_B and a lower bound on γ_AB suffices to bound the asymptotic key rate. In practical realisations of CV QKD, where the parameter γ_A is known by definition of the protocol, one only needs to bound γ_B and γ_AB.
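The following sketch shows how such an asymptotic bound can be evaluated once the covariance matrix is known. It is our own illustration, not the paper's Appendix A: we assume shot-noise units in which the vacuum quadrature variance is 1, the standard symplectic-eigenvalue expressions, and the usual form of the Gaussian-extremality bound for reverse reconciliation with heterodyne detection on Bob's side; the paper's f_χ may differ in normalisation.

```python
import numpy as np

def g(x):
    """Von Neumann entropy (in bits) of a thermal state with mean photon number x."""
    if x <= 0:
        return 0.0
    return (x + 1) * np.log2(x + 1) - x * np.log2(x)

def symplectic_eigs(gamma):
    """Symplectic eigenvalues nu_+, nu_- of a two-mode covariance matrix."""
    A, B, C = gamma[:2, :2], gamma[2:, 2:], gamma[:2, 2:]
    delta = np.linalg.det(A) + np.linalg.det(B) + 2 * np.linalg.det(C)
    root = np.sqrt(max(delta**2 - 4 * np.linalg.det(gamma), 0.0))
    return np.sqrt((delta + root) / 2), np.sqrt((delta - root) / 2)

def holevo_bound(gamma):
    """Upper bound on chi(Y;E) for reverse reconciliation, heterodyne on Bob's mode."""
    nu_p, nu_m = symplectic_eigs(gamma)
    A, B, C = gamma[:2, :2], gamma[2:, 2:], gamma[:2, 2:]
    # Alice's CM conditioned on an ideal heterodyne measurement of Bob's mode.
    gamma_cond = A - C @ np.linalg.inv(B + np.eye(2)) @ C.T
    nu_0 = np.sqrt(np.linalg.det(gamma_cond))
    return g((nu_p - 1) / 2) + g((nu_m - 1) / 2) - g((nu_0 - 1) / 2)

def devetak_winter(I_XY, gamma, xi=0.97):
    """Asymptotic key-rate lower bound r >= xi * I(X;Y) - f_chi(CM).
    A negative value means no key can be distilled from these parameters."""
    return xi * I_XY - holevo_bound(gamma)

# Toy example: a symmetric two-mode CM with quadrature variance v and correlation c.
v, c = 1.5, 0.6
gamma = np.block([[v * np.eye(2), c * np.diag([1, -1])],
                  [c * np.diag([1, -1]), v * np.eye(2)]])
print(devetak_winter(I_XY=0.2, gamma=gamma))
```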
V. PHOTON-NUMBER CUTOFF
The technical difficulties in the analysis of CV QKD are due to the fact that the quantum information carriers reside in a Hilbert space with infinite dimensions. To overcome this issue we need to impose a cutoff in the Hilbert space. As we do not want to impose such a cutoff in an arbitrary way, we follow a data-driven approach. Define the following operators on Bob's side: and where |n is the Fock state with n photons. Renner and Cirac noted that [22] From the experimental data, Bob can estimate the probability P 0 (R) as in Eq. (5). Note that from which we obtain This shows that the probability that Bob receives more than 2R 2 photons is no larger than 2P 0 (R). The gentle measurement lemma [25] then yields where is a normalised state with finite-dimensional support, and is the projector onto the subspace with up to N = ⌊R 2 ⌋ photons, and · 1 is the trace norm. In conclusion, though ρ is generic, an experimental estimation of the probability P 0 (R) allows us to determine the proximity of ρ to a state with finite-dimensional support.
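The following numerical toy example (ours, not the paper's) illustrates the data-driven cutoff: for a state with small weight above N photons, the renormalised truncated state is close in trace norm to the original, with the distance controlled by the tail weight. The 2√(tail) comparison uses one common form of the gentle measurement lemma and may differ from the paper's exact constant.

```python
import numpy as np

def coherent_vec(alpha, dim):
    """Fock-basis amplitudes of a coherent state, truncated at `dim` levels."""
    c = np.zeros(dim, dtype=complex)
    c[0] = np.exp(-abs(alpha) ** 2 / 2)
    for n in range(1, dim):
        c[n] = c[n - 1] * alpha / np.sqrt(n)
    return c

def truncate(rho, N):
    """Project onto the subspace with at most N photons and renormalise."""
    proj = np.zeros_like(rho)
    proj[: N + 1, : N + 1] = rho[: N + 1, : N + 1]
    return proj / np.trace(proj).real

alpha, dim, R = 0.5 + 0.5j, 60, 3.0
N = int(np.floor(R ** 2))                      # cutoff suggested by the range R
c = coherent_vec(alpha, dim)
rho = np.outer(c, c.conj())
tail = 1.0 - np.sum(np.abs(c[: N + 1]) ** 2)   # probability weight above N photons
tau = truncate(rho, N)
trace_norm = np.sum(np.abs(np.linalg.eigvalsh(rho - tau)))
print(f"tail = {tail:.3e}, ||rho - tau||_1 = {trace_norm:.3e}, "
      f"2*sqrt(tail) = {2 * np.sqrt(tail):.3e}")
```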
VI. CONTINUITY OF THE HOLEVO INFORMATION
In the EB representation, the two-mode state ρ_AB is measured, on Bob's side, by heterodyne detection. In general, ρ_AB resides in a Hilbert space with infinite dimensions. However, as discussed above, it is close in trace norm to the state τ_AB in Eq. (18). Note that τ_AB has support in a space with M × (⌊R²⌋ + 1) dimensions.
The Holevo information is a continuous functional of the state. By applying the continuity bound of Shirokov we obtain [26] where (in this paper we put log ≡ log 2 , and ln denotes the natural logarithm) This implies that, by paying a small penalty in the key rate, we can replace ρ with the finite-dimensional state τ . We thereby obtain the following bound on the asymptotic key rate, By comparing with Eq. (11), we note that this bound depends on the CM of τ . However, τ is only a mathematical tool and does not describe the state that is prepared and measured in the experimental realisation of the protocol. The only state that is physically accessible is ρ. Below we show how we can estimate the CM of τ by measuring ρ by heterodyne detection. In particular, our goal is to find an upper bound on γ B (τ ) and a lower bound on γ AB (τ ).
VII. SEMI-DEFINITE PROGRAMMING
In the EB representation, Alice prepares the two-mode state Alice keeps the mode A and sends A ′ to Bob. The vectors |ψ x are mutually orthogonal and span an M -dimensional subspace of Alice's mode A. Note that Alice's reduced state is The equivalence with the PM protocol is obtained by noticing that a projective measurement of A ′ in the basis {|ψ x } x=0,...,M−1 prepares the mode A in the coherent state |α x with probability P x . A good choice for the vectors |ψ x 's is presented in Ref. [16].
Our goal is to bound the key rate using the data collected by Alice and Bob, where Bob's measurement is modeled as realistic heterodyne detection with finite range and precision. We follow the seminal ideas of Refs. [17,18] and achieve this by semi-definite programming (SDP). As an example, we apply linear SDP, as done in Ref. [17], to bound the CM of the state τ , but we remark that our theory can also apply to non-linear SDP as in Ref. [18].
Let ρ B (x) be the state received by Bob given that Alice sent |α x . Alice and Bob can experimentally estimate the probability mass distribution which can be used as a constraint in the SDP that we later formulate. We can also consider linear combinations of the parameters P jk|x , which obviously are also experimentally accessible. Here we consider the quantities (where¯denotes complex conjugation) which are the expectation values of the variance and the covariance between Alice's and Bob's variables. Note that v = Tr(Vρ) and c = Tr(Cρ) are the expectation values of the operators Similarly, from Eq. (5), the quantity 1 − P 0 (R) = Tr(Uρ) is the expectation value of the operator Denote asγ B (τ ) the optimal value of the semi-definite program Taking into account normalisation, we obtain the upper bound on γ B (τ ), Similarly, consider the optimal valueγ AB (τ ) of the semi-definite program from which we obtain the lower bound Note that the projector Π appears in the objective functions but not in the constraints. For this reason, we cannot simply replace ρ with τ , and the optimal values of the semi-definite programs remain defined in an infinite dimensional Hilbert space. However, when numerically solving these semi-definite programs, we find solutions of the form Πρ B Π and (I ⊗ Π)ρ AB (I ⊗ Π). This suggests that the presence of the projector operator Π in the objective function suffices to make the problem effectively finite-dimensional (see the Appendix D for further detail). To numerically evaluate the optimal values of these semi-definite programs, we derive the corresponding dual programs, which are more efficient to evaluate, and detail this in Appendix D.
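A schematic formulation of the first of these semi-definite programs in a truncated Fock basis could look as follows. This is a sketch using the cvxpy package, not the authors' implementation: the objective operator G, as well as V and U, must be built from the paper's definitions, and here they are simply passed in as generic Hermitian matrices; the placeholder operators used in the toy example are ours.

```python
import cvxpy as cp
import numpy as np

def upper_bound_gamma_B(G, V, U, v, P0):
    """Maximise a quadrature-variance-like objective Tr(G rho) over Bob's state,
    subject to the experimentally accessible constraints of Section VII."""
    dim = G.shape[0]
    rho = cp.Variable((dim, dim), hermitian=True)
    constraints = [
        rho >> 0,
        cp.trace(rho) == 1,
        cp.real(cp.trace(V @ rho)) <= v,          # Tr(V rho) <= v
        cp.real(cp.trace(U @ rho)) >= 1 - P0,     # Tr(U rho) >= 1 - P0(R)
    ]
    problem = cp.Problem(cp.Maximize(cp.real(cp.trace(G @ rho))), constraints)
    problem.solve(solver=cp.SCS)
    return problem.value

# Toy usage with placeholder operators that are diagonal in the Fock basis.
dim, N = 12, 9
n_op = np.diag(np.arange(dim, dtype=float))
proj = np.diag((np.arange(dim) <= N).astype(float))
G = proj @ (2 * n_op + np.eye(dim)) @ proj        # stand-in for Pi (b^dag b + b b^dag) Pi
V = 2 * n_op + np.eye(dim)                        # placeholder for the operator V
U = proj                                          # placeholder for the operator U
print(upper_bound_gamma_B(G, V, U, v=2.0, P0=1e-3))
```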
VIII. FINITE-DIMENSIONAL SDP
In this section we obtain from (31) and (33) two semidefinite programs that are defined in a finite-dimensional Hilbert space. We do this by replacing the constraints appearing in (31) and (33) with weaker constraints. This represents no loss of generality, as our goal is to obtain an upper bound on γ B (τ ) and a lower bound on γ AB (τ ). We express the new semi-definite programs in terms of the normalised state τ AB , defined in Eq. (18), which has support in the finite-dimensional subspace containing no more than N = ⌊R 2 ⌋ photons.
First consider the semi-definite program in (31). Note that, since V is positive semi-definite, we have Therefore, the condition Tr(Vρ B ) ≤ v implies Tr(VΠρ B Π) ≤ v. Taking into account the fact that the trace of Πρ B Π is larger than 1 − 2P 0 (R) (from Eq. (16)), we obtain the following constraint: Also note that the constraint Tr(Uρ B ) ≥ 1 − P 0 (R) can be rewritten as Tr((I − U)ρ B ) ≤ P 0 (R). As I − U is positive semi-definite, this constraint can be replaced with Tr((I − U)Πρ B Π) ≤ P 0 (R). Applying the same argument as above, we obtain the constraint which in turn implies Putting all this together, (31) can be replaced with the finite-dimensional semi-definite problem: , , Consider now (33). Note that the operator C is bounded, where O ∞ = sup ψ | ψ|O|ψ | ψ|ψ denotes the operator norm. This observation allows us to express the constraint in terms of the state τ AB instead of ρ AB by introducing a small error, where the first inequality follows from the general property that |Tr(OO ′ )| ≤ O ∞ O ′ 1 , for any pair of Hermitian operators O, O ′ .
In conclusion, we replace (33) with the finite-dimensional semi-definite problem (47).

IX. NON-ASYMPTOTIC REGIME

Entropic uncertainty relations are often used to establish the security of QKD in the non-asymptotic regime [27]. In particular, they have been applied successfully in CV QKD by Furrer et al. [6]. Unfortunately, this elegant method does not yield a tight bound on the key rate for CV QKD. Quoting Leverrier [13]: "This [CV QKD] protocol can be analyzed thanks to an entropic uncertainty relation, but [...] this approach does not recover the secret key rate corresponding to Gaussian attacks in the asymptotic limit of large n, even though these attacks are expected to be optimal." In the same paper, Leverrier showed that the Asymptotic Equipartition Property (AEP) [28] is better suited for CV QKD as it converges to the secret key rate corresponding to Gaussian attacks in the asymptotic limit.
As we show below, the theory developed in the previous sections can be extended to the non-asymptotic regime where a finite number n of signals is exchanged between Alice and Bob. To achieve this goal, we need to make two main modifications to our theoretical analysis.
The first modification accounts for the finite-size correction to the entropic functions appearing in the asymptotic rate in Eq. (22). These corrections can be computed using the AEP [28], which leads to the finite-size rate in Eq. (48); the additive term Δ can be bounded as [29] Δ(d, ε_s) ≤ 4(1 + log d) log(2/ε_s²), where ε_s is the entropy smoothing parameter. Furthermore, Eq. (48) also includes a term due to privacy amplification, characterised by the hashing parameter ε_h. The corresponding key is secure up to probability ε = ε_s + ε_h (see Ref. [28] for more details). Invoking the AEP is not sufficient to analyse the non-asymptotic regime. In order to achieve composable security in the non-asymptotic regime, we also need to provide confidence intervals for the channel parameters that are not known exactly but obtained through parameter estimation. Our second modification to the theory takes this into account, and we discuss it further below. Providing confidence intervals for parameter estimation is a difficult problem in CV QKD because the variables measured in ideal homodyne or heterodyne detection are unbounded. This problem was solved by Leverrier [13] by exploiting a continuous symmetry of heterodyne detection for CV QKD protocols with Gaussian modulation. Unfortunately, discrete modulation occurs on a finite range and does not have a continuous symmetry. Hence, Leverrier's approach cannot be applied to any CV protocol with discrete modulation. In our work, since we consider non-ideal heterodyne detection (which is bounded), we are able to compute confidence intervals for all the relevant parameters of the communication channel. Therefore, although the AEP can be applied to previous asymptotic security proofs (e.g. Refs. [16][17][18]20]), our work is the first one to allow for a composable analysis of parameter estimation for CV QKD protocols with discrete modulation.
A. Parameter estimation: confidence intervals
The second modification arises because the parameters v, c, and P 0 (R), which enter the semi-definite programs, need to be estimated from experimental data. In the nonasymptotic regime, these estimates are subject to statistical errors due to finite-size fluctuations. To account for this, we need to compute confidence intervals for these quantities for any finite n. It is sufficient to consider onesided confidence intervals, as the parameters enter the semi-definite programs in constraints expressed through inequalities. Following the approach of Ref. [24], we assume that parameter estimation is performed after error correction. This allows Alice and Bob to use all their raw keys for both parameter estimation and key extraction.
First consider the variance parameter v. Given n signal transmissions, Bob obtains from his measurements a string of quadrature and phase values, q B 1 , q B 2 , . . . , q B n and p B 1 , p B 2 , . . . , p B n . His best estimate for In the scenario of collective attacks, this is the sum of n i.i.d. variables, with each variable taking values in the interval [0, R 2 ]. We can then obtain a confidence interval for v using the additive Chernoff bound. For any δ v > 0, where D(a b) = a ln a b + (1 − a) ln 1−a 1−b is the relative entropy. Note that, for p < 1/2, we have which yields To obtain a confidence interval for the covariance parameter c we apply the Hoeffding bound. Let us denote as q A 1 , q A 2 , . . . , q A n and p A 1 , p A 2 , . . . , p A n the raw data collected by Alice. The best estimate for c iŝ Finally, consider the estimation of P 0 (R). This parameter is estimated by counting the number of times that a measurement output falls outside of the allowed range R(R). Bob can locally estimate this with the help of the auxiliary variables S i , where S i = 0 if the ith signal falls inside the range, and S i = 1 otherwise. Therefore, Bob's best estimate for P 0 (R) iŝ This is the average of independent Bernoulli trials and therefore follows the Binomial distribution. A confidence interval can be obtained from the additive Chernoff bound: Applying the bound in Eq. (52) we obtain We will require that the probabilities ǫ v , ǫ c , ǫ P are much smaller than 1, of the order of 10 −10 .
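The following rough sketch (ours) collects the finite-size ingredients numerically: one-sided confidence deviations for bounded i.i.d. samples and the AEP correction quoted above. The paper uses the (tighter) additive Chernoff bound for v and P_0(R); the simpler Hoeffding-type form below, and the way the AEP penalty is folded into the rate, are assumptions made for illustration only (the exact combination is the paper's Eq. (48)).

```python
import numpy as np

def hoeffding_delta(width, n, eps):
    """One-sided deviation t with P(sample mean exceeds its expectation by t) <= eps,
    for i.i.d. samples confined to an interval of the given width."""
    return width * np.sqrt(np.log(1.0 / eps) / (2.0 * n))

def aep_delta(d, eps_s):
    """Additive AEP correction for a d^2-outcome measurement, as quoted in the text
    (log is taken in base 2, following the paper's convention)."""
    return 4.0 * (1.0 + np.log2(d)) * np.log2(2.0 / eps_s**2)

n, R, d = 1e10, 7.0, 16
eps_PE, eps_s = 1e-10, 1e-10
delta_v = hoeffding_delta(R**2, n, eps_PE / 3)   # samples (q^2 + p^2)/2 lie in [0, R^2]
delta_P = hoeffding_delta(1.0, n, eps_PE / 3)    # Bernoulli samples lie in [0, 1]
penalty = aep_delta(d, eps_s) / np.sqrt(n)       # per-signal entropy penalty (illustrative)
print(delta_v, delta_P, penalty)
```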
In summary, we have obtained that the corresponding bounds on v, c, and P_0(R) hold true with almost unit probability (larger than 1 − ε_PE, where ε_PE = ε_v + ε_c + ε_P follows from an application of the union bound). For simplicity we put ε_v = ε_c = ε_P = ε_PE/3. By inverting Eq. (56), we obtain an explicit bound for δ_P. From Eqs. (54) and (60) we obtain the corresponding conditions for δ_v and δ_P; to estimate these quantities we apply the inequalities (61), (63), and finally, solving for δ_v and δ_P, we obtain the bounds used below. In conclusion, the non-asymptotic secret key rates are obtained using the formula in Eq. (48), where the parameters γ_B(τ) and γ_AB(τ) are obtained by solving the semi-definite programs (40), (47) with v, c, and P_0(R) replaced by their worst-case estimates, and with δ_v, δ_c, δ_P bounded as in Eqs. (64), (69), (70). The key rate obtained in this way is secure up to probability not larger than ε′ = ε_s + ε_h + ε_PE.

FIG. 1: Asymptotic secret key rates versus channel loss for QPSK encoding, for collective attacks in the limit of n → ∞. The channel parameters are |α| = 0.5, u = 0.001, and ξ = 0.97. The solid lines show the theoretical rate expected for ideal heterodyne detection, from Ref. [16]. For non-ideal heterodyne, the key rate is computed for d = 16 and R = 6 (squares) and R = 7 (circles). Top figure: the key rate is obtained by truncating and solving the infinite-dimensional semi-definite programs (31) and (33). Bottom figure: the key rate is obtained by solving the finite-dimensional semi-definite programs (40) and (47).
X. QPSK: SECRET KEY RATES
Our theoretical analysis applies to any DM protocol. As a concrete example, we describe the application of our theory to QPSK encoding, where α_x = α i^x and P_x = 1/4, for x = 0, 1, 2, 3. To align with the symmetry of our model of realistic heterodyne detection, we set α = |α| e^{iπ/4}. We thus have α_x = |α| e^{iπ/4} i^x = (q_x + i p_x)/√2, from which we obtain q_x = √2 Re(α_x) and p_x = √2 Im(α_x). For the sake of presentation, we assume a Gaussian channel from Alice to Bob, characterised by the loss factor η ∈ [0, 1] and the excess noise variance u ≥ 0. Given that a, a† are the canonical annihilation and creation operators on Alice's input mode, and b, b† on Bob's output mode, a Gaussian channel (in the Heisenberg picture) is a map of the form b = √η a + √(1−η) e + w, where e, e† are the canonical operators associated to an auxiliary vacuum mode, and w is a Gaussian random variable with zero mean and variance u. Assuming this form for the channel from Alice to Bob, we can explicitly compute the expected asymptotic values of the constraint parameters v, c, and P_0(R), and then solve the semi-definite programs to estimate the CM elements γ_B(τ), γ_AB(τ). (More details are discussed in Appendix E.) The computed secret key rates (measured in bits per channel use, i.e., per mode) are shown in Figs. 1-2 versus the loss η, expressed in decibels. The other parameters of the protocol are fixed as |α| = 0.5 and u = 0.001. Figure 1(top) is obtained by solving the semi-definite programs (31) and (33), which are defined in an infinite-dimensional Hilbert space. To find a solution, we truncate the Hilbert space. The figure shows that, as expected, by increasing R, and for d large enough, the secret key rate converges towards the value expected for ideal heterodyne detection (which has been recently computed in Ref. [16]). Our theory allows us to rigorously compute the deviation from this ideal rate. Figure 1(bottom) is obtained by solving the semi-definite programs (40) and (47), which are defined in a finite-dimensional Hilbert space. In this case, a solution can be found without arbitrary truncation of the Hilbert space. Compared with Fig. 1(top), we note that the secret key rate is reduced, especially if the value of R is not large enough. This is due to the term proportional to ‖C‖_∞ introduced in the constraints of the semi-definite programs to account for the projections into the finite-dimensional space (therefore, an improved key rate can be obtained with a better bound for ‖C‖_∞). However, already for R = 7 the difference with the solution of the infinite-dimensional problem is relatively small. Figure 2 is obtained by solving the finite-dimensional semi-definite programs and including the finite-size corrections in the constraints, as discussed in Section IX. For the sake of illustration, the calculations have been done with the error parameters ε_h = ε_s = ε_PE = 10^−10. The figure shows that a non-zero secret key rate is obtained when the block size is about n = 10^10 or larger. The dominant finite-size corrections are due to δ_v and δ_c. This means that an improved key rate could be obtained by using tighter confidence intervals for the estimation of these parameters. This, in turn, would allow us to reduce the block size without compromising composable security.
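For readers who want to reproduce the expected constraint parameters without the closed-form expressions of Appendix E, the following Monte-Carlo sketch (ours, not the authors' code) simulates the QPSK ensemble through the Gaussian channel and the binned heterodyne measurement; the definitions of v and c used here mirror, but may not exactly match, the operators V and C of Section VII.

```python
import numpy as np

rng = np.random.default_rng(1)
n, amp, eta, u, R, d = 200_000, 0.5, 0.5, 0.001, 7.0, 16
edges = np.linspace(-R, R, d + 1)
centers = (edges[:-1] + edges[1:]) / 2

# Alice's QPSK symbols and the corresponding quadrature/phase means.
x = rng.integers(0, 4, size=n)
alpha_x = amp * np.exp(1j * np.pi / 4) * 1j ** x
qA, pA = np.sqrt(2) * alpha_x.real, np.sqrt(2) * alpha_x.imag

# Bob's continuous heterodyne outcomes after the loss/excess-noise channel.
qB = rng.normal(np.sqrt(eta) * qA, np.sqrt(1 + u))
pB = rng.normal(np.sqrt(eta) * pA, np.sqrt(1 + u))

inside = (np.abs(qB) < R) & (np.abs(pB) < R)
P0_hat = 1.0 - inside.mean()                        # fraction of out-of-range events
q_hat = centers[np.clip(np.digitize(qB, edges) - 1, 0, d - 1)]
p_hat = centers[np.clip(np.digitize(pB, edges) - 1, 0, d - 1)]
v_hat = np.mean((q_hat[inside] ** 2 + p_hat[inside] ** 2) / 2)
c_hat = np.mean((qA[inside] * q_hat[inside] - pA[inside] * p_hat[inside]) / 2)
print(P0_hat, v_hat, c_hat)
```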
XI. CONCLUSIONS
In CV QKD information is decoded by a coherent measurement of the quantum electromagnetic field, i.e., homodyne or heterodyne. These are mature technologies and represent the strategic advantage of CV QKD over discrete-variable architectures. This applies to both continuous [3,4,13,24] and discrete modulation protocols [5, 16-18, 20, 30, 31]. Ideal homodyne and heterodyne detection, which are measurements of the quadratures of the field, possess a continuous symmetry that plays a central role in our theoretical understanding of CV QKD. However, this symmetry is broken in real homodyne and heterodyne dectection that are implemented in actual experiments [14,15]. While it is expected that, in practice, these measurements are well approximated by their idealised models in some regimes, a quantitative assessment of the error introduced by this approximation, and of its impact on the secret key rate, has so far been elusive. Here we have filled this gap and presented a theory to quantify the security of CV QKD with real, imperfect, heterodyne detection. Within this theory we have established the composable security of DM CV QKD in the non-asymptotic regime. To the best of our knowledge this is the first result obtained in this direction, as previous works only considered asymptotic, non-composable security [16][17][18]20]. Extension to most general attacks, which in principle can be obtained through a de Finetti reduction, remains an open problem.
In this paper, we have extended the approach of Ref. [17], in which one first estimates the covariance matrix of the quadratures, and then obtain a bound on the key rate using the property of extremality of Gaussian states. However, our theory can also be applied to the method of Refs. [18,20], in which one uses the measured data to bound the key rate directly through nonlinear semi-definite programming. We have focused on a particular kind of non-ideality in detection, but our approach can be applied to other non-idealities in both detection and in state preparation. Examples of these nonidealities include non-linearities in the analog-to-digital converter [32] and noise in the state preparation [16]. In principle, accounting for experimental imperfections in the security analysis mitigates the threat from sidechannel attacks. Our approach may also be extended to measurement-device-independent QKD [29,33,34], which protects against unknown side-channel attacks on the detectors. The results presented here are not only conceptually important, but will also enable secure, practical, and reliable DM CV QKD. In fact, to obtain reliable bounds on the secret key rates, the practitioner of CV QKD needs to carefully assess, in a composable way, finite-size effects as well as the impact of non-idealities in the measurement devices, including but not limited to, the effects of finite range and precision considered in this work. Consider a two-mode state ρ AB shared between Alice and Bob. We denote a, a † , and b, b † the annihilation and creation operators on Alice's and Bob's mode, respectively. Their local quadrature and phase operators are . The symmetrically ordered CM γ ′ (ρ) of the two-mode state ρ is defined as where Σ(x, y) := (xy + yx)/2. The CM can be written in a block form as where A, B, C are 2×2 matrices. We denote as ν + and ν − the symplectic eigenvalues of γ ′ (ρ). When Bob measures his mode by ideal heterodyne detection, the conditional state of Alice has CM We denote ν 0 as the symplectic eigenvalue of γ(ρ A|B ). The property of extremality of Gaussian states yields the following bound on the Holevo information: where and for any x > 0 the function g is defined as and g(x) := 0 if x = 0. It is possible to show [24] that the function F χ increases if we replace γ ′ (ρ) with the matrix γ(ρ) where ∆ := (q A q B − p A p B )/2. From this we obtain the bound Note that Obviously, F χ (γ(ρ)) is a function of γ A (ρ), γ B (ρ), γ AB (ρ). We therefore define Appendix B: QPSK: EB representation In the PM representation, Alice prepares the state |α x with probability P x = 1/4, for α x = αe ixπ/2 and x = 0, 1, 2, 3, where we put α = |α|e iπ/4 . The average state prepared by Alice is We can expand this state in the number basis. Its (n, n ′ ) entry is α nᾱn ′ √ n!n ′ ! 1 + e (n−n ′ )π/2 1 + e (n−n ′ )π . (B4) That is, ρ nn ′ A ′ = 0 unless n − n ′ is a multiple of 4, in which case, As this state is invariant under rotation of π/2 in phase space, the eigenvectors have the form, for y = 0, 1, 2, 3, |φ y = n≥0 c y,n |y + 4n . (B6) we obtain λ y c y,n = e −|α| 2 /2 α y+4n (y + 4n)! (B8) By imposing normalisation, we find |φ y = e −|α| 2 /2 λ y n≥0 α y+4n (y + 4n)! |y + 4n , where λ y = e −|α| 2 n≥0 |α| 2(y+4n) (y + 4n)! . Explicitly, We define the purification of the state ρ A ′ through its Schmidt decomposition, where |φ y = e −|α| 2 /2 λ y n≥0ᾱ y+4n (y + 4n)! |y + 4n .
It is easy to check that which we can invert to obtain We can then write where we have defined We now express the operators that appear in our semidefinite programs in the basis {|ψ x ⊗ |n } x=0,...,3;n=0,...,∞ , where |n 's are the number states of Bob's side, satisfying b † b|n = n|n .
The operator ρ B is a density matrix of one bosonic mode.
(C3) Therefore, The operator V is Note that by symmetry, V nn ′ = 0 unless n−n ′ is multiple of 4. Also by symmetry, V is a real matrix in the Fock basis. Similarly, we have with The covariance operator in the objective function reads To compute this, first note that from which we obtain Finally, the operator C has components The operator can thus be written as Note that, by symmetry, [B] nn ′ = 0 for n − n ′ even. Also by symmetry, the entries of C are all real.
In the main body of the paper we have formulated the following optimisation problems maximize ρ ≥ 0 and minimize ρ ≥ 0 where A, X = Tr(A † X) denotes the Hilbert Schmidt inner product. To derive the corresponding dual programs which will be more numerically efficient to evaluate, we revisit duality theory for SDP with mixed constraints. Given any semidefinite program of the form where C, A i and B j are Hermitian matrices, the Lagrangian is given by where y i ≥ 0, z i ∈ R. By linearity of inner products, we can rewrite the Lagrangian as The Lagrange dual is then given by The Lagrange dual of (D1) is thus given by minimize y 1 , y 2 ≥ 0, z ∈ R y 1 v − y 2 (1 − P 0 (R)) + z subject to − 1 2 Π(b † b + bb † )Π + y 1 V − y 2 U + zI ≥ 0.
(D7) Strong duality in this case holds because the inequality constraints can be strictly feasible, and the Slater constraint qualification holds.
(D8) where κ(y, z) = −y 1 C + y 2 V − y 3 U + y 4 I + h,k z h,k Z h,k , (D9) φ(z) = h≥k z h,k Re(σ h,k ) + h<k z h,k Im(σ h,k ) , (D10) and Z h,k = E h,k ⊗ I B when h ≥ k and Z h,k = F k,h ⊗ I B when h < k, with To solve numerically these optimisation problems we need to impose a cutoff to Bob's Hilbert space, and work within a finite dimensional space of dimensions dim, containing no more than (dim − 1) photons on Bob's side. The value of dim can be arbitrarily large, as long as it is larger than N + 1, where N = ⌊2R 2 ⌋ is determined by the rank of the projector Π. However, our numerical results suggest that it is sufficient to put dim = N + 1. As an example, Fig. 3 shows the optimal values for QPSK encoding, and for the optimisation problems (D7) and (D8), as a function of dim.
As a concrete example, we apply our theory to QPSK encoding, where α x = αi x and P x = 1/4, for x = 0, 1, 2, 3. To align with the symmetry of our model of realistic heterodyne detection, we set α = |α|e iπ/4 . We simulate a Gaussian channel from Alice to Bob, characterised by the loss factor η ∈ [0, 1] and the excess noise variance u ≥ 0.
First, we compute the expected value for the mutual information, where H(Y ) is the entropy of Bob's measurement outcome, and H(Y |X) is the conditional entropy for given input state prepared by Alice. If Alice prepares the coherent state |α x , with α x = (q x + ip x )/ √ 2, then the state ρ B (x) received by Bob is described by the Wigner function W x (q, p), where W x (q, p) = 1 π(2u + 1) e − (q− √ η qx ) 2 +(p− √ η px ) 2 2(u+1/2) .
(E2)
From this, we obtain the probability density of measuring β = (q + ip)/ √ 2 by ideal heterodyne detection, and, in turn, the probability of measuring β ∈ I jk , P jk|x = 1 π β∈I jk d 2 β β|ρ B (x)|β = P j|x P k|x , (E4) where P j|x = 1 2 erf (2 + d − 2j)R + d √ η q x d 2(u + 1) For QPSK encoding, the conditional mutual information then reads (log in base 2) The probability distribution of Y is obtained by averaging over X, P jk = 1 4 3 x=0 P jk|x , and the entropy of Y is Similarly, we compute the expected values for the estimated parameters v and c. We obtain
Switching Periodic Membranes via Pattern Transformation and Shape Memory Effect
We exploited mechanical instability in shape memory polymer (SMP) membranes consisting of a hexagonal array of micron-sized circular holes and demonstrated dramatic color switching as a result of pattern transformation. When hot-pressed, the membrane underwent pattern transformation, first to an array of elliptical slits (with widths of tens of nanometers) and then to a featureless surface with increasing applied strain, switching the membrane with diffraction color to a transparent film. The deformed pattern and the resulting color change can be fixed at room temperature, and both could be recovered upon reheating. Using continuum mechanical analyses, we modeled the pattern transformation and recovery processes, including the deformation, the cooling step, and the complete recovery of the microstructure, which corroborated well with experimental observations. We find that the elastic energy is roughly two orders of magnitude larger than the surface energy in our system, leading to autonomous recovery of the structural color upon reheating. Furthermore, we demonstrated two potential applications of the color switching in the SMP periodic membranes by 1) temporarily erasing a pre-fabricated "Penn" logo in the film via hot-pressing, and 2) temporarily displaying a "Penn" logo by hot-pressing the film against a stamp. In both scenarios, the original color displays can be recovered.
Introduction
Shape memory polymers (SMPs) are polymeric smart materials of interest for a variety of applications, including deployable space structures, artificial muscles, biomedical devices, sensors, smart dry adhesives, and fasteners. 1, 2 They form a "permanent" shape by chemical or physical crosslinking (e.g. crystallization or chain entanglement). Above a thermal phase transition temperature, either a glass transition temperature (Tg) or a melting temperature (Tm), SMPs can be deformed to different temporary shapes, which can be fixed by cooling the sample. Upon exposure to an external stimulus, such as heat, light, and solvent, the temporary shapes can return to their original (or the permanent) shape. There has been much effort to develop new chemistry for improved shape fixity and shape recovery efficiency, responsiveness to new environmental triggers, achieving multi-shape memory effect, and applications to biomedical devices. 1,[3][4][5][6][7][8][9] Nevertheless, most studies focus on the shape memory effect in bulk SMPs. A few groups have created micropatterns in SMPs, such as microprotrusions 10 and microwrinkles 8,11 by taking advantage of the large modulus change near the phase transition temperature. None of them, however, has reported the recovery to the original shape from the micropatterns. During the shape recovery process, the entropic energy stored in the deformed state is released. It remains to be seen whether the deformed shape can be completely recovered as surface energy becomes increasingly dominant when the size shrinks to the micro- and nanoscale.
Recently, we and several other groups have demonstrated pattern transformation in elastic membranes with periodic hole arrays by mechanical compression, 12, 13 solvent swelling, 14,15 polymerization, 16 and capillary force. 17 For example, when swollen by an organic solvent, a poly(dimethylsiloxane) (PDMS) membrane consisting of micron-sized circular holes in a square array buckles to a diamond plate pattern of elliptic slits with the neighboring units perpendicular to each other. 14 As a result, the physical properties (e.g. photonic 18,19 and phononic 15 band gaps and mechanical behaviors 20, 21) could be significantly altered due to the change of lattice symmetry, pore size, shape and volume filling fraction. One question that arises is whether it is possible to switch a colorful film to a transparent one via pattern transformation. The latter state will allow for seeing through or mingling with the surroundings. Therefore, the dramatic visual contrast between colored and transparent states is of interest for applications such as display, privacy window, and camouflage. In nature, invisibility is an important strategy for many sea creatures to hide from predators in water. For example, bobtail squids are invisible in sand during the day with chromatophores in the skin concentrated into small, barely visible dots; when the muscle fibers stretch out the skin, thereby enlarging the chromatophores, the color becomes visible for signaling or escape from predators. 22 Here we report switching an SMP membrane with diffraction color to a transparent film via harnessing the mechanical instability and shape memory effect. When hot-pressed, the SMP membrane consisting of a hexagonal array of circular holes (1.2 µm in diameter, 2.5 µm in pitch, and 5.0 µm in depth) underwent pattern transformation, first to an array of elliptical slits and then to a featureless surface with increasing applied strain, leading to a dramatic change of the hole size and shape, and of the diffraction color, which could be fixed at room temperature and later recovered to the original pattern (and color) upon reheating. Using continuum mechanical analyses, we modeled, for the first time, an out-of-plane compression of an SMP membrane. We observed the hot-press induced deformation and pattern transformation of the membrane at different strains, the structure fixation at the cooling step, and the complete recovery of the microstructure, in agreement with experiments. We also find that the elastic energy stored in the membrane is roughly two orders of magnitude larger than the surface energy, leading to autonomous recovery of the structural color upon reheating. Further, we demonstrated two possible applications of the color and transparency change in our SMP periodic membranes, including 1) temporarily erasing the pre-fabricated "Penn" logo in the film, and 2) a temporary display of a "Penn" logo by hot-pressing the film against a stamp.
The ability to simultaneously change the lattice symmetry, pore size and shape, and volume filling fraction through pattern transformation offers an attractive approach to drastically alter the material properties. Most deformation methods reported so far involve the use of solvent, either through swelling or drying processes. In comparison, application of mechanical force will allow us to independently control the amount, direction (uniaxial or biaxial, both in-plane and out-of-plane), and timing of strain applied to the periodic structures. In the case of in-plane compression, however, additional care has to be taken to eliminate the out-of-plane buckling, e.g. by sandwiching the film between two rigid sheets. 12 In most applications, a direct out-of-plane compression is easy to implement and desirable, and was thus performed in our experiments.
SMP periodic membranes
The SMP periodic membrane (1.2 µm in diameter, 2.5 µm in pitch, and 5 µm in depth) was prepared by replica-molding from a 2D hexagonal pillar array, which was fabricated by 3-beam holographic lithography 23, 24 (see Fig. 1a-b and details in the Experimental section). The negative-tone photoresist, epoxycyclohexyl POSS® cage mixture (epoxy POSS), was chosen here to fabricate the pillar array since it could be readily removed by hydrofluoric acid (HF) solution at room temperature 23 after templating the SMP membrane. When the latter was heated to 10-30 °C above its Tg (70 °C), it became softened and was compressed vertically by a hot-press to a temporary shape (Fig. 1c). The load was carefully controlled to deform the membrane at different strain levels, here referring to engineering strain, ε = change of film thickness/original thickness. The temporary shape was fixed when cooled down to room temperature while keeping the loading force constant. Upon reheating to 90 °C, the hexagonal shape was recovered. During the pattern deformation and recovery, we observed reversible switching of color and transparency.
Although the bulk SMP film is transparent, the SMP membrane is colorful due to the diffraction grating effect (Fig. 2a, f, k). Because of the Gaussian distribution of the laser beam in holographic lithography and possible small misalignment of optics, there was a gradient in laser intensity from the center to the edge, resulting in a pore size distribution and color variation across the sample. This can be improved using a beam shaper or patterning the film by conventional photolithography through a photomask. When the applied strain, ε, was ~13±2%, the circular holes of p6mm symmetry were deformed to elliptical slits (width of major axis, 1.25 µm; minor axis, 500 nm) with p2gg symmetry (Fig. 2g, l), in agreement with the observation from the swelling-induced instability in SU-8 membranes with a hexagonal array of pores. 15 When the SMP membrane was compressed in the vertical direction, it expanded in-plane due to its positive Poisson's ratio, hence generating an equivalent in-plane compressive stress on the circular holes. The initial diffraction color diminished significantly after compression, although it was not completely lost at this strain level (Fig. 2b). This could be attributed to the smaller pore size and porosity. The width of the minor axes of the ellipses further decreased, from hundreds of nanometers to a few nanometers, as the strain was increased. When ε was increased to ~20±2%, the holes were almost closed into lines (see Fig. 2c, h and m) and the SMP membrane became quite transparent, much like the bulk film.
At ε ~ 30±2%, the holes were closed up and the surface became nearly featureless (Fig. 2d, i and n). No further change of transparency was observed. When any of the above deformed SMP membranes was reheated to 90 °C, the original periodic structure was restored nearly to completion (97.6% of the original hole size and 100% of the original pitch), as evidenced by the SEM images and the regeneration of strong diffraction color (Fig. 2e, j and o and Movie S1 †). Surprisingly, even the one with completely closed pores was restored, suggesting that the adhesive energy between the pore surfaces was much smaller than the elastic recovery energy. The different colors displayed in Fig. 2a (the original film) and 2e (the recovered one) could be caused by a small misalignment of the incident light during photo shooting. When ε was greater than 50%, the 2D grating with air holes and its color could no longer be completely recovered due to the permanent deformation of the polymer network. The reversible switching between the colorful display and transparency was repeated successfully for more than 10 cycles with ε < 50%, and the recovery of diffraction color occurred within a few seconds (see Movie S1). According to SEM images, the hole diameter and pitch of the recovered film decreased slightly to 94.4% and 98.4% of the original values after three cycles, respectively, and to 89.7% and 98.0% of the original values after ten cycles, respectively. The diffraction color displayed at any of the temporary states could be reprogrammed on demand by precise control of the applied strain level and the temperature/load of deformation. Hence, it is possible to build a color spectrum by carefully tuning the mechanical deformation. Further, we may achieve full-color display by combining the instability and design of the original microstructures with variable structural parameters. During the pattern transformation and recovery process, the air holes were squeezed out and restored, respectively, which would result in a dramatic transparency change. As a proof-of-concept, we placed two SMP membranes on paper printed with "Penn" logos: one was hot-pressed at ε ~ 30±2% (the left one), and the other was the original, non-deformed one (the right one, see illustration in Fig. 3a). Due to diffraction from the surface of the original membrane with pores in a hexagonal array, the "Penn" letters beneath it could not be clearly viewed, in sharp contrast to those beneath the deformed membrane (see Fig. 3b). The transparency change was further investigated by UV-Vis spectroscopy at different thermal and mechanical treatments (Fig. 3c).
Finite Element Analysis
Since the deformation results presented here are the first demonstration of instabilities induced by loading in the direction perpendicular to the voids, we built a 3D mechanical model to quantitatively investigate the buckling and post-buckling behaviors. The structure is modeled as an infinite array of infinitely long voids in the x1-x2 plane. 3D analyses are conducted, and the constraining effect given by the substrate is accounted for by setting the lateral expansion equal to zero. A periodic representative volume element is considered, and a series of constraint equations are applied to the boundaries of the model, providing general periodic boundary conditions.

The stress-strain behavior of the SMP is captured using a two-mechanism constitutive model. 15 The stress response is decomposed into two contributions: the resistance due to stretching and orientation of the molecular network (σN), mechanism N, and the resistance due to intermolecular interactions (σV), mechanism V. At the applied temperature T, the total stress acting on the material is given by Eq. (1), with A1 and A2 as material parameters defining the position and width of the zone where mechanism V becomes significant.

The shape memory behavior is taken into account by having σV depend on (T − Tg). When T > Tg, the material is characterized by a rubbery behavior; as T decreases toward Tg, the material becomes increasingly glassy and locked into the deformation. The constitutive model is implemented into a user-defined subroutine (VUMAT) of the commercial finite element code ABAQUS, and numerical simulations of the whole thermo-mechanical loading history of the structures are performed in four steps (see Fig. 4 and Movie S2) using the model parameters summarized in Table 1; an illustrative sketch of the two-mechanism idea is given after the four steps below.

Table 1 (model parameters): µ is the elastic shear modulus; N is the parameter relating to the limiting chain extensibility; K is the bulk modulus; E is the Young's modulus; ν is Poisson's ratio; γ̇ is the pre-exponential shear strain rate factor; ∆G is the activation energy; s0 is the initial athermal deformation resistance; sss is the athermal deformation resistance value at the steady state; h is the softening slope (the slope of the yield drop with respect to plastic strain).

Step 1) Hot-pressing. T increases above Tg, so σV vanishes and the material exhibits rubber-like behavior. The stability of the structure is investigated by conducting a Bloch wave analysis. 25 At an applied strain, ε = 11%, a critical instability is detected, leading to the same pattern previously observed under constrained swelling, 15 which is characterized by sheared voids where the shear direction alternates back and forth from row to row (see Fig. 4a-c). Further compression leads to complete closure of the voids at ε = 22% (Fig. 4d), in agreement with experimental observation (Fig. 2h). In the simulations, further compression was avoided to prevent too much mesh distortion.
Step 2) Cooling down. T decreases to 20 °C, and σV increases, making the material much stiffer and preserving the pattern (Fig. 4e);

Step 3) Unloading. The press is removed, but the holes remain completely closed (Fig. 4f), and the elastic energy is stored in the material;

Step 4) Reheating. T increases above Tg so that the structure again exhibits a rubbery behavior (σV vanishes again) and the initial shape and pattern are elastically recovered (see Fig. 4g).
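The following minimal Python sketch illustrates the two-mechanism idea behind the constitutive model used in the four steps above. It is not the authors' Eq. (1) or their VUMAT: the neo-Hookean form for σN, the linear form for σV, and the sigmoidal temperature weight (with position A1 and width A2) are assumptions made purely to illustrate how the glassy contribution switches on below Tg.

```python
import numpy as np

def sigma_N(stretch, mu=1.5):                  # MPa; rubbery network response (assumed form)
    return mu * (stretch - 1.0 / stretch**2)

def sigma_V(stretch, E=2000.0):                # MPa; glassy intermolecular response (assumed form)
    return E * (stretch - 1.0)

def glassy_weight(T, Tg=70.0, A1=0.0, A2=5.0):
    """Smoothly 0 well above Tg and 1 well below it; A1 sets the position of the
    transition zone and A2 its width (illustrative sigmoid)."""
    return 1.0 / (1.0 + np.exp((T - Tg - A1) / A2))

def total_stress(stretch, T):
    """Total stress = network contribution + temperature-gated glassy contribution."""
    return sigma_N(stretch) + glassy_weight(T) * sigma_V(stretch)

# Hot-pressing at 90 C (rubbery) vs. the fixed state at 20 C (glassy).
for T in (90.0, 20.0):
    print(T, total_stress(stretch=0.8, T=T))
```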
As seen in Fig. 4, the numerical analysis nicely captured the pattern transformation and recovery observed in the sample. Additionally, we find that for the considered structures with voids of 1 µm in diameter the surface energy (22.8 mJ/m², measured by goniometer) is roughly two orders of magnitude smaller than the elastic recovery energy, making the recovery autonomous upon reheating. Since the strain energy is proportional to L³ (with L denoting the characteristic material dimension), while the surface energy is proportional to L², a decrease of the void diameter will increase the relative contribution of the surface energy, as sketched below. An approximate analysis suggests that the surface energy will play an important role for voids 10 times smaller than those considered in this study.
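To make the scaling argument concrete, one can anchor the elastic-to-surface energy ratio at the reported value of roughly 100 for 1 µm voids and let it scale linearly with the void size. The short sketch below is our own back-of-the-envelope estimate, not a calculation from the finite element model.

```python
import numpy as np

# Surface energy measured by goniometer: 22.8 mJ/m^2; at L = 1 um the stored
# elastic energy is reported to be ~100x larger.  Strain energy ~ L^3 and
# surface energy ~ L^2, so their ratio falls linearly with the void size L.
L_ref, ratio_ref = 1e-6, 100.0

for L in np.array([1.0, 0.5, 0.1, 0.05]) * 1e-6:
    ratio = ratio_ref * (L / L_ref)   # elastic / surface energy, up to a prefactor
    print(f"L = {L * 1e6:4.2f} um  ->  elastic/surface ~ {ratio:6.1f}")
# The ratio drops by an order of magnitude for voids ~10x smaller than 1 um,
# consistent with the estimate that surface energy then becomes important.
```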
Color displays with SMP periodic membranes
To demonstrate the flexibility of the color and transparency change in our SMP periodic membranes and their potential applications, we exploited two possible renderings of the SMP membranes. First, a "Penn" logo was pre-fabricated within the 2D membrane (Fig. 5a, b). The template for replica molding was fabricated by exposing the negative-tone photoresist, epoxy POSS, to UV light through a photomask with a "Penn" logo, followed by 3-beam holographic lithography to create a hexagonal array of pillars in the surrounding area (Fig. 5c). Since the region with "Penn" was mostly crosslinked in the first step, the second exposure did not produce any pillars in this region but only shallow voids (Fig. 5d). After replica-molding the template to the SMP membrane, there was no or little color diffracted from this region, in sharp contrast to the bright color from the surrounding area (Fig. 5e, g). When the SMP membrane was hot-pressed above Tg, the "Penn" logo disappeared as the film became transparent (Fig. 5f). When reheated, the "Penn" logo reappeared together with its colorful background, confirming the success of shape recovery. Here, the logo was pre-fabricated in the permanent shape, which could be temporarily erased upon deformation.

In a second approach, the "Penn" logo was introduced as a temporary shape by a rubber stamp indented into the SMP membrane during heating at 90 °C (Fig. 6a). The stamp was released after the film was cooled down to room temperature. As seen in Fig. 6b and 6d, the indented region was transparent, especially at the sharp corners of the letters, presumably receiving higher stress, while the background remained colorful. When reheated, the "Penn" logo was erased (Fig. 6c, e). In this way, different letters or patterns could be "finger-printed" and reprogrammed into the same SMP membrane repeatedly, which could be extremely useful as a user-friendly touch screen display or for fingerprinting by tailoring the SMP Tg near the body temperature. It should be noted that all the displays presented here require no extra energy to maintain the displayed state.

Fig. 6 Schematic illustrations of (a) indentation of a stamp with a letter "P" into a heated SMP membrane, (b) the display of the letter "P" in the deformed region of the SMP membrane, and (c) structural recovery upon reheating; optical images of the indented "Penn" in the colored SMP membrane (d) and its erasure after reheating (e).

Conclusions

We prepared 2D periodic membranes in SMPs and studied the mechanical instability and shape memory effect. When
2) The temporarily deformed structure and the resulting color can be fixed without the need for continuous input of external trigger; they can also be programmed continuously by varying the mechanical strain level. 3) The 90 continuum mechanical analyses have faithfully captured the buckling and post-buckling behaviors of the SMP membrane observed experimentally. Importantly, the model that the surface energy plays a negligible role comparing with elastic energy when the void dimension is comparable to the 95 wavelength of light, leading to autonomous and fast shape recovery of the microstructure.
We emphasize that while the diffraction demonstrated in temperature responsive SMPs 90 a broad range of stimuli responsive material systems literature, allowing for fine-tuning the switching speed, degree of responsiveness temporary states, and the type of stimulus T g of the epoxy SMP used in our system 95 (e.g. to 30 o C) by increasing the concentration of flexible crosslinker, decylamine. 26 SMPs that can store up to three different shapes in temporary states reported. 7,27 We expect that the study of tuning structures via combined pattern transformation and shape 100 memory effect will shed new light in harnessing the mechanical response of soft materials and advancing range of technologies, including color displays, camouflage, and energy efficient building components (e.g. smart windows and responsive façade). pressed, the membrane underwent pattern transformation hexagonal lattice of circular holes (1 µm pattern of elliptical slits (width varied from a few hundreds of nm to a few nm), and eventually the holes were completely closed. The original film is colorful periodic micropattern and can be reversibly switched to a transparent state by mechanical deformation above the material's T g . Upon reheating, the deformed patterns were able to recover, hence, color. The combination of pattern transformation and shape memory effect in a 2D periodic membrane offers several distinctive characteristics. 1) It is the first demonstration of instabilities induced by loading in the direction perpendicular to the voids in microstructured SMPs, which is more desirable in practical applications than approaches such as solvent swelling and in-plane compression.
2) The temporarily deformed structure and the resulting color can be fixed without the need for continuous ger; they can also be programmed continuously by varying the mechanical strain level. 3) The continuum mechanical analyses have faithfully captured the buckling behaviors of the SMP membrane observed experimentally. Importantly, the model suggests that the surface energy plays a negligible role comparing with elastic energy when the void dimension is comparable to the wavelength of light, leading to autonomous and fast shape raction color change is demonstrated in temperature responsive SMPs here, there are material systems in the he transition temperature, switching speed, degree of responsiveness, number of type of stimulus. For example, the of the epoxy SMP used in our system could be lowered increasing the concentration of more SMPs that can store up to temporary states have been We expect that the study of tuning periodic via combined pattern transformation and shape will shed new light in harnessing the mechanical response of soft materials and advancing a wide color displays, sensors, camouflage, and energy efficient building components (e.g. smart windows and responsive façade).
Unless specifically noted, all chemicals were obtained from commercial suppliers and used as received.

Fabrication of the hexagonal pillar array (Fig. 1a)

The SMP periodic membrane was replica molded from a 2D hexagonal pillar array (1.2 µm in diameter, 2.5 µm in pitch, and 5 µm in height), which was fabricated by 3-beam holographic lithography (HL) 23,24 from epoxycyclohexyl POSS® cage mixture (EP0408, Hybrid Plastics®) (epoxy POSS) mixed with 0.9 wt% photoinitiator, Irgacure 261 (Ciba Specialty Chemicals). In a typical HL experiment, the epoxy POSS photoresist was spin-coated on a glass substrate, prebaked at 50 °C for 40 min, followed by 95 °C for 2 min. The film was then exposed to three interfering laser beams (λ = 532 nm, power of beam source ~1.0 W), followed by a post-exposure bake (PEB) at 50 °C for 30 s (Fig. 1a). The pillar structures were obtained after development in propylene glycol methyl ether acetate (PGMEA), rinsing in isopropanol (IPA), followed by drying in a critical point dryer (SAMDRI®-PVT-3D, tousimis) from ethanol to prevent pillar collapse. The sample area was defined by the laser beam size, typically ~1 cm in diameter. By varying the dosage of laser exposure and the PEB time and temperature, we obtained hole sizes ranging from hundreds of nanometers to a few microns.
Replica molding of the SMP periodic membrane (Fig. 1b)

The SMP precursor, a mixture with molar ratio
Hot pressing of SMP membranes
The SMP membrane was compressed in the vertical direction using a manual bench-top heated hydraulic press (CARVER 4122, Carver, Inc.). The sample (> 0.4 mm thick) was placed inside a Teflon sample holder (0.4 mm thick), which was then pressed between two Teflon sheets with heated platens. The platens were pre-heated to 100 °C for 10 min to reach equilibrium. Then a pressure of 1000 psi was applied to the sample and kept for 15 min before cooling down to room temperature, followed by release of the pressure to lock in the temporary shape. The strain was calculated by comparing the final film thickness with the original one.
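As a sanity check on that bookkeeping, here is a minimal Python sketch (our own illustration; it assumes the engineering definition of compressive strain, and the numbers are made up):

```python
def compressive_strain(t_initial, t_final):
    """Engineering compressive strain, (t0 - t) / t0, from film thicknesses."""
    return (t_initial - t_final) / t_initial

# e.g., a 0.50 mm membrane hot-pressed down to 0.40 mm -> 20% strain
print(f"{compressive_strain(0.50, 0.40):.0%}")
```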
Fabrication of SMP membrane with embedded "Penn" letter
The membrane was fabricated by replica molding in a way similar to that from the hexagonal POSS pillar array. One added step was UV exposure (λ = 365 nm, 400 mJ/cm², 97435 Oriel Flood Exposure Source, Newport) through a "Penn" logo photomask, conducted after prebaking and before the three-beam laser exposure. After PEB, the "Penn" region was highly crosslinked and appeared nearly flat or with shallow features depending on the dosage, while the surrounding areas formed pillar structures.
Calculation/modelling
Numerical simulations of the stability of the structure were conducted using the nonlinear finite element code ABAQUS/Standard (version 6.8-2), while the thermo-mechanical loading history of the structures was investigated using the nonlinear finite element code ABAQUS/Explicit (version 6.8-2). Each mesh was constructed of 8-node, linear, 3D elements (ABAQUS element type C3D8R). In the hexagonal array the voids have a radius R = 1 µm, and a unit cell spanned by the lattice vectors A1 = [2 0 0] µm, A2 = [1 1.732 0] µm, and A3 = [0 0 0.1] µm is used. An RVE consisting of 1×2×1 unit cells is considered in the simulations of the thermo-mechanical loading cycle, and an imperfection in the form of the most critical eigenmode is introduced into the mesh to capture the instability upon hot-pressing, the subsequent freezing-in of the transformed pattern, and then the shape recovery behavior. The stress-strain behavior of the SMP is captured using the material parameters reported in Table 1.
|
2016-10-26T03:31:20.546Z
|
2012-09-26T00:00:00.000
|
{
"year": 2012,
"sha1": "102931f84f9bb3602beab284b1630d0c15e00c8f",
"oa_license": "CCBY",
"oa_url": "https://dash.harvard.edu/bitstream/1/11130518/1/Li_SwitchingPeriodic.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8af295e348dd5d42bfc6f2be74d1fde6d784fb9d",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
}
|
100741741
|
pes2o/s2orc
|
v3-fos-license
|
The Study of a New Ceramic PZT Material Pb1-0.04Sm0.02Nd0.02[(Zr0.55,Ti0.45)1-2x, x(Y2/3,Mo1/3), x(Y2/3,Ni1/3)]O3 with SEM and X-Ray Diffraction
PZT is modified by the introduction of doping agents into the A-sites and/or B-sites of the perovskite structure [1]. The principal role of dopants is generally to improve the properties of these materials for their adaptation to specific applications, which is the purpose of this study. Our choice fell on mixed acceptor and donor oxides. Five compositions with varying dopant percentages were prepared by the conventional method of thermal synthesis of mixed oxides: Pb1-0.04Sm0.02Nd0.02[(Zr0.55,Ti0.45)1-2x, x(Y2/3,Mo1/3), x(Y2/3,Ni1/3)]O3 with x = 0.01, 0.03, 0.05, 0.07 and 0.1. All the samples were sintered at temperatures ranging from 1100 °C to 1180 °C after being compacted into circular discs. A detailed structural study was carried out on the sintered specimens. The results of X-ray diffraction showed that all the ceramic specimens have a perovskite phase. The phase structure of the Pb1-0.04Sm0.02Nd0.02[(Zr0.55,Ti0.45)1-2x, x(Y2/3,Mo1/3), x(Y2/3,Ni1/3)]O3 ceramics transformed from tetragonal to rhombohedral with an increase in the Zr/Ti ratio in the system. Scanning electron microscopy (SEM) showed an increase of the mean grain size when the sintering temperature was increased. The lattice parameter measurements showed that the tetragonal and rhombohedral unit cells of the phases depend on the sintering temperature.
Introduction
Lead zirconate titanate ceramics have been studied widely during the last decades [2,3]. PZT powders were prepared by the reaction process using oxides as starting materials. Barium titanate has long been known for its large dielectric constant [4]. Several suitable additives are used to improve and/or modify its properties: Sr to decrease the critical temperature TC (<120 °C), Pb to increase it [4], and Ce and Nb to increase the dielectric constant and the spontaneous polarization [5]. In ceramic manufacturing technology, piezoelectric PZT ceramic compositions are most likely to be near the morphotropic phase boundary [1]. The electromechanical response of these ceramics is known to be most pronounced at the morphotropic phase boundary (MPB) composition, which separates the tetragonal (Ti-rich) and rhombohedral (Zr-rich) phase fields. Despite extensive work on the location of the MPB, considerable controversy exists about the nature and exact composition range of the MPB [6,7].
In this work we present the preparation and the different stages of the formation reaction of the solid solution, and report the influence of sintering temperature on density and porosity. We then detail the different analysis techniques applied to this compound, beginning with XRD and SEM. X-ray diffraction is presented to demonstrate the co-existence of the tetragonal and rhombohedral phases. Finally, we present some electrical properties, namely the dielectric constant and the electrical loss angle, for selected compositions of the prepared PZT. These studies help us to accumulate as much information as possible on these materials.
Experimental
The starting materials are carefully homogenized for three hours in acetone in a beaker using a magnetic stirrer, then dried in an oven at 120 °C for two hours; the powder is then ground in a glass mortar to a particle size as fine as possible. After that, our mixtures are compacted into pellets in a mold using a hand press. The pellets obtained are dried again at a temperature of about 50 °C for 30 minutes. The samples are calcined in open air at temperatures around 800 °C in a programmable furnace ("Nabertherm L 60" brand), which can reach 1200 °C. The heating rate used is 2 °C/min and the holding time is two hours; cooling is slow. All compounds obtained were characterized by X-ray diffraction using a D500 diffractometer. Five samples of the solid solution were prepared from a mixture of oxides whose purities are shown in Table 1.
Sintering
The basic technique for the preparation of ceramic parts is sintering, i.e., the transformation, through mechanisms of atomic diffusion, of a powdery product (a non-cohesive granular medium composed of loosely agglomerated particles) into a cohesive body [8]. In most cases, a binder-burnout operation is performed first, whose aim is the removal of organic substances by calcination at low temperature. This step is usually performed in a furnace different from that used for sintering, the main reason being that it is difficult for a single furnace to jointly meet the constraints of thermal cycling and atmosphere, which differ for these two operations. Bridges are quickly established between the particles, which become welded together; these bridges will give rise to grain boundaries, at which point the porosity remains high. As sintering proceeds, the pore radius decreases and the compactness increases; the porosity takes the form of substantially spherical isolated pores, there is a coarsening of crystals, and the polycrystalline structure begins to emerge. Controlling the porosity of ceramics (volume fraction, pore size, and pore geometry) makes it possible to vary their properties and to obtain products with desirable thermal and mechanical characteristics. It is necessary during the various manufacturing steps to control accurately the thermal cycle imposed on the materials (heating rate, holding time, sintering temperature, and cooling rate) [9]. The present study focuses primarily on the five compositions near the morphotropic phase boundary, in order to determine some structural, dielectric, and mechanical properties.
The Density
The study of density is necessary to determine the optimum sintering temperature. Figure 1 shows the variation of density as a function of sintering temperature; it is seen that the density increases up to 7.40 g/cm³, so the optimum sintering temperature is 1100 °C for A4 and A5, 1150 °C for A3, and 1180 °C for A1 and A2. The quality of the material increases with increasing density, and the density increases with increasing sintering temperature [10]. This may be explained by the effect of the doping percentage: it is 2% for A1, 4% for A2, 10% for A3, 14% for A4, and 20% for A5, so the compositions differ considerably from one another. The optimum sintering temperature is influenced by several factors, such as the addition of impurities, the rate of sintering, the holding time, and the composition of the protective atmosphere, as it is achieved when a certain equilibrium is established [11].
The Porosity
The porosity is calculated from the measured and theoretical densities. Figure 2 shows the variation of porosity with the sintering temperature: the porosity decreases as the sintering temperature increases, reaching a minimum where the density reaches its maximum.
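The porosity expression itself did not survive extraction; a standard definition consistent with the surrounding text, with $d$ the measured bulk density and $d_{\mathrm{th}}$ the theoretical density, would be

$$P = 1 - \frac{d}{d_{\mathrm{th}}},$$

so that the porosity falls as the sintered density approaches the theoretical value.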
SEM Analysis
Figure 3 shows scanning electron micrographs of the specimens sintered at 1100 °C, 1150 °C and 1180 °C, respectively. From these images, it can be deduced that the decrease of porosity with increasing sintering temperature is due to a decrease in the number and size of the pores. Figure 4(a) describes the microstructural evolution: grain size increases with increasing sintering temperature. A uniform microstructure was obtained at 1150 °C and 1180 °C, where the average grain size was 12-14 µm. This was caused by the coexistence of the two phases in these materials. Figure 4(b) describes the microstructural evolution with the dopant percentage for the specimens sintered at 1150 °C: grain size increases with increasing dopant percentage.
Figure 1. Variation of density with the sintering temperature.
Figure 2. Variation of porosity with the sintering temperature.
Figure 4. (a) Grain size versus sintering temperature; (b) grain size evolution with the dopant percentage of the specimens sintered at 1150 °C.

X-ray diffraction was used to determine the lattice constants of the materials. The co-existence of tetragonal and rhombohedral phases near the morphotropic phase boundary implies the existence of compositional fluctuation. The compositional fluctuation can, in principle, be determined from the width of the X-ray diffraction peaks. A morphotropic phase boundary "co-existence region" was observed [shown by duplicated (200) peaks]. It has been reported in the literature that the splitting of these reflections into triplets takes place in conventionally prepared ceramics due to compositional fluctuations leading to the co-existence of the tetragonal and rhombohedral phases (T + R). The X-ray diffraction patterns of the Pb1-0.04Sm0.02Nd0.02[(Zr0.55,Ti0.45)1-2x, x(Y2/3,Mo1/3), x(Y2/3,Ni1/3)]O3 materials (x = 0.01, 0.03, 0.05, 0.07 and 0.1), represented by samples A1, A2, A3, A4 and A5, are given in Figure 5. Triplet peaks around 2θ = 45° indicate that the specimen consists of a mixture of tetragonal and rhombohedral phases.
|
2018-05-07T14:13:16.721Z
|
2013-09-30T00:00:00.000
|
{
"year": 2013,
"sha1": "7c811c88e6e2ad11ca3a3848dcfd34eec15a6d95",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=37637",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7c811c88e6e2ad11ca3a3848dcfd34eec15a6d95",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
118416224
|
pes2o/s2orc
|
v3-fos-license
|
Magnetic braking in convective stars
Magnetic braking causes the spin-down of single stars as they evolve on the main sequence. Models of magnetic braking can also explain the evolution of close binary systems, including cataclysmic variables. The well-known period gap in the orbital period distribution of cataclysmic variable systems indicates that magnetic braking must be significantly disrupted in secondaries that are fully convective. However, activity studies show that fully convective stars are some of the most active stars observed in young open clusters. There is therefore conflicting evidence about what happens to magnetic activity in fully convective stars. Results from spectro-polarimetric studies of cool stars have found that the field morphologies and field strengths are dependent on spectral type and rotation rate. While rapidly rotating stars with radiative cores show strong, complex magnetic fields, they have relatively weak dipole components. Fully convective stars that are rapidly rotating also possess strong magnetic fields, but their configurations are much simpler, often close to dipole fields. How this change in field geometry affects the stellar wind is the focus of several ongoing modelling efforts. Initial results suggest that rapidly rotating active dwarfs drive much stronger winds, about two orders of magnitude larger than those on the Sun.
Introduction
The idea of stellar magnetic fields driving angular momentum loss can be dated back to the 1962 paper by Evry Schatzman. This paper brought together the ideas of the day to describe how the Hertzsprung-Russell diagram can be split into distinct sections. Stars with slow rotation rates are in the lower right part of the diagram -they are predominantly stars with outer convective envelopes. As convective stars host solar-type dynamo activity, stellar magnetic fields keep material in the extended magnetosphere in corotation, thus exerting a braking torque and driving the spin-down of a single star on the zero age main sequence.
Magnetic braking should therefore operate in all systems with low mass stars (0.4 < M* < 1.5 M⊙); i.e., stars with outer convective envelopes. Magnetic braking in the low mass secondary star of close binary systems is also responsible for determining binary separations. The secondary loses angular momentum, and angular momentum is then removed from the binary system through tidal locking, causing the binary separation to decrease and the system to evolve further (Mestel 1968).

Figure 1. A schematic diagram of the wind model of Mestel and Spruit (1987). Here C marks a position inside the "dead zone" while the Alfvén surface, S_A, denotes the start of the "wind zone" where the field is radial. This figure has been reproduced from Campbell (1997) with permission.
The first observational evidence for magnetic braking was gathered from a rotation study of G dwarfs in open clusters of different ages (e.g., Pleiades, Hyades) and in the field. The seminal paper by Skumanich (1972) found that these stars spin down following the inverse square root of their age, v ∝ t^(-1/2). This braking law was adopted to derive magnetic braking laws for close binary stars with cool secondaries, assuming that secondary stars have a comparable mass loss rate to that of single G stars (Verbunt & Zwaan 1981). However, it is worth noting that the Skumanich results are based only on G stars with v_e sin i values up to 30 km/s. Close binary stars can far exceed these velocities.
This early work led to the development of a variety of angular momentum loss formulations. Initially, braking laws were developed assuming symmetric winds flowing along purely radial field lines (Weber & Davis 1967; Mestel 1968). Later formulations of braking laws allow for a more complex two-component coronal structure with an inner closed region and an outer region in which field lines are open (Figure 1; Mestel 1984; Kawaler 1988; Mestel and Spruit 1987; Tout and Pringle 1992; Ivanova and Taam 2003). Solar eclipse images provide support for this large-scale coronal model as a first approximation. In these later models, the star has a large-scale dipole field that causes a "dead zone" near the equator in which field lines are closed. As matter is trapped in this zone, it cannot escape or contribute to the angular momentum loss, and it reduces the efficiency of the magnetic braking. The hot expanding corona is driven by thermal pressure gradients and centrifugal acceleration and causes the formation of an outer zone (the "wind zone") where the field is open.
The field of the star gets distorted where the kinetic energy density of the outflowing material matches the poloidal magnetic energy density; field lines are blown open into the flow where the poloidal velocity matches the Alfvén speed.
This can be calculated using the poloidal magnetic field, B_pol, and the mass density, ρ, as the Alfvén speed

$$v_{\mathrm{A}} = \frac{B_{\mathrm{pol}}}{\sqrt{4\pi\rho}} \quad \text{(cgs units)}.$$

Other variations are possible: Tout and Pringle (1992) propose a stellar field that declines more strongly with distance due to more complex stellar fields; they also assume that not all open field lines connect with the whole stellar surface. Ivanova and Taam (2003) assume that the X-ray luminosity of the secondary star is generated in the dead zone and therefore use X-ray observations of stars to model the volume of the dead zone. With just this modification they find that their magnetic braking prescription can reproduce the observed rotation rates over a range of masses. It is clear that the field configuration is important in determining basic properties of the dead zone and where the wind zone starts. X-ray measurements alone cannot reveal the distribution of the underlying field geometry.
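As a quick numerical illustration (our own sketch, not from the paper; cgs units, and a pure-hydrogen corona is assumed):

```python
import math

def alfven_speed_cgs(b_pol_gauss, rho_g_cm3):
    """Alfven speed v_A = B_pol / sqrt(4 * pi * rho), in cm/s (cgs units)."""
    return b_pol_gauss / math.sqrt(4.0 * math.pi * rho_g_cm3)

# Example: a 1 G poloidal field in a corona with n ~ 1e8 protons per cm^3
m_p = 1.6726e-24          # proton mass in g
rho = 1e8 * m_p           # mass density in g/cm^3
print(f"v_A ~ {alfven_speed_cgs(1.0, rho) / 1e5:.0f} km/s")  # ~200 km/s
```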
There are numerous published angular momentum braking laws; these all rely on a series of assumptions that have been treated differently. Knigge et al. (2011, this volume) demonstrate most effectively how these laws differ, and in fact can predict opposite trends with orbital period and stellar mass. We clearly need to understand more about the nature of stellar magnetic braking from observations of convective stars.
Rotational evolution of stars: observations
Stellar winds (or outflows) are very difficult to detect directly in main sequence cool stars. On the Sun the wind causes a mass-loss rate of Ṁ = 2×10^-14 M⊙ yr^-1 (e.g., Feldman et al. 1977). However, its low density and high temperature make it difficult to detect. Direct measurements of outflows on other main sequence stars are even more challenging.
Indirect measurements can be made through observations of the interaction between the stellar wind and the local interstellar medium. This is detected as extra Lyα absorption in UV spectra from the Hubble Space Telescope, HST (Wood et al. 2002, 2005). HST studies of a handful of systems reveal that mass loss rates should scale with magnetic activity. However, the scaling relation depends on very few systems, some of which are binaries and therefore not well understood.
Studies of close binary systems and cool stars have been conducted to characterise mass loss and angular momentum loss rates further. These are described below. While the techniques differ a coherent picture is starting to emerge.
Magnetic braking in cataclysmic variable systems (CVs)
Magnetic braking has been shown to explain the evolution of close binary systems. All close binary systems lose angular momentum through gravitational radiation. The angular momentum loss rate due to gravitational radiation should depend on the masses of the component stars, M_1 and M_2, and the orbital separation a, as follows (Paczyński 1967; Knigge et al. 2011):

$$\frac{\dot{J}_{\mathrm{GR}}}{J} = -\frac{32}{5}\,\frac{G^3}{c^5}\,\frac{M_1 M_2 (M_1 + M_2)}{a^4}.$$

Gravitational radiation clearly weakens in systems with large orbital separations. The observed mass transfer rates in long period cataclysmic variable binary and low mass X-ray binary systems can only be explained by another angular momentum loss mechanism (Verbunt & Zwaan 1981). This is attributed to magnetic braking, with the assumption that convective stars in close binary systems will show the same magnetic braking levels as those seen in single convective stars. Their magnetic braking law prescription depends on the secondary mass, M_2, radius, R_2, and rotation rate, Ω, scaling as

$$\dot{J}_{\mathrm{MB}} \propto -M_2 R_2^4 \Omega^3.$$

Figure 2. Orbital period distribution of cataclysmic variable binary systems. There are few systems with periods between 2-3 hours. This is attributed to a disruption in magnetic braking as stars become fully convective. Reproduced from Davis et al. (2008) with permission.

The orbital period distribution of CVs is shown in Figure 2 (Davis et al. 2008). This has a largely bimodal distribution, with very few systems in the so-called "gap" between orbital periods of 2-3 hours. In order to explain the accretion rates and sizes of the donor stars observed above the period gap, it is necessary to invoke magnetic braking. At orbital periods of 3 hours secondary stars become fully convective; presumably this disrupts the magnetic activity and essentially switches off the magnetically driven wind (outflow). As magnetic braking switches off, the donor shrinks within its Roche lobe, mass transfer is shut off, and the secondary re-attains thermal equilibrium. The system is then no longer observed as a cataclysmic variable, and subsequent orbital evolution of the CV is driven by gravitational radiation only. Mass transfer resumes when the secondary star makes contact with its Roche lobe once more at P_orb ∼ 2 h.
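To make the scalings concrete, a small script (our own illustration; cgs constants and representative CV parameters, not values from the paper) evaluates the gravitational-radiation braking timescale below the gap:

```python
import math

G = 6.674e-8        # gravitational constant, cgs
c = 2.998e10        # speed of light, cm/s
M_SUN = 1.989e33    # solar mass, g

def j_gr_decay_rate(m1_msun, m2_msun, a_cm):
    """Fractional angular momentum loss rate |J_dot / J| from gravitational
    radiation: (32/5) G^3 M1 M2 (M1 + M2) / (c^5 a^4), in s^-1."""
    m1, m2 = m1_msun * M_SUN, m2_msun * M_SUN
    return (32.0 / 5.0) * G**3 * m1 * m2 * (m1 + m2) / (c**5 * a_cm**4)

def separation_from_period(m_tot_msun, p_orb_hr):
    """Kepler's third law: a = (G M P^2 / 4 pi^2)^(1/3), in cm."""
    p = p_orb_hr * 3600.0
    return (G * m_tot_msun * M_SUN * p**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

# A CV below the period gap: M1 = 0.8, M2 = 0.15 Msun, P_orb = 1.5 h
a = separation_from_period(0.95, 1.5)
tau_gyr = 1.0 / j_gr_decay_rate(0.8, 0.15, a) / 3.156e16
print(f"J / |J_dot| ~ {tau_gyr:.1f} Gyr")  # ~a few Gyr: slow, but sufficient
```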
An important proof of this scenario is provided by Patterson et al. (2005). They find that the masses of the donor stars above and below the period gap are very similar, which is expected if mass transfer is reduced so that the masses remain largely unchanged. In line with the magnetic braking scenario, donor stars above the period gap are more inflated than those below the period gap. New work measuring the sizes of the donor stars further supports these findings: Knigge et al. (2011) fit the sizes of the secondaries using a parametrised version of the Verbunt & Zwaan braking law (Rappaport et al. 1983). They find that gravitational radiation losses alone are not sufficient to account for the star sizes below the period gap, with twice the level of gravitational radiation-driven braking required to explain the evolution below the period gap.
Magnetic braking in post common envelope binary systems (PCEBs)
Studies of PCEBs can shed light on the nature of magnetic braking. If magnetic braking changes with secondary mass, the timescales for the onset of accretion in close binaries should be affected. Politano & Weiler (2006) use a Monte Carlo population synthesis code to investigate the relative distribution of PCEBS with low mass and high mass secondaries assuming different magnetic braking formulations.
They find the most noticeable effect when magnetic braking is completely disrupted in fully convective stars; this causes a significant decline in PCEB secondaries with radiative cores. The relative number of PCEBs declines by 38% in the mass bin at which magnetic braking is switched on (M 2 > 0.37 M ⊙ ). Intermediate braking prescriptions are also investigated, in which the magnetic braking is reduced in rapidly rotating systems or the most X-ray active stars. However, for both of the intermediate braking cases they find that the numbers of low mass and high mass secondaries remains similar. The fourth case investigated assumes no magnetic braking, this finds a relative increase in the number of PCEBs with increasing secondary mass.
A large survey of white dwarf-main sequence (WDMS) binaries using the SDSS finds a decline of about 80% in the relative fraction of PCEBs at masses greater than M 2 > 0.37 M ⊙ (Schreiber et al. 2010). However, the simulation above predicts the relative number of PCEBs not the fraction of PCEB/WDMS systems: the number of WDMS binary systems should increase with secondary mass. Taking this into account, Politano & Weiler's predictions translate to a decrease of between 38-73% in the fraction of PCEB/WDMS systems at higher masses. Schreiber et al. (2010) caution that, while the decrease they observe with increasing secondary star mass is in general agreement with the predictions from the disrupted magnetic braking model, the observed distribution is broader than that predicted. A contributing factor may be the uncertainty in the spectral type determination of these systems, which would effectively broaden an initially steep function. Alternatively, the onset of the disruption of magnetic braking may occur more gradually rather than at one mass.
Magnetic activity and braking in single stars
Since Skumanich's 1972 study of G main sequence stars, we now have a wealth of information tracking the angular momentum properties of convective stars over a range of spectral types (i.e., masses) and ages. Barnes (2003) collates rotation periods from open cluster studies and finds that the braking timescales depend strongly on a star's spectral type, age, and, crucially, the type of magnetic activity behaviour displayed by that star.
He finds that stars ostensibly fall into two activity tracks: the more slowly rotating stars lie on the interface track, I, so called as they show signs of classic interface (solar-type) dynamo activity. They spin down with age efficiently, following a modified version of the Skumanich law. The other track is called the convective track, C: this tends to contain the more magnetically active, rapidly rotating stars and shows a reduced braking efficiency. The expressions governing the spin-down rates of single stars on these two tracks, where k_C and k_I are constants and τ_c is the convective turnover timescale, are approximately (Barnes & Kim 2010)

$$\frac{dP}{dt} \approx \frac{\tau_c}{k_C P} \;\; \text{(C track)}, \qquad \frac{dP}{dt} \approx \frac{k_I P}{\tau_c} \;\; \text{(I track)}.$$

Stars move from the C track to the I track as they age, with lower mass stars taking longer to make this transition. Almost all G stars will have made this transition by the first 200 Myr on the main sequence, while M stars can take over 500 Myr.
Key questions
Our current understanding of angular momentum evolution comes predominantly from statistical studies of cool star systems. The root cause of the magnetic braking mechanism is of course the stellar magnetic field. We can learn about the properties of stellar magnetic fields in more detail by studying proxies of magnetic activity such as X-ray emission. More recently, tomographic techniques have revealed even more detailed information about the distribution and characteristics of magnetic fields at the surfaces of stars where they first emerge. In the rest of this paper we address the following questions. These have been posed by earlier studies and serve to place our understanding of the root causes of magnetic braking and stellar winds on a solid footing.
1. What is the dependence of stellar magnetic fields on spectral type, age, rotation and binarity?
2. What happens to magnetic fields in fully convective stars?
3. Do magnetic fields look similar in single stars and in their binary star counterparts?
4. How does the stellar wind depend on the stellar magnetic field?
Magnetic fields in cool stars
Magnetic braking laws that have been developed for convective stars rely on simple prescriptions for the stellar magnetic field, which has been variously modelled as a dipole or even as a monopole in some early cases. The magnetic activity state of a star is often characterised using a magnetic activity proxy, e.g., X-ray and Ca II H&K emission. These are measures of the magnetic heating in the outer atmospheres of cool stars and, therefore, indirect measures of the magnetic flux threading through the stellar atmosphere. Numerous X-ray studies of open clusters have revealed that X-ray luminosity, and therefore magnetic activity levels, are strongly dependent on rotation rate, with X-ray emission generally increasing with increasing rotation. A tighter correlation is found with the Rossby number, Ro = P_rot/τ_c, where P_rot is the stellar rotation period and τ_c is the convective turnover timescale; τ_c is a theoretical quantity that increases with increasing convection zone depth.
X-ray and Ca II activity studies have found that the dynamo does not in fact switch off at full convection, as predicted by studies of the evolution of close binary systems. Indeed, fully convective stars have similar fractional X-ray luminosities (L_X/L_bol) to active G and K stars (Figure 4; Pizzolato et al. 2003; James et al. 2000; Jeffries et al. 2011). Furthermore, there is still considerable uncertainty regarding exactly how these measures of magnetic heating relate to the underlying magnetic field. While these diagnostics may be sensitive to the magnetic energy levels in stars, they cannot reveal the underlying magnetic geometry and therefore the conditions that drive stellar winds. How magnetic flux is distributed in these stars is central to understanding the conditions driving stellar winds.
Spot maps
The technique of Doppler imaging has been used to image the surfaces of over 80 convective stars since it was first introduced in 1987 by Vogt, Penrod & Hatzes (see review by Strassmeier 2009). It can only be applied to rapidly rotating stars with v e sin i > 15km/s; these stars are some of the most magnetically active, displaying X-ray luminosities up to two orders of magnitude greater than that on the Sun. The dark starspots that are reconstructed are analogous to sunspots, which mark the largest concentrations of magnetic flux at the solar surface.
Surface spot maps show that stars with similar spectral types and rotation rates have similar spot patterns, suggesting that they not only have similar levels of activity but also similar magnetic flux emergence patterns. Figure 5 shows Doppler maps from four K1-2 main sequence stars with similar activity levels -all of these maps show polar/high latitude spots co-existing with low latitude spots. Most G and K dwarfs tend to possess high latitude spots, with many showing large spots that cover their poles -also known as polar caps (e.g., Donati & Collier Cameron 1997, Barnes et al. 2005, Jeffers et al. 2011). G and K stars often have a mixture of both high latitude/polar spots and low latitude spots. In early M dwarfs, that are not fully convective the starspot patterns change, with little evidence for polar spots (Barnes et al. 2004).
The K2 secondary in the post common envelope binary, V471 Tau, is of particular interest. It is instructive to compare this map with those of single stars with similar masses and rotation rates, such as AB Dor in Figure 5. As V471 Tau has an inclination angle of nearly 90 • the low latitude spots cannot be reconstructed accurately and are smeared out due to a mirroring effect between the northern and southern hemispheres of the star. Despite this V471 Tau's spot maps suggest that there are shorter lived low latitude spots co-existing with the polar cap. This spot pattern is typical of other active K dwarfs and suggests that binarity and tidal locking do not fundamentally change the magnetic field generation mechanism in stars (Hussain et al. 2006). Our study of the tidally locked binary system, HD 155555 (G5 + K0), found spot and magnetic field patterns in the binary component stars that are indistinguishable from those of single stars with similar spectral types (Dunstone et al. 2008). Figure 5. Surface spots on four rapidly rotating K stars with similar magnetic activity levels (Hussain et al., 2000, Rice & Strassmeier 1998). The rotation periods and names of all four stars are shown as captions. These images are snapshots of the stellar surface at a selected phase.
It is possible to map the surfaces of the secondaries in CV systems using the technique of Roche tomography (Watson & Dhillon 2001), which uses similar principles to those in Doppler imaging techniques. Maps of AE Aqr, BV Cen, and over eight other systems have now been published; they show a mixture of high and low latitude spots, though it is not clear if these stars host polar spots due to the often low contrast reconstructions. This is a particularly challenging technique, as strong irradiation patterns across CV secondary surfaces can dilute the effects of starspots (e.g., QQ Vul; Watson et al. 2003). Furthermore, as CVs have short orbital periods and the secondaries are faint, the need for high S/N spectra in short exposure times (to limit phase smearing) limits the number of systems that can be studied in this way with current facilities.
Magnetic field maps
With the advent of high resolution, high throughput spectro-polarimeters (e.g., CFHT-ESPADONS, ESO 3.6m-HARPSpol) we can now detect stellar magnetic fields directly using circular spectro-polarimetry. Time-series of high resolution circularly polarised spectra are inverted to produce surface magnetic field distributions using Zeeman Doppler imaging (Semel 1989; Donati & Brown 1997). This technique applies Doppler imaging principles to high resolution circularly polarised profiles. As circularly polarised spectra are sensitive to the line-of-sight component of the stellar magnetic field, the technique enables us to measure the size of the magnetic field as well as reconstruct its geometry and distribution across the stellar surface. Over 30 convective stars have been imaged using Zeeman Doppler imaging, covering a range of spectral types, rotation rates, and evolutionary states. A summary of recent results was presented in the review by Donati & Landstreet (2009; see their Figure 3). Studies show two clear transitions in magnetic activity characteristics. The first happens with rotation rate: slowly rotating G and K-type stars possess simple, mainly axisymmetric poloidal fields. In more rapidly rotating stars, the field strengths increase and surface azimuthal fields (horizontal, East-West oriented fields) strengthen. The second transition in magnetic activity occurs in fully convective stars, as described below.
The transition to full convection
Magnetic field maps of M dwarfs show a marked change with mass. While high mass M stars look similar to G and K-type stars, fully convective M stars (M* ≤ 0.4 M⊙) have predominantly poloidal field topologies that are more axisymmetric; they also have stronger fluxes than their higher mass counterparts (Figure 6; Donati et al. 2008). This ties in with X-ray studies, which find no significant drop in the X-ray luminosity of fully convective stars (Figure 4).
A more complete picture of the magnetic field properties of convective stars is obtained by combining the field topologies from magnetic maps with the mean magnetic fluxes measured from intensity spectra. Because circularly polarised spectra are only sensitive to the line-of-sight component of the magnetic field, multiple switches in polarity across the surface cause the circularly polarised signature to be diluted due to flux cancellation. Measurements of Zeeman broadening in the intensity profiles of magnetically sensitive lines make it possible to measure the mean magnetic flux at the surface of a star (e.g., Reiners & Basri 2009).

Figure 7 (Reiners & Basri 2009, adapted with permission). Left: Mean magnetic fluxes recovered from intensity diagnostics (crosses) and circularly polarised profiles (triangles). Circular polarisation recovers less flux overall than intensity; as circular polarisation is sensitive to the line-of-sight component of the stellar magnetic field, complex topologies will lead to flux cancellation. The mean magnetic fluxes from intensity, <B_int> (crosses), do not change significantly with stellar mass. The dashed line denotes the fully convective boundary. Right: The fractional flux from circularly polarised profiles increases in fully convective stars, as their simpler magnetic field topologies result in reduced flux cancellation.

Reiners & Basri (2009) compare their measurements of mean magnetic flux for stars covering a range of masses, 0.31 ≤ M* ≤ 0.75 M⊙. Their results are summarised in Figure 7. They find that the mean magnetic flux <B_int> is unaffected by the transition to full convection. However, the size of the magnetic flux recovered from circularly polarised spectra (<B_circ> in Figure 7) rises. This suggests that the field polarities must be simpler and less prone to flux cancellation at lower masses, thus supporting the results from Zeeman Doppler imaging studies.
Wind models for cool stars
Surface magnetic field maps can be extrapolated to produce detailed models of a star's magnetosphere. The surface field maps are used to define the locations of footpoints of fields that extend in to the star's corona and beyond (Hussain et al. 2002). A co-ordinated X-ray and Zeeman Doppler imaging study of the K star, AB Dor, showed that the coronal model created by extrapolating the surface magnetic field map could reproduce the level of rotational modulation observed in the contemporaneous X-ray lightcurves and spectra (Hussain et al. 2007).
Recent studies have shown how similar maps can be used to model stellar winds in detail using magnetohydrodynamic codes that were originally developed for the Sun (Cohen et al. 2010; Vidotto et al. 2011). These studies use the BATS-R-US code and require the surface magnetic fluxes as an input, assuming a potential field initially. Further inputs are the star's parameters and values for the base coronal density, ρ_0, and temperature, T_0. The code assumes a thermal wind and allows the wind to evolve and interact with the magnetic field in a self-consistent way until a steady-state wind solution is reached. The interaction between the coronal density structure and the speed of the wind determines the angular momentum loss rate (J̇) and the mass loss rate (Ṁ) for the star.

Figure 8 (Cohen et al. 2010, with permission). Spinning up AB Dor from P_rot = 25 d (top left) to the actual P_rot = 0.5 d results in greater field tangling. Bottom: The effect of increased base coronal density from left to right, n_0 = 2×10^8, 10^9 and 10^10 cm^-3. The iso-surface of n = 10^8 cm^-3 is shown in green and the colour scale represents the density (10^6 to 8×10^8 cm^-3).
We find that wind models of the rapidly rotating K0 star AB Dor (P_rot = 0.5 d, T_eff = 5000 K) indicate significantly higher mass loss rates than those seen on the present-day Sun (Cohen et al. 2010). AB Dor's surface maps show a complex field distribution, with field strengths of over 700 G, much of which is concentrated at higher latitudes than on the Sun. These models require the base coronal density as an input. This is inherently uncertain, even though it can be estimated from X-ray coronal diagnostics, which are likely somewhat higher than the density at the base of the wind. We use values ranging between n_0 = 2×10^8 and 10^10 cm^-3 to investigate the dependence of the star on this parameter. Our simulations show that two factors affect the loss rates of the stars (Figure 8): a) Rotation rate: increasing AB Dor's rotation period by a factor of 50 (from P_rot = 0.5 d to 25 d) reduces the loss rates by up to J̇_0.5/J̇_25 ∼ 70 and Ṁ_0.5/Ṁ_25 ∼ 10. AB Dor's maps show strong high latitude flux near the pole of the star. This, combined with rapid rotation, results in the tangling of field lines; thus more of the corona is closed, leading to larger loss rates than in slower rotators with similar magnetic field distributions. If the star is spun down, more of the field becomes open and the loss rates are reduced considerably. b) Coronal base density: an increase of a factor of 50 (n_0 = 2×10^8 to 10^10 cm^-3) increases the loss rates by over an order of magnitude (J̇_{10^10}/J̇_{10^8} ∼ 10 and Ṁ_{10^10}/Ṁ_{10^8} ∼ 20). This is because greater mass at the base increases the mass flux through the closed "dead zone"; this decreases the density gradient with height, which effectively leads to a greater torque on the rotating star.
An analogous study of the mid-M star V374 Peg (M4, P_rot = 0.44 d) finds similarly large loss rates (Vidotto et al. 2011). V374 Peg's surface field is quite similar to that of EQ Peg A (Figure 6): it has a simple, largely dipolar field, with a strength of 1660 G (compared to only 1-2 G on the Sun). While the mass and angular momentum loss rates are similar to those predicted for AB Dor, the reasons are different. The models for V374 Peg show much faster winds than on AB Dor, with relatively little field tangling despite the rapid rotation, presumably due to the simple dipolar field in V374 Peg. As with the AB Dor study, Vidotto et al. also find that the braking rate strongly depends on the coronal base density, which is not well defined.
These first detailed wind modelling studies pose some interesting questions as they find that braking is more efficient in rapidly rotating stars, including the simpler M dwarf fields. So how do we explain the observations, which suggest that, in fully convective M stars, the braking must become less efficient? Future studies will reveal much more about how mass loss and angular momentum loss rates change with magnetic topology. The wind modelling techniques should first be fine-tuned against the few observable quantities such as the mass loss measurements made by Wood et al. (2005) in a handful of systems.
Summary
This review provides an overview of angular momentum evolution in both binary and single star systems. A wealth of observations suggest that magnetic braking is likely to operate at all spectral types. However the form of the magnetic braking changes with spectral type, rotation rate and activity state. The fundamental processes controlling the magnetic braking efficiency have, as yet, to be firmly established. Surface imaging studies of activity in cool stars have revealed a promising avenue and go some way towards answering the key questions we pose in this paper.
1. What is the dependence of stellar magnetic fields on spectral type, age, and rotation? Clear changes are seen with spectral type and rotation rate. Magnetic field maps of G-M-type stars confirm that the magnetic field topology in active rapidly rotating G and K-type stars differs compared to less active slowly rotating counterparts. Rapidly rotating stars show stronger, more complex, fields; with strong flux typically at high latitudes. Slowly rotating stars have more axisymmetric, simpler dipolar fields with much weaker field strengths.
2. What happens to magnetic fields in fully convective stars? Magnetic field studies find a transition in M stars near the fully convective boundary (M* ≤ 0.4 M⊙): they possess simpler, more axisymmetric dipolar fields of the type seen in slowly rotating stars, but with field strengths up to three orders of magnitude larger. This transition agrees well with where the radiative core becomes negligibly small and convective turnover timescales are expected to increase. These results are consistent with X-ray and Ca II studies, which cannot detect changes in magnetic topology and find that the magnetic heating in the upper atmospheres of these stars does not switch off or decrease at full convection.
How the change in field topology affects the properties of the stellar wind has yet to be established and is the focus of intensive modelling efforts. Early results from spectro-polarimetric studies of very low mass stars suggest that the field topologies change again below masses of 0.15 M⊙: the magnetic field becomes weaker and more complex. However, this requires further investigation, as it is based on a small sample and one exception, WX UMa, has been found.
3. Do magnetic fields look similar in single stars and in their binary star counterparts? Spot and magnetic field maps of the tidally locked main sequence secondaries in binary systems, V471 Tau and HD 155555, look similar to those of single G and K-type stars, with strong flux at both high and low latitudes. Images of secondaries in CVs also suggest a mix of high and low latitude spots. However, the tomography of CV secondaries is very challenging as the spot signatures are difficult to resolve in contrast to irradiation patterns. The direct comparison with main sequence stars of similar masses has yet to be done systematically.
4. How does the stellar wind depend on the stellar magnetic field? Initial results suggest that rapidly rotating active dwarfs drive stronger winds than those in more slowly rotating systems. This ties in well with observed mass loss rates. However, further work is needed to establish how changes in coronal topology directly affect the angular momentum and mass loss rates in stars. Inputs that are used in these models also should be refined: for example magnetic field maps can be enhanced to account for missing flux from dark polar spots; the base coronal densities also need to be constrained.
Binary evolution studies have established that there is a significant disruption to magnetic braking when stars become fully convective. Ostensibly this transition is congruent with the point at which magnetic field maps show a switch from complex multipolar fields to simple dipolar fields. How strongly this transition should affect magnetic braking has yet to be understood. Rotation studies of single stars indicate that active stars spin down on slower timescales than their inactive counterparts regardless of mass (Barnes 2003; Barnes & Kim 2010); furthermore, the braking observed in single stars is weaker than that needed to explain the properties of CV secondaries above and below the period gap (Knigge et al. 2011).
Future avenues of investigation need to address the following points: a) modelling braking in moderately active stars by extrapolating surface magnetic field maps as inputs and refining these models through comparison with the few measurements of loss rates in cool stars; b) once outflows/winds are better understood in single stars models of these winds can be used to investigate the evolution of close binary systems in more detail; c) establishing whether the spot patterns found in CV secondaries are analogous with their counterparts in PCEBs and single stars. Further spot and magnetic field maps from Roche tomography and Doppler imaging studies of close binary systems will prove invaluable to establish this latter point.
|
2012-02-23T01:55:14.000Z
|
2011-09-01T00:00:00.000
|
{
"year": 2012,
"sha1": "316e627357a66e0d94c3f755352f66e0eccf22dc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "316e627357a66e0d94c3f755352f66e0eccf22dc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
155092173
|
pes2o/s2orc
|
v3-fos-license
|
Dual Supervised Learning for Natural Language Understanding and Generation
Natural language understanding (NLU) and natural language generation (NLG) are both critical research topics in the NLP and dialogue fields. Natural language understanding is to extract the core semantic meaning from given utterances, while natural language generation is the opposite: its goal is to construct corresponding sentences based on the given semantics. However, this dual relationship has not been investigated in the literature. This paper proposes a novel learning framework for natural language understanding and generation on top of dual supervised learning, providing a way to exploit the duality. The preliminary experiments show that the proposed approach boosts the performance of both tasks, demonstrating the effectiveness of the dual relationship.
Introduction
Spoken dialogue systems that can help users solve complex tasks such as booking a movie ticket have become an emerging research topic in artificial intelligence and natural language processing. With a well-designed dialogue system as an intelligent personal assistant, people can accomplish certain tasks more easily via natural language interactions. The recent advance of deep learning has inspired many applications of neural dialogue systems (Wen et al., 2017; Bordes et al., 2017; Dhingra et al., 2017; Li et al., 2017). A typical dialogue system pipeline can be divided into several parts: 1) a speech recognizer that transcribes a user's speech input into texts, 2) a natural language understanding module (NLU) that classifies the domain and associated intents and fills slots to form a semantic frame (Chi et al., 2017; Chen et al., 2017; Zhang et al., 2018; Su et al., 2018c, 2019), 3) a dialogue state tracker (DST) that predicts the current dialogue state in the multi-turn conversations, 4) a dialogue policy that determines the system action for the next step given the current state (Peng et al., 2018; Su et al., 2018a), and 5) a natural language generator (NLG) that outputs a response given the action semantic frame (Wen et al., 2015; Su et al., 2018b; Su and Chen, 2018).

Figure 1: NLU and NLG emerge as a dual form. The figure pairs the semantic frame RESTAURANT="McDonald's", PRICE="cheap", LOCATION="nearby the station" with the utterance "McDonald's is a cheap restaurant nearby the station."

Many artificial intelligence tasks come with a dual form; that is, we could directly swap the input and the target of a task to formulate another task. Machine translation is a classic example (Wu et al., 2016); for example, translating from English to Chinese has a dual task of translating from Chinese to English; automatic speech recognition (ASR) and text-to-speech (TTS) also have structural duality (Tjandra et al., 2017). Previous work first exploited the duality of the task pairs and proposed supervised (Xia et al., 2017) and unsupervised (reinforcement learning) (He et al., 2016) training schemes. Recent studies magnified the importance of the duality by boosting the performance of both tasks through its exploitation.
NLU is to extract core semantic concepts from given utterances, while the goal of NLG is to construct corresponding sentences based on given semantics. In other words, understanding and generating sentences form a dual problem pair, as shown in Figure 1. In this paper, we introduce a novel training framework for NLU and NLG based on dual supervised learning (Xia et al., 2017), which is the first attempt at exploiting the duality of NLU and NLG. The experiments show that the proposed approach improves the performance of both tasks.
Proposed Framework
This section first describes the problem formulation, and then introduces the core training algorithm along with the proposed methods of estimating data distribution.
Assuming that we have two spaces, the semantics space X and the natural language space Y, given n data pairs {(x_i, y_i)}_{i=1}^{n}, the goal of NLG is to generate corresponding utterances based on given semantics. In other words, the task is to learn a mapping function f(x; θ_{x→y}) to transform semantic representations into natural language. On the other hand, NLU is to capture the core meaning of utterances, finding a function g(y; θ_{y→x}) to predict semantic representations given natural language. A typical strategy for these optimization problems is maximum likelihood estimation (MLE) of the parameterized conditional distributions with the learnable parameters θ_{x→y} and θ_{y→x}.
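Written out explicitly (a standard MLE formulation consistent with the notation above, not verbatim from the paper):

$$\hat{\theta}_{x\to y} = \arg\max_{\theta} \sum_{i=1}^{n} \log P(y_i \mid x_i; \theta), \qquad \hat{\theta}_{y\to x} = \arg\max_{\theta} \sum_{i=1}^{n} \log P(x_i \mid y_i; \theta).$$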
Dual Supervised Learning
Considering the duality between the two tasks in dual problems, it is intuitive to bridge the bidirectional relationship from a probabilistic perspective. If the models of the two tasks are optimal, we have probabilistic duality:

$$P(x)\,P(y \mid x; \theta_{x\to y}) = P(y)\,P(x \mid y; \theta_{y\to x}) = P(x, y),$$

where P(x) and P(y) are the marginal distributions of the data. The condition reflects the parallel, bidirectional relationship between the two tasks in the dual problem. Although standard supervised learning with respect to a given loss function is a straightforward approach to MLE, it does not consider the relationship between the two tasks. Xia et al. (2017) exploited the duality of dual problems to introduce a new learning scheme, which explicitly imposes the empirical probability duality on the objective function. The training strategy is based on standard supervised learning and incorporates the probability duality constraint, hence so-called dual supervised learning. The training objective is therefore extended to a multi-objective optimization problem:

$$\min_{\theta_{x\to y}} \frac{1}{n}\sum_{i=1}^{n} l_1\big(f(x_i; \theta_{x\to y}), y_i\big), \quad \min_{\theta_{y\to x}} \frac{1}{n}\sum_{i=1}^{n} l_2\big(g(y_i; \theta_{y\to x}), x_i\big), \quad \text{s.t. } P(x)P(y \mid x; \theta_{x\to y}) = P(y)P(x \mid y; \theta_{y\to x}),$$

where l_1 and l_2 are the given loss functions. Such a constrained optimization problem can be solved by introducing Lagrange multipliers to incorporate the constraint:

$$\min_{\theta_{x\to y}} \frac{1}{n}\sum_{i=1}^{n} \Big[ l_1\big(f(x_i; \theta_{x\to y}), y_i\big) + \lambda_{x\to y}\, l_{\text{duality}} \Big], \quad \min_{\theta_{y\to x}} \frac{1}{n}\sum_{i=1}^{n} \Big[ l_2\big(g(y_i; \theta_{y\to x}), x_i\big) + \lambda_{y\to x}\, l_{\text{duality}} \Big],$$

where λ_{x→y} and λ_{y→x} are the Lagrange parameters and the constraint term is formulated as follows:

$$l_{\text{duality}} = \big( \log\hat{P}(x) + \log P(y \mid x; \theta_{x\to y}) - \log\hat{P}(y) - \log P(x \mid y; \theta_{y\to x}) \big)^2.$$

Now the entire objective can be viewed as standard supervised learning with an additional regularization term considering the duality between the tasks. Therefore, the learning scheme is to learn the models by minimizing the weighted combination of the original loss term and the regularization term. Note that the true marginal distributions of the data, P(x) and P(y), are often intractable, so here we replace them with the approximated empirical marginal distributions P̂(x) and P̂(y).
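As a concrete illustration, the regularization term can be computed directly from the four log-probabilities; the following PyTorch sketch (variable names and values are ours, not from the paper) shows one way to wire it up:

```python
import torch

def duality_regularizer(log_px, log_py, log_py_given_x, log_px_given_y):
    """Squared gap between the two factorizations of log P(x, y):
    (log P_hat(x) + log P(y|x) - log P_hat(y) - log P(x|y))^2, batch-averaged.
    Added (with weights lambda_xy, lambda_yx) to both supervised losses."""
    gap = (log_px + log_py_given_x) - (log_py + log_px_given_y)
    return (gap ** 2).mean()

# Toy batch of pre-computed per-example log-probabilities:
log_px = torch.tensor([-12.3, -9.8])           # empirical P_hat(x), e.g. from MADE
log_py = torch.tensor([-35.1, -28.4])          # empirical P_hat(y), e.g. from an RNN LM
log_py_given_x = torch.tensor([-30.2, -25.0])  # NLG model output
log_px_given_y = torch.tensor([-8.1, -7.5])    # NLU model output

reg = duality_regularizer(log_px, log_py, log_py_given_x, log_px_given_y)
# loss_nlg = l1 + lambda_xy * reg; loss_nlu = l2 + lambda_yx * reg
print(reg.item())
```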
Distribution Estimation as Autoregression
With the above formulation, the current problem is how to estimate the empirical marginal distribution P̂(·). To estimate the data distribution accurately, the data properties should be considered, because different data types have different structural natures. For example, natural language has sequential structure and temporal dependencies, while other types of data may not. Therefore, we design a specific method of estimating the distribution for each data type based on expert knowledge.
From the probabilistic perspective, we can decompose any data distribution p(x) into the product of its nested conditional probabilities,

p(x) = ∏_{d} p(x_d | x_{<d}),     (1)

where x can be any data type and d is the index of a variable unit.
Language Modeling
Natural language has an intrinsic sequential nature; therefore it is intuitive to leverage the autoregressive property to learn a language model. In this work, we learn the language model based on recurrent neural networks (Mikolov et al., 2010; Sundermeyer et al., 2012) with the cross-entropy objective in an unsupervised manner:
p(y) = ∏_{i=1}^{L} p(y_i | y_{<i}),

where the y_i are words in the sentence y, and L is the sentence length.
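As an illustration, a minimal recurrent language model that scores log P(y) autoregressively might look as follows. This is a sketch in PyTorch; the class name, the choice of a GRU, and the layer sizes (embedding 50, hidden 200, mirroring the experimental settings reported below) are assumptions for illustration rather than the authors' released code.

import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    """Autoregressive LM used to approximate the empirical marginal log P(y)."""

    def __init__(self, vocab_size, emb_dim=50, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def log_prob(self, tokens):
        # tokens: (batch, L) integer ids; each word is predicted from its prefix.
        inputs, targets = tokens[:, :-1], tokens[:, 1:]
        hidden, _ = self.rnn(self.embed(inputs))
        logp = torch.log_softmax(self.out(hidden), dim=-1)
        token_logp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        return token_logp.sum(dim=-1)  # log P(y) = sum_i log P(y_i | y_<i)

Training such a model with the cross-entropy objective on the training utterances, and freezing it afterwards, yields the log P̂(y) term used in the duality regularizer.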
Masked Autoencoder
Even though the product rule in (1) enables us to decompose any probability distribution into a product of a sequence of conditional probabilities, how we decompose the distribution reflects a specific physical meaning. For example, language modeling outputs the probability distribution over the vocabulary space for the i-th word y_i by taking only the preceding word sequence y_{<i}. Natural language has an intrinsic sequential structure and temporal dependency, so modeling the joint distribution of words in a sequence by such an autoregressive property is logically reasonable. However, slot-value pairs in semantic frames do not have a single directional relationship between them; rather, they describe the same sentence in parallel, so treating a semantic frame as a sequence of slot-value pairs is not suitable. Furthermore, slot-value pairs are not independent, because the pairs in a semantic frame correspond to the same individual utterance. For example, an utterance describing French food would probably also describe a higher price range. Therefore, the correlation should be taken into account when estimating the joint distribution. Considering the above issues, to model the joint distribution of flat semantic frames, various dependencies between slot-value semantics should be leveraged. In this work, we propose to utilize a masked autoencoder for distribution estimation (MADE) (Germain et al., 2015). By zeroing certain connections, we can enforce the variable unit x_d to depend only on a specific set of variables, not necessarily on x_{<d}; eventually we can still obtain the marginal distribution by the product rule:

p(x) = ∏_{d=1}^{D} p(x_d | x_{S_d}),

where S_d is a specific set of variable units.
In practice, we elementwise-multiply each weight matrix by a binary mask matrix M to interrupt some connections, as illustrated in Figure 2. To impose the autoregressive property, we first assign each hidden unit k an integer m(k) ranging from 1 to the dimension of the data D − 1 inclusively; for the input and output layers, we assign each unit a number ranging from 1 to D exclusively. Then the binary mask matrices can be built as follows:

M^{W^l}_{k',k} = 1 if m^l(k') ≥ m^{l−1}(k), and 0 otherwise;  M^{V}_{d,k} = 1 if d > m^L(k), and 0 otherwise,

where l indicates the index of a hidden layer and L indicates that of the output layer. With the constructed mask matrices, the masked autoencoder is shown to be able to estimate the joint distribution as autoregression. Because there is no explicit rule specifying the exact dependencies between slot-value pairs in our data, we consider various dependencies by an ensemble of multiple decompositions, that is, by sampling different sets S_d.
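A sketch of how the binary masks could be constructed for a single-hidden-layer MADE is given below. The non-strict inequality for the hidden mask and the strict inequality for the output mask follow Germain et al. (2015); the function name and the fixed ordering of the input and output units are illustrative assumptions, and different decompositions (different sets S_d) can be obtained by permuting that ordering and resampling the hidden-unit numbers, which is one way to realize the ensemble of decompositions.

import numpy as np

def made_masks(D, hidden_size, rng=np.random.default_rng(0)):
    """Build binary masks for a one-hidden-layer MADE over D variable units."""
    m_in = np.arange(1, D + 1)                    # input units numbered 1..D
    m_hid = rng.integers(1, D, size=hidden_size)  # hidden units numbered in 1..D-1
    m_out = np.arange(1, D + 1)                   # output units numbered 1..D
    # A hidden unit may see input k only if its number is >= the input's number
    # (non-strict); output unit d may see a hidden unit only if d is strictly
    # larger (strict), so each output depends on a restricted set of inputs.
    mask_hidden = (m_hid[:, None] >= m_in[None, :]).astype(float)  # shape (H, D)
    mask_output = (m_out[:, None] > m_hid[None, :]).astype(float)  # shape (D, H)
    return mask_hidden, mask_output

The masks are then elementwise-multiplied with the corresponding weight matrices before each forward pass.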
Experiments
To evaluate the effectiveness of the proposed framework, we conduct experiments; the settings and the analysis of the results are described as follows.
Settings
The experiments are conducted on the benchmark E2E NLG challenge dataset (Novikova et al., 2017), which is a crowd-sourced dataset of 50k instances in the restaurant domain. Our models are trained on the official training set and verified on the official testing set. Each instance is a pair of a semantic frame, containing specific slots and corresponding values, and an associated natural language utterance with the given semantics.
The data preprocessing includes trimming punctuation marks, lemmatization, and turning all words into lowercase.
Although the original dataset is for NLG, where the goal is to generate sentences based on the given slot-value pairs, we further formulate an NLU task as predicting slot-value pairs based on the utterances, which is a multi-label classification problem. Each possible slot-value pair is treated as an individual label, and the total number of labels is 79. To evaluate the quality of the generated sequences regarding both precision and recall, the NLG evaluation metrics include BLEU and ROUGE (1, 2, L) scores with multiple references, while the F1 score is measured for the NLU results.
Model Details
The model architectures for NLG and NLU are a gated recurrent unit (GRU) (Cho et al., 2014) with two identical fully-connected layers at the two ends of the GRU. Thus the model is symmetric: it takes the semantic frame representation as the initial and final hidden states and the sentence as the sequential input.
In all experiments, we use mini-batch Adam as the optimizer with batches of 64 examples; 10 training epochs are performed without early stopping; the hidden size of the network layers is 200; and the word embeddings are of size 50 and trained in an end-to-end fashion.
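A rough sketch of such a symmetric GRU model is shown below in PyTorch, with one fully-connected layer mapping the semantic frame to the initial hidden state (used for NLG) and another mapping the final hidden state to slot-value logits (used for NLU). The class and method names, the vocabulary size, and the extra vocabulary projection are illustrative assumptions; only the optimizer (mini-batch Adam, batches of 64, 10 epochs), hidden size 200, and embedding size 50 are taken from the settings above.

import torch
import torch.nn as nn

class DualGRU(nn.Module):
    """Shared GRU whose two ends handle semantic frames and sentences (sketch)."""

    def __init__(self, vocab_size, n_labels=79, emb_dim=50, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.frame_to_hidden = nn.Linear(n_labels, hidden_dim)   # frame -> initial state
        self.hidden_to_frame = nn.Linear(hidden_dim, n_labels)   # final state -> labels
        self.hidden_to_vocab = nn.Linear(hidden_dim, vocab_size)

    def nlg_step(self, frame, tokens):
        # Decode words conditioned on the frame, injected as the initial hidden state.
        h0 = torch.tanh(self.frame_to_hidden(frame)).unsqueeze(0)
        outputs, _ = self.gru(self.embed(tokens), h0)
        return self.hidden_to_vocab(outputs)

    def nlu_step(self, tokens):
        # Read the sentence and predict slot-value labels from the final hidden state.
        _, h_last = self.gru(self.embed(tokens))
        return self.hidden_to_frame(h_last.squeeze(0))

model = DualGRU(vocab_size=2000)  # vocabulary size is a placeholder
optimizer = torch.optim.Adam(model.parameters())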
Results and Analysis
The experimental results are shown in Table 1, where each reported number is averaged over three runs. Row (a) is the baseline that trains NLU and NLG separately and independently, and rows (b)-(d) are the results of the proposed approach with different Lagrange parameters.
The proposed approach incorporates the probability duality into the objective as a regularization term. To examine its effectiveness, we control the intensity of regularization by adjusting the Lagrange parameters. The results (rows (b)-(d)) show that the proposed method outperforms the baseline on all automatic evaluation metrics. Furthermore, the performance improves more with stronger regularization (row (b)), demonstrating the importance of leveraging duality.
In this paper, we design methods for estimating the marginal distribution of the data in the NLG and NLU tasks: language modeling is utilized for sequential data (natural language utterances), while the masked autoencoder is used for flat representations (semantic frames). The proposed method for estimating the distribution of semantic frames considers complex and implicit dependencies between semantics by an ensemble of multiple decompositions of the joint distribution. In our experiments, the empirical marginal distribution is the average over the results from 10 different masks and orders; in other words, 10 types of dependencies are modeled. Row (e) can be viewed as an ablation test, where the marginal distribution of semantic frames is estimated by treating slot-value pairs as independent of each other and computing their statistics from the training set. The performance is worse than that of the models that capture the dependencies, demonstrating the importance of considering the nature of the input data and modeling the data distribution via the masked autoencoder.
We further analyze the understanding and generation results compared with the baseline model. In some cases, our NLU model extracts the semantics of utterances better and our NLG model generates sentences with richer information under the proposed learning scheme. In sum, the proposed approach is capable of improving the performance of both NLU and NLG on the benchmark data, where the exploitation of duality and the way of estimating the distribution are demonstrated to be important.
Conclusion
This paper proposes a novel training framework for natural language understanding and generation based on dual supervised learning, which for the first time exploits the duality between NLU and NLG and introduces it into the learning objective as a regularization term. Moreover, expert knowledge is incorporated to design suitable approaches for estimating the data distribution. The proposed methods demonstrate their effectiveness by boosting the performance of both tasks simultaneously in the benchmark experiments.
Figure 2: The illustration of the masked autoencoder for distribution estimation (MADE).
Table 1: The NLU performance reported as micro-F1 and the NLG performance reported as BLEU, ROUGE-1, ROUGE-2, and ROUGE-L of the models (%).
|
2019-05-16T00:58:40.000Z
|
2019-05-15T00:00:00.000
|
{
"year": 2019,
"sha1": "d833a5a729107ad1fbcd6d49f80f5677717bf048",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/P19-1545.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "3068c2a60b0ceaa2cd31786c0958ba29ccf0e4c6",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
598028
|
pes2o/s2orc
|
v3-fos-license
|
Molecular Cloning and Expression of EG95 Gene of Iranian Isolates of Echinococcus granulosus.
Background: Echinococcosis or hydatidosis is a chronic, zoonotic infection of worldwide distribution that is caused by the larval stages of taeniid cestodes of the genus Echinococcus. Iran is known as an endemic region for this infection. Vaccination has been considered a good prevention method for this disease. Recombinant vaccines containing the EG95 protein have shown a high degree of protection against E. granulosus infection. In this study, the EG95 gene was extracted from Iranian isolates of E. granulosus and then cloned and expressed in an expression vector. Methods: Protoscoleces were collected from sheep hydatid cysts. DNA and RNA were extracted from the protoscoleces and amplified by PCR and RT-PCR with specific primers. Afterward, the purified RT-PCR products were successfully ligated into the pTZ57R/T plasmid vector. The pcDNA3 plasmid was used as the expression vector, and the EG95 fragment was subcloned into this plasmid. The pcEG95 plasmid was digested with restriction enzymes to confirm the cloning of this gene into pcDNA3. In the last step, the subcloned gene was expressed in CHO cells as a eukaryotic cell line. Results: The EG95 fragment was successfully subcloned into pcDNA3, and the EG95 protein was expressed by the eukaryotic cells. The recombinant EG95 protein was confirmed by SDS-PAGE and Western blot. Conclusion: The recombinant plasmid pcEG95 was constructed successfully, and expression of the recombinant EG95 protein was confirmed.
Introduction

Echinococcosis or hydatidosis is a chronic and zoonotic infection of worldwide distribution that occurs because of infection by the larval stages of taeniid cestodes of the genus Echinococcus (1,2). Echinococcus granulosus is the most important cestode in human infection. It is assumed that 50 million people are at risk of acquiring the disease in Asia and Africa (2). Iran is known as an endemic region for the infection (3). Many epidemiological studies have shown that the seroprevalence of human hydatidosis in different regions of Iran is high, with infectivity between 1.2% and 21.4% (4). The minimum and maximum prevalence of hydatidosis in sheep, based on abattoir data, are reported as 5.1% (Kerman) and 74.4% (Ardabil), respectively (5,6). An effective vaccine against infection with E. granulosus would be valuable for hydatid control campaigns (7). Early attempts at inducing immunity in dogs, as definitive hosts, through vaccination were carried out in 1933 (8). Vaccination of sheep, as intermediate hosts, has been considered for prevention of the disease in recent decades (1). In many studies, live or killed parasites were used as antigens for the production of vaccines (9). Recombinant proteins such as EG95 are assumed to be effective for prevention of hydatidosis. EG95 has been a highly effective sheep vaccine in New Zealand, Australia, Argentina (10), and Iran (11), and the vaccine may prove to be a useful tool in the control of hydatid disease in areas of endemicity (9). Lightowlers et al. used this protein for vaccination of sheep against experimental challenge with E. granulosus; the vaccine showed a high degree of protection against this parasite (96-98%) (7). In another study, vaccination with EG95 obtained from three different parasite isolates conferred a high degree of protection against challenge in sheep (protection range 96-100%) (10).
Since there was no study in Iran regarding the gene coding for the EG95 protein, we aimed to extract the EG95 gene from Iranian isolates of E. granulosus and to clone and express it in an appropriate expression vector.
Collection of samples
Sheep hydatid cysts were collected from a slaughterhouse in Sari City, Mazandaran Province, Iran, where the incidence of the infection is high. Protoscoleces were aspirated from cysts, pooled, and washed with sterile PBS.
Genomic DNA extraction

DNA was extracted from protoscoleces by the phenol-chloroform method (12). The DNA concentration and quality were assessed by both UV absorbance and electrophoresis on a 1% agarose gel.
Total RNA extraction
Immediately after washing with PBS, the protoscoleces were placed in liquid nitrogen overnight. Total RNA was then extracted from the protoscoleces with the RNX Plus kit (Cinnagene®) according to the manufacturer's instructions. The RNA concentration and quality were assessed by both UV absorbance and electrophoresis on a 1.5% agarose gel.
DNA amplification
The EG95 gene of E. granulosus was amplified by PCR. The reaction was performed with the Bioneer master mix (AccuPower® PCR PreMix, k-2012). Primers were designed according to the published E. granulosus EG95 vaccine antigen (EG95) cDNA sequence (GenBank accession number AY421719.1). The forward primer was CGGAATCATGGCATTCCAGTTATGTCTC, with the restriction site for EcoRI, and the reverse primer was GCCTCGAGTCAAGTAAGGACAAC, with the restriction site for XhoI. The PCR procedure included initial denaturation at 94°C for 4 min, then 35 cycles of denaturation at 94°C for 1 min, annealing at 53°C for 1 min, and extension at 72°C for 1 min, with a final extension at 72°C for 10 min. The PCR product was analyzed by electrophoresis on a 1.2% agarose gel, and its size was compared with a 100 bp DNA ladder (GeneRuler™ 100 bp Plus DNA Ladder, Fermentas®). The amplified DNA was sequenced with a capillary electrophoresis system by Macrogen Company (Korea).
RT-PCR amplification
The total RNA was reverse-transcribed to cDNA (using RevertAid™ H Minus Reverse Transcriptase, Fermentas®), and this cDNA was used as the template for RT-PCR amplification. The primers used in this reaction were the same as those used in the PCR reaction. The RT-PCR procedure included initial denaturation at 94°C for 3 min, then 35 cycles of denaturation at 94°C for 1 min, annealing at 53°C for 40 sec, and extension at 72°C for 45 sec, with a final extension at 72°C for 10 min. The RT-PCR product, like the PCR product, was analyzed by electrophoresis on a 1.5% agarose gel and its size compared with a 100 bp DNA ladder; the products were then isolated and purified from the agarose gel using a DNA gel extraction kit (AccuPrep® Gel Purification Kit, Bioneer).
Ligation and Transformation of EG95 gene
The purified RT-PCR products were ligated into the pTZ57R/T plasmid vector (InsT/A clone™ PCR Product Cloning Kit, Fermentas®) according to the manufacturer's instructions. To prepare competent bacteria, we used the E. coli Top10 strain and the calcium chloride method (12). After ligation, the ligated plasmid was transformed into the competent bacteria according to the protocol (12) and cultured in antibiotic-free Luria-Bertani (LB) broth by incubation for 1 h at 37°C. The bacteria were then plated on LB agar plates containing antibiotic (ampicillin 100 mg/ml), IPTG 200 mg/ml, and X-Gal 20 mg/ml, and the plates were incubated at 37°C for 16-18 h. After this time, blue and white colonies formed on the plates. To confirm the gene cloning, we used the colony-PCR method. The plasmid was extracted from the bacteria with a plasmid extraction kit (AccuPrep® Plasmid MiniPrep DNA Extraction Kit, Bioneer®) and digested with the EcoRI and XhoI enzymes. The enzymatic reaction was performed under two conditions, single and double digestion, separately. The products were analyzed by electrophoresis on a 1% agarose gel and their sizes compared with a 1 kb DNA ladder (GeneRuler™ 1 kb, Fermentas®). The EG95 gene digested from pT-EG95 in the double-digestion reaction was extracted from the gel using a gel extraction kit (AccuPrep® Gel Purification Kit, Bioneer®).
Sub cloning of Eg95 in expression vector
The pcDNA3 plasmid was used as the expression vector, and the EG95 fragment was subcloned into this plasmid. The pcDNA3 plasmid was digested with the same enzymes used for digestion of pT-EG95 (EcoRI and XhoI). The digested plasmids were then purified from the agarose gel using a DNA extraction kit (AccuPrep® Gel Purification Kit, Bioneer). The EG95 fragment was ligated into the digested pcDNA3 according to the same protocol used for ligation of EG95 and pTZ57R/T (12). The ligation product was transformed into the E. coli Top10 strain according to the protocol (12) and cultured in antibiotic-free Luria-Bertani (LB) broth by incubating for 1 h at 37°C with shaking. The transformed bacteria were plated onto LB agar plates containing ampicillin 100 mg/ml and incubated at 37°C for 16-18 h. To identify the recombinant pcEG95-IR plasmid, we used colony-PCR amplification and restriction digestion with the EcoRI and XhoI enzymes. The recombinant pcEG95-IR plasmids were sequenced by Macrogen Company.
Expression of recombinant EG95 protein
pcEG95-IR was transfected into Chinese hamster ovary (CHO) cells using the FuGENE 6 transfection reagent kit (Roche). Transfected cells were seeded in a 12-well tissue culture dish with Dulbecco's Modified Eagle Medium (DMEM) containing the neomycin antibiotic (G418) and then kept at 37°C in a 5% CO2 incubator.
SDS PAGE and Western blot analysis
After 14 days, the transfected cells were harvested from the medium, and the expressed recombinant EG95 protein was examined on a 15% acrylamide gel by SDS-PAGE. The resulting proteins were transferred to a nitrocellulose membrane. The membranes were incubated in PBS (phosphate-buffered saline) containing 2% BSA (bovine serum albumin) and then washed three times with PBST (PBS-Tween 20, 0.5%). The nitrocellulose membrane was reacted with a 1:200 dilution of seropositive mouse antibody for 1 h at 37°C, washed three times with PBST, and subsequently treated with horseradish peroxidase (HRP)-conjugated goat anti-mouse IgG at a 1:5000 dilution for 1 h at 37°C. The membrane was developed in diaminobenzidine/H2O2 substrate solution for 15 min at room temperature. The reaction was stopped by washing four times in distilled H2O.
Results
Total DNA and RNA were extracted from protoscoleces derived from sheep hydatid cysts and assessed by electrophoresis on agarose gels. The extracted DNA and RNA (cDNA) were used as templates in the PCR and RT-PCR reactions. The result is shown in Fig. 1. The RT-PCR product was successfully ligated into the pTZ57R/T plasmid and transformed into E. coli Top10. To confirm the ligation, the pTZ57R/T plasmid was digested with the EcoRI and XhoI restriction enzymes (Fig. 2). The digested pT-EG95 was then ligated successfully into the pcDNA3 plasmid (as the expression vector) and transformed into the Top10 strain of E. coli. The pcEG95 plasmid was digested with restriction enzymes to confirm ligation of this gene into the pcDNA3 plasmid (Fig. 3). In the next step, the pcEG95-IR plasmid and the amplified EG95 DNA were sequenced, and the sequences were submitted to GenBank under two accession numbers (JF357600.1, JF829212). These sequences were compared with EG95 genes of E. granulosus in GenBank. The result showed 100% homology with the E. granulosus EG95-5 and EG95-6 genes, recorded in GenBank under the AF199350.1 and AF199347.1 accession numbers. In addition, the DNA sequence was 99% identical to some sequences recorded in GenBank, such as AF199349.1, EU595909.1, and EU595882.1. The expressed recombinant EG95 protein was evaluated by SDS-PAGE and Western blot, showing a band of about 17 kDa on the acrylamide gel (Fig. 4) and nitrocellulose membrane (Fig. 5).
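For readers who wish to reproduce the sequence comparison, the percent identity between two aligned sequences of equal length can be computed with a few lines of Python. This is only an illustrative sketch (the toy fragment below reuses part of the forward primer given in the Methods); the full 492 bp sequences should be retrieved from GenBank under the accession numbers listed above.

def percent_identity(seq_a, seq_b):
    """Percent identity between two aligned sequences of equal length."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

fragment = "ATGGCATTCCAGTTATGTCTC"  # part of the forward primer, used only as a toy example
print(percent_identity(fragment, fragment))  # identical sequences give 100.0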
Discussion
Echinococcosis is a zoonotic disease with worldwide distribution caused by the adult or larval stages of Echinococcus (family Taeniidae) (13). Controlling human hydatid disease using anthelmintics, together with changes in human lifestyle and animal management, has been unsuccessful in some developing countries (9). Therefore, the use of an effective vaccine has been considered the most efficient way to control echinococcosis. An early attempt at vaccinating sheep against E. granulosus was performed by Gemmell (1966), who used oncospheres of the parasite as a crude antigen (14). Heath et al. (1981) and Osborn and Heath (1981) also showed that oncospheres have the potential to prevent hydatid disease (15,16). Later, Heath and Lawrence (1996) defined the antigenic polypeptides in the oncosphere that induce protective immunity. Their results indicated that only the fraction containing the 23 and 25 kDa molecules (EG95 protein) was able to stimulate protection against the infection (17). EG95 is one of the most important proteins in E. granulosus and may have a vital function in parasite biology; it is expressed in all stages of the life cycle of this worm (18). This protein shows a high degree of conservation across different stages of Echinococcus development. It may be involved in penetration of the parasite through the epithelial border of the intestinal villi (19). The EG95-encoding gene is a member of a multi-gene family that is expressed in the oncosphere, mature worm, and protoscoleces. Four EG95-related genes express an identical EG95 protein (EG95-1, EG95-2, EG95-3, and EG95-4) only in the oncosphere life-cycle stage, whereas EG95-5 and EG95-6 are additionally expressed in the oncosphere, adult worm, and protoscoleces (20,21). In the present work, we used the gene coding for the EG95 protein in protoscoleces. Our results indicated that the EG95 cDNA in Iranian isolates of E. granulosus is 492 bp and identical to the sequences reported earlier (20), and 85% identical to the sequences reported by Lightowlers et al. (10). The difference may be due to the stage of parasite development: Lightowlers et al. (1999) used oncospheres for amplification of EG95 genes, whereas in our work we used protoscoleces. In our study, the size of the recombinant EG95 protein was about 17 kDa, which differs from the size of the native protein (23-25 kDa) reported by Heath and Lawrence (17). This discrepancy in size may be due to post-translational glycosylation (20). On the other hand, a number of studies in mice have shown that DNA vaccination can induce antibody and cell-mediated responses to a variety of bacterial, viral, and parasitic antigens (21). DNA vaccines are considered to have potential advantages because of their easier construction, ability to induce long-lasting immune responses, high temperature stability, and low production cost (19). In the present study, the EG95 fragment was successfully subcloned into pcDNA3, a eukaryotic expression vector, to produce the protein for a DNA vaccine. Additionally, the recombinant plasmid pcEG95 was constructed successfully to be used for a recombinant vaccine. In sheep, vaccination with recombinant EG95 protein induces high levels of protection against E. granulosus (7). The mechanism of protection appears to be strongly correlated with the presence of antibodies and complement-mediated lysis of the parasite oncosphere (22).
|
2018-04-03T03:14:24.353Z
|
2012-01-01T00:00:00.000
|
{
"year": 2012,
"sha1": "3772e8175b7d2284ed89118a0c39914544ea8671",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3772e8175b7d2284ed89118a0c39914544ea8671",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
220872655
|
pes2o/s2orc
|
v3-fos-license
|
Reducing Health Inequalities in Aging Through Policy Frameworks and Interventions
Lifepath, a European Commission Horizon 2020 programme of research adopted a life course approach to understanding the impacts of socioeconomic differences on healthy aging and considered the relative importance of lifetime effects by comparing studies on childhood and adult risks. A key component of the programme was the identification of policy relevant results and messages. Longitudinal European cohorts of over 1.7 million individuals from 48 independent cohort studies were harmonized and followed for the key outcomes of mortality and functional decline. Biological markers, allostatic load, and DNA methylation were also examined to help unravel the impact of socioeconomic factors including education, occupation, or income on aging. It is well-recognized that socioeconomic position affects behaviors such as smoking, high alcohol consumption, low physical activity, and a diet low in fruit and vegetables. Lifepath indicated that socioeconomic status is an independent risk factor for death and disease but that it also helps drive the uptake of these well-recognized risk behaviors. The evidence from Lifepath points to a suite of possible policies, some universal, some targeted but it was not possible to assess specific interventions, other than conditional cash transfers, or to explore how interventions might be effective in reducing health inequalities in aging. Nevertheless, it was clear that the timing of interventions is important as the consequences of early interventions may span the whole life course. These influences have important implications for policy making, since appropriate policies can reverse the embodiment of socioeconomic disadvantage, thus reducing health inequalities and resulting in healthier aging. Applying principles of proportional universalism as one approach to reducing inequalities should be considered.
INTRODUCTION
Healthy aging is an important public health issue, both nationally and internationally. The World Health Organisation (WHO) recognizes healthy aging as a process whereby all people of all ages are able to live a healthy, safe and socially inclusive lifestyle (1). However, it is widely recognized that inequalities experienced from the earliest years of life, and throughout the life course, undermine healthy aging.
Looking beyond the provision of health and social care, the social determinants of health have a major effect on health and well-being. These factors include housing quality, education, social connectivity, climate change, and local environmental damage. Action on these social determinants of health is needed across the life course to reduce inequalities. WHO also promotes a "health in all polices" approach which recognizes that all arms of government can influence the determinants of health and should develop policies which support good health.
Aging can be considered in three component parts: physical (measured as activities of daily living or ADL), mental (measured as cognitive decline), and social (participation in community activities) (2)(3)(4).
The number of people aged 60 and over is expected to increase from 901 million to 1.4 billion between 2015 and 2030 (5). Life expectancy increased throughout the second half of the twentieth century but in recent decades this increase has come with more years spent in poor health. It has been estimated, for example, that an English male with a life expectancy of 79.5 years in 2014-16 would have an average healthy life expectancy of 63.3 years, spending around 20 per cent of his life in poor health (6). An English female, with a life expectancy of 83.1 years, would spend 19.2 years (23 per cent) in poor health.
The rapid aging of populations, and the rising numbers of older people living in suboptimal health, highlights the need to develop policies and practices to support healthy aging through the life course and address health inequalities in old age.
As noted, healthy aging includes the maintenance of physical and cognitive functioning, as well as good mental health (3). With increasing age, most people experience a gradual decline in all of these. Despite this decline in health that naturally accompanies aging, many older people lack access to adequate health care (7).
In countries that lack universal health care, at no or low cost, older people may be forced to choose between paying medical costs and other basic needs such as for food, warmth and accommodation. In addition, health care services are not always age-appropriate, particularly in rural areas of low-income countries (8).
However, there are stark differences in healthy aging outcomes between different social groups. Most aging related health outcomes are strongly associated with socioeconomic characteristics of individuals (9,10). This means that people who have higher education attainment, better jobs, or higher income tend to have better physical or cognitive function compared to those who experience socioeconomic disadvantage.
The impacts of socioeconomic circumstances on healthy aging are now well-documented. People living with socioeconomic disadvantage are more likely to develop disease or die earlier than those living in more advantageous circumstances. This pattern has been described as the social gradient, where the risk of poor health tends to increase with step declines in socioeconomic position (SEP).
The social gradient demonstrates the need for policies and interventions that "level up" health, i.e., raise the health of the worst off to the highest level achievable within society. One approach to responding to this need is "proportional universalism" and is described as policies that are universal and benefit everyone in society, but that are at a scale and intensity that are proportionate to the level of disadvantage (11).
Evidence from longitudinal studies (cohorts) helps us understand a range of trajectories for aging. This evidence can be used to inform policies and interventions to address health inequalities in aging. However, it is less clear which specific policies national and local governments should introduce to reduce the gradient, and the inequalities that it represents, and whether there is sufficient political will to implement such policies.
LIFEPATH STUDIES-UNDERSTANDING THE ROLE OF HEALTH INEQUALITIES ON AGING
The European Commission Horizon programme funded Lifepath, a life-course approach to understanding the impacts of socioeconomic differences on healthy aging (12). The studies considered under Lifepath examined the relative importance of lifetime effects by comparing studies on childhood and adult risks. A key component of the programme was the identification of policy-relevant results and messages.
The WHO recognizes six clear risk factors for unhealthy aging: tobacco use, alcohol consumption, insufficient physical activity, raised blood pressure, obesity, and diabetes. As part of Lifepath, researchers explored SEP as a risk factor for adult non-communicable diseases in a multi-cohort study of over 1.7 million individuals from 48 independent cohort studies from the UK, France, Switzerland, Portugal, Italy, the USA, and Australia (13). This work showed that SEP is an independent risk factor for mortality and functional decline, in addition to the risk factors listed above.
Low SEP was associated with 2.1 years of life lost (YLL) between ages 40 and 85 years and was comparable with YLL from the other six risk factors. This finding emphasized the importance of not only focusing on the six risk factors but also on addressing low socioeconomic position. Studies considered under Lifepath indicated that not only is socioeconomic status an independent risk factor for death and disease, it also helps drive the six risk factors.
Poorer health over the life course has been associated with early life factors, particularly adverse childhood experiences, and lack of availability of social support (14). In studies of health inequalities among older people, the strongest relationship was found to be between poverty and poor health (15). Furthermore, health inequalities in old age reflect accumulated disadvantage over the life course as well as inequalities experienced at older ages associated with geographic location of residence, gender, and ageist attitudes and practices (16).
Extensive existing evidence implies that to reduce health inequalities at older ages, policies, and interventions need to address social determinants of health in early life and across the life course. The consequences of early interventions may span the whole life course with important implications for policy-making. However, older people carry the burden of ill-health. Strategies to tackle inequalities in healthy aging must also address social inequalities experienced at older ages.
Older people at the lower end of the social gradient often have more difficulty in accessing health services even though they are already likely to experience poorer health (8). For example, Nazroo (17), in a study of 12 European countries, observed inequalities by education level among people aged 50 and over in visits to medical specialists and dentists. In the UK, older people in lower SEP groups had less access to health services such as mammography screening, vaccinations, eye and dental exams, and heart surgery (18). People living in the most deprived areas of Scotland were diagnosed with more than one condition 10-15 years earlier than those living in the least deprived areas (19).
Policies can be implemented to influence health determinants including access to health and social care, risk behaviors such as smoking and physical inactivity, and health literacy which should support people to enter old age in good health. Such policy responses should be directed at people living in poverty and other disadvantaged groups. Psychosocial aspects such as social engagement have also been identified as supporting healthy aging (20).
STRATEGIES AND POLICIES TO REDUCE INEQUALITIES IN AGING
While there is now strong evidence that inequalities in health are influenced both by socioeconomic circumstances and by risk behaviors, evidence-based policies for reducing these inequalities are not, at first sight, so obvious. A plethora of strategies and objectives exists, but only a limited number of specific policies and practices to reduce inequalities have been described in the literature, partly because it is hard to experiment with policy on populations, both practically and ethically.
The WHO Global Strategy and Action Plan on Aging and Health (GSAP) includes five strategic objectives (1) (see Box 1). The GSAP recognizes that healthy aging takes place across the entire life course. Consequently, policies and interventions can be designed and implemented at different life stages to impact the trajectory of healthy aging.
Since low SEP has strong effects on aging, socioeconomic circumstances should be included in local and global health strategies, health risk surveillance, interventions, and policies to reduce health inequalities throughout the life course. Improving socioeconomic circumstances may also reduce the uptake of behavioral risk factors.
A number of systemic policies improve socioeconomic circumstances, for example, free health care at the point of need, compulsory education, income tax credits, and requirements for safe school and work environments. Human rights and antidiscrimination legislation also affect health inequalities, along with employment and housing laws.
Policy and interventions in early childhood should be seen as part of a comprehensive strategy to reduce health inequalities in later life. However, policies and interventions are also needed specifically to support health in later life. In one study, the provision of more generous minimum pensions and higher expenditure on social care for the elderly resulted in reduced health inequalities in the age group 65-80 years (21). In this way, welfare policies can moderate the association between SEP and health. This finding reflects analysis from the WHO European Office, which identified six policies with statistically significant potential to reduce short-term health inequalities (22):

• Increasing public expenditure on housing and community amenities.
• Increasing expenditure on labor market policies.
• Increasing social protection expenditure.
• Reducing unemployment.
• Reducing out-of-pocket payments for health.

Box 1 | The WHO Global Strategy and Action Plan on Aging and Health (GSAP) includes five strategic objectives (1), among them a commitment to action on healthy aging in every country.
Policies may also be designed to tackle ageism, as highlighted in the GSAP objectives in Box 1 above, which should help to reduce inequalities in employment practices for older people and access to certain healthcare interventions such as screening, surgery, and transplants. In the UK, for example, the Public Sector Equality Act aims to address ageism through a duty for public agencies to consider and apply fairness and equality in making decisions and developing policies or services (23).
While broad, systemic policies such as good pension provision and access to health care should be effective in all countries, some specific policies that are aimed at certain populations may work better in some countries than others. As noted, relatively few polices aimed at addressing health inequalities in aging have been tested experimentally.
One approach to increasing SEP to improve health early in the life course is the policy intervention of "conditional cash transfers" (CCT). Popular in low- and middle-income countries, they have been used infrequently in Europe and the U.S. CCT programmes aim to reduce short-term poverty and to break intergenerational poverty by providing a cash sum to people on low income in exchange for the pursuit of positive health behaviors. They are often designed around child health and might include vouchers for breastfeeding or support for attendance at vaccination clinics (24). The results from CCT programmes are mixed. While such programmes affect the specific behaviors being promoted, it is not clear whether they result in more fundamental changes which could deliver better child health (25).
Social prescribing-where GPs and other health professionals "prescribe" sessions at the gym or other activity based groups (dance, yoga, time in high quality natural environments)-is more applicable to adults as this is an approach that promotes better self-care rather than supporting care-givers, such as parents, which CCT aims to support. Social prescribing is used as a complementary activity to medication, often for vulnerable people with multiple health and social needs and aims to alleviate social isolation and increase physical activity in older age (26).
While interventions are needed throughout the life course, current older generations need specific support to reduce health inequalities. Resources are needed to reduce poverty in old age, provide disability support and care at home, and high-quality residential care. A growing elderly population may result in an increased health burden of dementia and more people requiring residential care. Such care may become a necessary option for more families with older relatives with advanced dementia, even if it is not a route that family members really want to take. Family members must be confident that their elderly relatives will be cared for safely and with dignity in their final years. Examples of woefully inadequate care which have been reported by the media highlight this is not always the case. Health and social care for older people needs sufficient investment, skilled staff and integration between services.
Many existing policies and interventions will help reduce inequalities in aging although they may not have been designed initially to do this. As an example, Table 1 shows some of the existing relevant policies and interventions in the UK (27). Other countries across Europe will have similar measures. The EU and UN Economic Commission for Europe (UNECE) created an online Active Aging Index (AAI) to help inform policy making (see, for example, European Commission (28)). The AAI is a composite measure which aggregates scores from four domains: (a) employment; (b) participation in society; (c) independent, healthy, and secure living; and (d) enabling environment.
The four domains were considered in more detail as: • Encouraging working lives and maintaining work ability.
• Promoting participation, non-discrimination, and the social inclusion of older persons. • Promoting and safeguarding dignity, health, and independence in older age. • Maintaining and enhancing intergenerational solidarity.
Policies and interventions which support healthy aging through the life course can be grouped into six areas: investing in children, welfare support, provision of a safety net, creating meaningful employment, healthy lifestyles, and universal health care (see Figure 1). Importantly, promoting healthy lifestyles is only one of these strands, and yet so much policy effort focuses on these behaviors. While reducing individual behavioral risks is important, systemic change is also needed to reduce health inequalities, particularly in access to healthcare, provision of childcare for pre-school children, and social care (help with feeding, dressing, housework and maintenance, and reducing isolation).
Broader policies that protect and enhance local communities and environments which underpin sustainable planning will also help reduce health inequalities. Such an approach was recommended by the Marmot Review in the UK which identified six policy objectives in addressing health inequalities (11): • Give every child the best start in life.
• Enable all children, young people, and adults to maximize their capabilities and have control over their lives. • Create fair employment and good work for all.
• Ensure a healthy standard of living for all.
• Create and developing sustainable places and communities.
• Strengthen the role and impact of ill-health prevention.
COMMISSIONING SERVICES FOR OLDER PEOPLE
In order to explore possible policies and interventions to support healthy aging, another EU Horizon research project, ATHLOS, has undertaken consultation with stakeholders as part of its research programme.
In the first wave of consultation, participants completed an online survey on factors identified as important for healthy aging among people aged 50+. This survey was part of a systematic review conducted by the ATHLOS consortium and from the wider literature (30). The online survey compiled 310 replies across Europe, of which 145 people were aged 50+. The top-five factors identified by the stakeholders as the most important influences on healthy aging for people aged 50+ were as follows: • Physical activity (58.6% of the total sample).
• Participation in community and/or social activities (51.0%).
When asked about the top-five factors that should be prioritized by policy-makers to enable older people (50+) to live a healthy life and keep on doing what they want to do, the majority of respondents replied with the following: • Access to preventive, diagnostic, and health care services (64% of those 145 respondents aged 50+). • Adequate income (53%).
• Providing opportunities to participation in community and/or social activities (48%). • Access to adequate social care services (46%).
• Access to safe and suitable transport and mobility options (41%).
As a comparison, in the UK, the social care Green Paper lists seven key outcomes (23): • Improved health and emotional well-being.
• Improved quality of life.
• Making a positive contribution.
• Increased choice and control.
• Freedom from discrimination or harassment.
• Maintaining personal dignity and respect.
In the UK the National Service Framework (NSF) outlines the evidence base for a range of health promotion activities for older people. The strongest evidence found was for increased physical activity, improved diet and nutrition, and immunization programmes for influenza (31). The importance of older people being able to access population-wide health promotion initiatives (such as smoking cessation) and initiatives to reduce poverty through benefits advice and support were also emphasized.
In the UK, specific care needs at the individual level are detailed in the Prevention Package for Older People (32). This was published as a series of resources to support commissioning of services for older people such as addressing falls, foot care, hearing services, intermediate care, and discharge from hospital. Forthcoming resources are planned on depression, continence, and arthritis.
CONCLUSION
The impact of socioeconomic circumstances on health inequalities in aging is clear. There are several well-evidenced strategies for reducing inequalities in aging, including increased access to health services and adequately-funded pension schemes. While few policies have been tested experimentally, we know that key systemic strategies such as universal health care and education, as well as welfare and employment support, are effective in reducing health inequalities. We also know that reducing individual and population behavioral risks supports healthy aging but we are less clear about the most effective policies and programmes to reduce those risks. Taking physical activity, as one example: how should government invest public money to promote, encourage, and support individuals and populations to be more active? Options include supporting active travel (walking, cycling) and social prescribing but barriers exist to uptake and more evidence of effective policy is needed.
A key area of concern is that support and funding for systemic policies and initiatives that are known to reduce inequalities are waning in some countries in Europe. This is happening despite evidence that fairer societies do better on a range of indicators and that some population-based, systemic polices are more effective than programmes targeting individuals. Societal investment is needed in early and middle years, as well as in older age, to support healthy aging and to reduce inequalities. Ultimately this is beneficial to individuals and society.
DATA AVAILABILITY STATEMENT
Data discussed in the paper is available through the EU Lifepath and ATHLOS programmes, contact p.vineis@imperial.ac.uk and m.bobak@ucl.ac.uk.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.
|
2020-07-31T13:16:02.141Z
|
2020-07-31T00:00:00.000
|
{
"year": 2020,
"sha1": "43cf1c1b89e5ce41c723e03c22925370261cf2d6",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2020.00315/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "43cf1c1b89e5ce41c723e03c22925370261cf2d6",
"s2fieldsofstudy": [
"Sociology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
}
|
213638274
|
pes2o/s2orc
|
v3-fos-license
|
Some Extensions on Numbers
My previous work dealt with finding the numbers relatively prime to the factorial of a certain number and to high exponents, and with finding mod values of a certain number's exponents. Firstly, I revisit my previous work on Euler's phi function and on Fermat's little theorem. Next, I construct an exponent parallelogram to find coherence numbers of Euler's-phi-functioned numbers and apply them to Fermat's little theorem. Then, I test the primality of prime numbers on Pascal's triangle and explore new ways to construct Pascal's triangle. Finally, I find the factorial value of a certain number by using an exponent triangle.
Introduction
We know Fermat's little theorem and Euler's φ (phi) function. These are well-defined operations in number theory and algebra. Euler's φ (phi) function is considered a general proof of Fermat's little theorem. We seek other ways to find mod values in Fermat's little theorem, and we generalize the φ (phi) function to a certain integer's exponentiation and factorial value. We construct the exponent parallelogram to find the coherence values of Euler's φ (phi) function and, accordingly, find higher-valued exponents in Fermat's little theorem. We also specify Fermat's last theorem by using prime numbers. We know that binomial coefficients construct Pascal's triangle, in which we see the divisibility by prime numbers (a primality test) in prime-number exponentiation on Pascal's triangle. In addition, we seek ways other than binomial coefficients to construct Pascal's triangle, i.e., we construct Pascal's triangle by arithmetic-operation triangles. Finally, instead of binomial coefficients in Pascal's triangle, we use the exponent values of a certain integer to construct Pascal's triangle, and then use the nth expansion to find the factorial of that certain number.
Blaise Pascal (1623-1662) first introduced Pascal's triangle; after that, Isaac Newton (1643-1727) used the facts of Pascal's triangle and developed the binomial expansion. He and his followers used the binomial theorem for probability and statistical problems. Factorials were used to count permutations as early as the 12th century by Indian scholars. In 1677, Fabian Stedman described factorials as applied to change ringing, a musical art involving the ringing of many tuned bells. In his words, "Now the nature of these methods is such that the change of one number comprehends (includes) changes on lesser numbers." In the meantime, James Stirling (1692-1770) first introduced an approximation for finding the nth factorial of a certain number. Then Adrien-Marie Legendre used Leonhard Euler's (1707-1783) second integral formula, notated a symbol for it, and named it the Gamma function. It was a good approximation for finding the factorial of real numbers. Jacques Philippe Marie Binet (1786-1856) modified James Stirling's approximation. Finally, the notation n! was introduced by the French mathematician Christian Kramp in 1808. Pierre
Let's Now Examine φ(pn) When p Is Not a Factor of n
Lemma 2: Let p be a prime that does not divide n; then φ(pn) = pφ(n) − φ(n) = (p − 1)φ(n).

Proof: We know that pφ(n) is the number of numbers relatively prime to n and less than pn. Notice that all the multiples of p whose cofactors are relatively prime to n are counted, since each can be written as pr, where all the r's are relatively prime to n. This set has φ(n) numbers relatively prime to n and none relatively prime to p, because they are all multiples of p. We subtract this many from our original count and we have φ(pn) = pφ(n) − φ(n) = (p − 1)φ(n).

When n is a composite number and n divides (n − 1)!: notice that all the numbers that are relatively prime to (n − 1)! ( ). When n is a prime and n does not divide (n − 1)!: φ(n(n − 1)!) is the number of numbers relatively prime to (n − 1)! and less than n(n − 1)!. Notice that all the multiples of n whose cofactors are relatively prime to (n − 1)! ( ). Suppose the list of multiples is ( ) when n is a prime number. Since all even numbers are composite except 2, because 2 is prime, we cannot find an even composite number less than four, and two is the only prime number less than three. Also, 1 is the only number relatively prime to two and below it. So, from these two equations, we get ( ).

Example 1: Find the value of ( ). Example 2: Find the value of ( ).

The identity φ(n^a) = n^(a−1)φ(n) can be shown as follows. Proof: The positive integers less than n^a that are not relatively prime to n are those integers not exceeding n^a that are divisible by n. There are exactly n^(a−1) such integers, so there are n^(a−1)φ(n) integers less than n^a that are relatively prime to n^a.
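The two identities above can be checked numerically with a short script. This verification is illustrative and not part of the original paper; the totient is computed here by direct counting, which is adequate for small arguments.

from math import gcd

def phi(n):
    """Euler's totient by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Lemma 2: phi(p * n) = (p - 1) * phi(n) when the prime p does not divide n.
assert phi(5 * 12) == (5 - 1) * phi(12)
assert phi(7 * 10) == (7 - 1) * phi(10)

# phi(n**a) = n**(a - 1) * phi(n).
for n in (4, 6, 9, 10):
    for a in (2, 3):
        assert phi(n ** a) == n ** (a - 1) * phi(n)
print("identities verified")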
Exponent Division on Fermat's Little Theorem
Proposition 4 (PRB): If p is prime and a is a positive integer such that p does not divide a, then ( ), where r is congruent to a mod p, s is the quotient and t is the remainder when the exponent n is divided by p, and n ∈ N is any exponent.
Proof: Let p be a prime and a a positive integer such that p does not divide a.
Proving Fermat's Little Theorem Using Proposition 4
If p is prime and a is a positive integer with p not dividing a, then ( ). By the above results we define: 1) If E is a first-line prime exponent and a is an integer with (a, E) = 1, then ( ). 2) If E is a prime exponent and a is an integer with (a, E) = 1, then ( ), where k is any positive integer indexing the 1st-operation to k-th-operation coherence numbers of φ(E).
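Because the exact equations of Proposition 4 are not recoverable from the text above, the sketch below only illustrates the standard exponent reduction that Fermat's little theorem permits, reducing the exponent modulo p − 1 (rather than modulo p as in the author's formulation); the function name and test values are illustrative assumptions.

def pow_mod_prime(a, n, p):
    """Compute a**n mod p for a prime p not dividing a,
    using Fermat's little theorem: a**(p - 1) == 1 (mod p)."""
    s, t = divmod(n, p - 1)     # n = s * (p - 1) + t
    return pow(a % p, t, p)     # the a**(s * (p - 1)) factor is 1 mod p

# Compare against direct modular exponentiation.
for a, n, p in [(3, 1000, 7), (10, 123456, 13), (2, 10**6, 101)]:
    assert pow_mod_prime(a, n, p) == pow(a, n, p)
print("exponent reduction agrees with direct computation")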
Prime Bases on Fermat's Last Theorem
Let us see the following summations.
Let the p_i be prime numbers. Then, from the above recursion, we formulate the result, and we get ( ).
Constructing Pascal's Triangle by Arithmetic Triangles
Addition triangle

Definition 3: Let A, B, C, … ∈ Z. The 1st operation adds each element to its successive element among the 1st-line elements; the 2nd operation adds each element to its successive element of the 1st operation; and the 3rd operation adds each element to its successive element of the 2nd operation. In this way we continue up to the nth operation. The coefficients of the diagonal elements from the 1st line to the nth operation construct Pascal's triangle. Now we construct the addition triangle:
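A small computational sketch of Definition 3 follows. It feeds one-hot "coefficient vectors" through the repeated adjacent additions, so each derived element records how it combines the first-line entries A, B, C, ...; the leading element of the nth operation then carries the nth row of Pascal's triangle. The function name and the choice of a five-element first line are illustrative assumptions.

from math import comb

def addition_triangle(first_line, n_ops, add=lambda a, b: a + b):
    """Repeatedly replace a line by the 'sums' of its adjacent elements (Definition 3)."""
    rows = [list(first_line)]
    for _ in range(n_ops):
        prev = rows[-1]
        rows.append([add(prev[i], prev[i + 1]) for i in range(len(prev) - 1)])
    return rows

# One-hot vectors track the coefficient of each first-line entry in every derived element.
D = 5
one_hot = [[int(i == j) for j in range(D)] for i in range(D)]
rows = addition_triangle(one_hot, D - 1, add=lambda u, v: [a + b for a, b in zip(u, v)])

# The leading element of the n-th operation carries the binomial coefficients C(n, k).
for n, row in enumerate(rows):
    assert row[0][: n + 1] == [comb(n, k) for k in range(n + 1)]
    print(row[0][: n + 1])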
Backward Difference Triangle
where the sign of the coefficient depends upon whether n is odd or even; if n is odd we get ( ).
Forward Difference Triangle
Definition 5: Let A, B, C, … ∈ Z. The 1st operation subtracts each element from its successive element among the 1st-line elements; the 2nd operation subtracts each element from its successive element of the 1st operation; and the 3rd operation subtracts each element from its successive element of the 2nd operation. In this way we continue up to the nth operation. The coefficients of the diagonal elements from the 1st line to the nth operation construct Pascal's triangle with negative coefficients.
Now we construct the forward difference triangle. From the above, using the colored diagonal, we can construct a negative Pascal's triangle, where the sign of the binomial coefficient (n k) depends upon whether n is odd or even: if n is odd we get −(n k), else we get (n k).
Forward Division Triangle
where the sign of (n k) depends upon whether n is odd or even: along one diagonal, if n is odd we get −(n k), else (n k); along the other, if n is odd we get (n k), else −(n k).
Backward Division Triangle
Definition 8: Let A, B, C, … ∈ Z. The 1st operation divides each element by its successive element among the 1st-line elements; the 2nd operation divides each element by its successive element of the 1st operation; and the 3rd operation divides each element by its successive element of the 2nd operation. In this way we continue up to the nth operation. The degrees (exponents) of the diagonal elements from the 1st line to the nth operation construct Pascal's triangle. Now we construct the backward division triangle:
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.
|
2019-12-05T09:25:26.293Z
|
2019-11-29T00:00:00.000
|
{
"year": 2019,
"sha1": "38bb89d24fc53491a4501d224a4c44b55e8a9de6",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=96739",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9e01d2f2d070c2556cded7021dfdb6568eaed5cc",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
214584023
|
pes2o/s2orc
|
v3-fos-license
|
Biotechnological synthesis of water‐soluble food‐grade polyphosphate with Saccharomyces cerevisiae
Inorganic polyphosphate (polyP) is the polymer of phosphate. Water‐soluble polyPs with average chain lengths of 2–40 P‐subunits are widely used as food additives and are currently synthesized chemically. An environmentally friendly highly scalable process to biosynthesize water‐soluble food‐grade polyP in powder form (termed bio‐polyP) is presented in this study. After incubation in a phosphate‐free medium, generally regarded as safe wild‐type baker's yeast (Saccharomyces cerevisiae) took up phosphate and intracellularly polymerized it into 26.5% polyP (as KPO3, in cell dry weight). The cells were lyzed by freeze‐thawing and gentle heat treatment (10 min, 70°C). Protein and nucleic acid were removed from the soluble cell components by precipitation with 50 mM HCl. Two chain length fractions (42 and 11P‐subunits average polyP chain length, purity on a par with chemically produced polyP) were obtained by fractional polyP precipitation (Fraction 1 was precipitated with 100 mM NaCl and 0.15 vol ethanol, and Fraction 2 with 1 final vol ethanol), drying, and milling. The physicochemical properties of bio‐polyP were analyzed with an enzyme assay, 31P nuclear magnetic resonance spectroscopy, and polyacrylamide gel electrophoresis, among others. An envisaged application of the process is phosphate recycling from waste streams into high‐value bio‐polyP.
due to its beneficial physicochemical properties, whereas the chain length determines which properties are more pronounced. For example, polyP 2 and 3 reactivate the water holding capacity of meat after the rigor mortis and intermediate-chain polyP complexes higher valent cations in soft cheese production.
The biotechnological synthesis of polyP appears promising because the polyP chain length in microorganisms can reach up to a thousand P-subunits (Rao et al., 2009). Furthermore, microorganisms can produce polyP from impure Pi, whereas chemical polyP synthesis relies on pure Pi. We recently reported a process to obtain Saccharomyces cerevisiae (baker's yeast) containing up to 28% polyP (as KPO3) in cell dry weight. S. cerevisiae was chosen as the polyP production host because it is generally regarded as safe (GRAS), and other food-related fungi, such as Pichia pastoris, Kluyveromyces lactis, and Hansenula saturnus, produced only little polyP (<7%) in our hands (data not shown).
In S. cerevisiae, the transport of Pi across the cell membrane is mediated by the low-affinity transporters Pho87 and Pho90 (Km ∼1 mM) and the two high-affinity Pi transporters Pho84 and Pho89 (Km ∼10 μM; Figure 1; Canadell, Gonzalez, Casado, & Arino, 2015).
During polyP synthesis, S. cerevisiae primarily metabolizes glucose via alcoholic fermentation to ethanol, which produces both the required energy and activated Pi in the form of adenosine triphosphate (ATP). PolyP is synthesized by the vacuolar transporter chaperone (VTC). This enzyme complex is located in the vacuolar membrane, couples the synthesis of polyP to its translocation across the vacuolar membrane, consumes ATP, consists of the five subunits VTC 1-5, and is presumably dependent on the proton gradient that is created by the V-ATPase (Desfougeres, Gerasimaite, Jessen, & Mayer, 2016). Langen and Liss (1958) showed that, after Pi starvation, long-chain polyP is produced de novo and later hydrolyzed to shorter-chain polyP. The high polyP content during Pi feeding is due to increased polyP synthesis and not to reduced polyP degradation (Liss & Langen, 1962). PolyP degradation in the vacuole is facilitated by S. cerevisiae endopolyphosphatase 1 and 2. Pi that is generated in the vacuole is transported to the cytosol by Pho91 (Eskes, Deprez, Wilms, & Winderickx, 2018).
To the authors' best knowledge, there are no reports on the biosynthesis of a food-grade water-soluble polyP with a highly scalable biotechnological process. The overall goal of the study presented here was the biosynthesis of polyP from Pi, with a focus on the purification (so-called preparative polyP extraction) of the synthesized polyP from polyP-rich S. cerevisiae. The desired characteristics of such a biotechnologically synthesized polyP included: appearance as a dry, white, water-soluble powder; food-grade quality; a linear molecular structure; a purity comparable with chemically produced polyP; and one polyP with mostly sodium and one with mainly potassium as the counterion. The term "bio-polyP" is proposed for the product of the synthesis described here. All process steps were designed to be highly scalable for intended large-scale production. The physicochemical properties of the bio-polyP were analyzed and compared with chemically synthesized polyP.
Synthesis of bio-polyP
NaCl and NaOH were used in Steps 10-12 to produce sodium bio-polyP. KCl and KOH were used to synthesize potassium bio-polyP.
Nondenatured ethanol was used. All steps were performed at room temperature except Steps 1 and 6.

1. PolyP-rich S. cerevisiae was produced as follows: cells were first starved in a Pi-free starvation medium (based on Verduyn, Postma, Scheffers, and Van Dijken (1992); pH 5 with HCl/NaOH) with mild agitation. After cell harvesting and one washing with sterile water, the starved S. cerevisiae was stored overnight at 4°C, and then incubated at 7.5 g cell dry weight × L−1 and 30°C anaerobically for 2.5 hr in feeding medium (250 mM glucose, 60 mM KH2PO4, 20 mM MgCl2, pH 6 with HCl/KOH) with mild agitation. The polyP-rich S. cerevisiae was washed twice with sterile water and dried for 5 min on Pi-free filter paper to obtain a wet cell mass that contained ca. 25% dry matter.
2. The wet cell mass (weight w2) was transferred to an aluminum flask and stored at −20°C.
3. The cell mass was thawed.
4. To the cell mass, 5 ml autoclaved Milli-Q water was added per g wet cell mass (referring to w2) and mixed.
5. The flask was incubated for 10 min at 70°C in a vigorously rocking water bath. An aluminum or stainless steel flask was used for optimal heat transfer.
6. The flask was placed on ice for 5 min to cool the content down to room temperature. The content was moved to a centrifugation bucket.
7. The insoluble matter was removed by centrifugation (10,000 g, 5 min). The pellet was discarded, the volume of the supernatant measured (v7), and the supernatant moved to a new centrifugation bucket.
8. To precipitate the protein and nucleic acid, 0.02 vol of v7 HCl (2.5 M) was added, and the content mixed. Because polyP hydrolyzes at low pH values, Steps 9-11 were carried out quickly. It was important not to change the order of Steps 8-10 because the HCl precipitation was reversible by alkali addition.
9. The insoluble protein and nucleic acid were removed by centrifugation (10,000 g, 15 min). The prolonged centrifugation time was necessary due to the fine nature of the suspended solids. The supernatant was moved to a new centrifugation bucket, while the pellet was discarded.
10. To neutralize the HCl from Step 8, 0.02 vol of v7 NaOH or KOH (both 2.5 M) was added after the centrifugation, and the solution was mixed.
11. The pH was set to 7 with NaOH or KOH with the help of a pH electrode. The used alkali volume was noted. The overall volume (v11) was calculated by adding the volumes of Steps 8, 10, and 11 to v7 (a worked example follows below).
12. The polyP was recovered by fractional precipitation (Fraction 1 with 100 mM NaCl and 0.15 vol ethanol, Fraction 2 with 1 final vol ethanol), dried, and milled.
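As a worked example of the volume bookkeeping in Steps 8-11, the sketch below computes v11 from v7; the numeric values are illustrative only, not measurements from the study.

```python
def overall_volume(v7_ml, alkali_step11_ml):
    """Compute v11 per Steps 8-11: v7 plus the 0.02-vol HCl addition
    (Step 8), the 0.02-vol NaOH/KOH addition (Step 10), and the noted
    alkali volume used to set pH 7 (Step 11)."""
    v8 = 0.02 * v7_ml   # HCl (2.5 M) added in Step 8
    v10 = 0.02 * v7_ml  # NaOH or KOH (2.5 M) added in Step 10
    return v7_ml + v8 + v10 + alkali_step11_ml

# Illustrative numbers: 200 ml supernatant, 1.5 ml alkali in Step 11.
print(overall_volume(200.0, 1.5))  # -> 209.5 ml
```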
PolyP analytics
To determine the cellular polyP content and the average polyP chain length in S. cerevisiae, the polyP was extracted from the cells with an analytical polyP extraction (Christ & Blank, 2018). Briefly, S. cerevisiae was suspended in a pH-buffered ethylenediaminetetraacetic acid (EDTA) solution and lyzed with phenol. The lysate was washed with chloroform and then used for further analysis. The total polyP (only linear polyP and no cyclic polyP), Pi, and the average polyP chain length were determined enzymatically (Christ, Willbold, & Blank, 2019). Briefly, Pi was assayed colorimetrically after the addition of a Pi detection agent, which contained antimony, molybdate, ascorbate, and sulfuric acid. For total polyP analysis, polyPn was enzymatically hydrolyzed to n Pi by S. cerevisiae exopolyphosphatase 1 and S. cerevisiae inorganic pyrophosphatase 1. The released Pi was measured colorimetrically. The average polyP chain length was calculated as the quotient of the total polyP concentration and the polyP chain concentration. The polyP chain concentration was quantified in an enzyme cascade with the enzymes S. cerevisiae exopolyphosphatase 1 (polyPn → polyP2), ATP sulfurylase (polyP2 + AMP-sulfate → ATP + sulfate), hexokinase (ATP + glucose → ADP + glucose 6-phosphate), and glucose 6-phosphate dehydrogenase (glucose 6-phosphate + NADP+ → 6-phosphogluconolactone + NADPH; NADPH measured fluorometrically). For the study of the precipitation behavior of chemically produced polyP (Figure 2), the total polyP was measured gravimetrically after drying the dissolved polyP at 120°C. To determine the water solubility and pH of the polyP, a 1% (w/v) polyP suspension was vigorously stirred for 5 hr. If some of the polyP did not dissolve, the suspension was centrifuged (5 min, 10,000 g), the pellet dried in a desiccator for 7 days, and the insoluble matter weighed.
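The chain-length determination reduces to a simple quotient: total polyP (measured in Pi units) divided by the number of chains. A minimal sketch, with made-up assay readouts rather than values from the study:

```python
def average_chain_length(total_polyp_as_pi_mM, chain_concentration_mM):
    """Average polyP chain length = total polyP concentration (measured
    as released Pi after complete enzymatic hydrolysis) divided by the
    chain concentration (one NADPH per chain in the enzyme cascade)."""
    return total_polyp_as_pi_mM / chain_concentration_mM

# Illustrative readouts: 4.8 mM total polyP (as Pi), 0.2 mM chains.
print(average_chain_length(4.8, 0.2))  # -> 24.0 P-subunits
```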
RESULTS
The envisaged workflow for the synthesis of bio-polyP included six process steps. In process Step 1, S. cerevisiae was starved in Pi-free medium (Pi starvation). The starved cell mass was subsequently moved to Pi-containing medium, where S. cerevisiae took up Pi and intracellularly polymerized it into polyP (Pi feeding, Step 2). The combination of Steps 1 and 2 is called polyP hyperaccumulation and was already developed in a previous study. In process Step 3, the polyP was liberated from the yeast cell and brought into aqueous solution. In process Step 4, the dissolved polyP was recovered and purified by precipitation. Afterward, the polyP was dried and milled (process Steps 5 and 6). Process Steps 3-6 were developed in this study.
Optimal conditions for the precipitation, drying, and milling of polyP

The first aim was to understand which process conditions were required for the precipitation, drying, and milling of polyP. The process conditions that were developed here with chemically produced polyP were later used for the synthesis of bio-polyP. To verify that polyP can be precipitated with an organic solvent, a sodium polyP (Budit 4) was precipitated with 2 vol of ethanol, isopropanol, or acetone, with or without NaCl (Figure 2a). PolyP was collected as a sticky viscous gel after precipitation. The recovery without NaCl was unsatisfactory with all organic solvents (≤44%). With the combination of either NaCl and ethanol, or NaCl and acetone, almost the entire polyP was recovered (95% and 97%, respectively). The recovery with isopropanol and NaCl was somewhat lower (92%). Because bio-polyP will be used as a food additive, ethanol was chosen, as it has the lowest toxicity of the three tested compounds. The different polyP cation compositions were achieved by displacing the counterions with either Na+ or K+.
Budit 4 was precipitated with a combination of different concentrations of NaCl or KCl and 2 vol ethanol (Figure 2b). At a concentration of 50-750 mM of either salt, Budit 4 was fully recovered (≥95%). Because the recovered polyP was measured gravimetrically, it was concluded that the NaCl itself did not precipitate. Slightly lower recoveries were observed at the remaining tested concentrations (92% and 91% with NaCl and KCl, respectively). Drying of the polyP gel was done in a desiccator that was filled with dried silica (without vacuum, Figure 2d). The initial water content of the sodium and potassium polyP gels amounted to 49.0% and 42.6%, respectively. After 7 days, water contents of 0.9% and 1.6% were measured in the sodium and potassium polyP gels, respectively. The water content of unprocessed Budit 4 was 0.2 ± 0.0% (mean ± standard error of the mean (SEM), five replicate measurements). The obtained water content was considered adequate for storage and milling (see the gravimetric sketch below).
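The residual water content of the dried gels is a simple gravimetric quotient; a minimal sketch with illustrative masses, not the study's raw weighings:

```python
def water_content(mass_wet_g, mass_dry_g):
    """Gravimetric water content: mass lost on drying divided by the
    wet mass, expressed as a percentage."""
    return 100.0 * (mass_wet_g - mass_dry_g) / mass_wet_g

# Illustrative weighings for a polyP gel before and after 7 days in a
# silica-filled desiccator.
print(f"{water_content(2.00, 1.02):.1f}%")  # -> 49.0%
```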
Budit 4 was recovered as a coarse white crust after drying. To obtain a homogeneous fine-grained powder, polyP was milled for 2 min in a bead beater. The fine-grained sodium and potassium polyP powders dissolved readily in water. As for all polyPs, prolonged vigorous stirring was necessary during dissolution to avoid the formation of clumps. The pH of a 1% (w/v) sodium polyP solution amounted to 7.4 ± 0.0, and the pH of the potassium polyP solution was measured at 7.2 ± 0.0 (mean between two independent batches ± SEM). With a pH of 7.0 before the precipitation, this indicated that the pH remained almost unchanged throughout precipitation, drying, and milling.
Biotechnological synthesis of polyP
The polyP content in polyP-rich S. cerevisiae amounted to 26.5 ± 0.8% polyP (as KPO3) in cell dry weight, with an average polyP chain length of 24 ± 1 P-subunits (mean ± SEM from three analytical extractions).
The starting protocol for the preparative extraction included a heat treatment to release the polyP from the cells, pH neutralization, and precipitation with NaCl-ethanol. The parameters of the heat treatment (1 hr, 70°C) were inspired by Kuroda et al. (2002), who employed those parameters to release polyP from sewage sludge. Two dependent variables (extraction efficiency and average polyP chain length) were analyzed during the optimization experiments. The amount of extracted polyP and the chain length were constant for 3.5-8 ml water per g wet cell mass (Figure 3a). About 5 ml water per g of wet cell mass was chosen. Interestingly, the bio-polyP did not precipitate as a gel but as a solid, due to the presence of higher-valent cations. An incubation time of 10 min led to the highest extraction efficiency and was thus chosen (Figure 3b). The chain length decreased significantly by 1 P-subunit per 10 min due to the heat-catalyzed hydrolysis of the polymer (multiple correlation coefficient r = .983, p < .001). No shorter incubation time was tested because quicker heating and cooling would increase the process cost in an upscaling. Because the highest extraction efficiency was found at an incubation temperature of 70°C, this temperature was chosen (Figure 3c). The pH after the heat treatment was acidic. Dilute NaOH was tested as an extractant instead of pure water to immediately neutralize the pH (Figure 3d). About 1 mM NaOH showed the same performance as pure water. Five and 10 mM NaOH decreased the extraction efficiency and the polyP chain length. Fifty and 100 mM NaOH increased the extraction efficiency but decreased the chain length profoundly, and led to precipitate formation during pH neutralization. Pure water was chosen. Dilute NaCl is commonly used to liberate RNA, which behaves chemically somewhat similarly to polyP, from yeast cells (Kuninaka, Fujimoto, Uchida, & Yoshino, 1980). All tested NaCl concentrations (1-200 mM) reduced both the extraction efficiency and the chain length (Figure 3e). To remove protein and nucleic acid, an HCl precipitation was inserted before the ethanol precipitation. PolyP keeps its negative charge, even at very low pH, due to the low pKa value (pKa = 0-3) of all but two hydroxyl groups per polymer. In contrast, protein and nucleic acid protonate and precipitate at low pH. The ratio of polyP to protein and nucleic acid was increased from 4.1 to 6.2 (w/w), while the extraction efficiency and chain length decreased only by 2.2 percentage points and 0.6 P-subunits, respectively, if 50 mM HCl was used (Figure 3f). Intermediate-chain polyP (41 P-subunits) was recovered in a fractional precipitation with 0.15 vol ethanol (26.9% extraction efficiency, Figure 3g,h). The remaining short-chain polyP (18 P-subunits, 52.5% extraction efficiency) was recovered by adding 1 final vol ethanol. Overall, 80% of the polyP was recovered, which agreed with the extraction efficiency that was obtained with only one precipitation step with 1 vol ethanol.
Physicochemical characterization of the bio-polyPs
The synthesis of bio-polyP was scaled up by a factor of 200 (from 5 mg to 1 g). In the previous section, the bio-polyP was liberated from the cells and precipitated with ethanol; the analytics were done with the pellet that was obtained after the ethanol precipitation. In this section, the bio-polyP was dried and milled. A potassium bio-polyP was produced as well as a sodium bio-polyP. The molecular structures of the four newly synthesized bio-polyPs are displayed in Figure 4. The results of the physicochemical characterization of the bio-polyPs, in comparison with the three longest commercial polyPs available in bulk, are shown in Table 1. All polyPs appeared as a fine-grained white powder, except for polyP P100, which was delivered as large pieces.

Figure 3 (caption): Liberation of bio-polyP from polyP-rich Saccharomyces cerevisiae and fractional precipitation of bio-polyP. The extraction efficiency was calculated by dividing the amount of recovered polyP by the amount of polyP that was extractable with the reference method (analytical polyP extraction from Christ & Blank, 2018). (a-h) show individual experiments that build upon each other. 100 mg wet cell mass (25% dry matter) was suspended in different volumes of water in a 2 ml reaction tube. The suspension was incubated for 1 hr at 70°C and 750 rpm with one 3.2 mm stainless steel bead per reaction tube to facilitate agitation. After the insoluble matter was removed by centrifugation, the supernatant was transferred to a new reaction tube. The pH was neutralized and the polyP precipitated with 100 mM NaCl and 1 vol ethanol. (a) The volume of water in which the cell mass was suspended was varied and set to 5 ml per g wet cell mass.

P100 contained a small amount of Pi (0.7%). The intermediate-chain bio-polyPs likewise contained no Pi. The sodium and potassium short-chain polyPs contained 1.8% and 0.6% Pi, respectively. The nucleic acid content of the bio-polyPs was estimated spectrophotometrically (a nonspecific method) to be, as desired, low (0.1-1.7%). NaCl or KCl was used to aid the ethanol precipitation. Neither precipitated, because no chloride was detected in the bio-polyPs. Arsenic, cadmium, calcium, chromium, copper, iron, lead, nickel, and vanadium were not detected either.
Mass fluxes in the biosynthesis of polyP
The substrates and products of the Pi starvation and the Pi feeding are displayed in Reaction 1 and Reaction 2, respectively. The production of the cell mass, which was required for Reaction 1, and the vitamins and trace elements in the Pi starvation medium were not included in the reactions. The mass balance of the preparative sodium polyP extraction is depicted in Reaction 3. The increase in the water volume stemmed from cell water.

DISCUSSION

Currently, there is only the chemical route to produce food-grade polyP on an industrial scale. Chemical polyP synthesis (a condensation reaction) is done by heating pure Pi at 400-800°C for several hours.
The main challenge of chemical polyP synthesis lies in its dependence on pure substrate (Pi). Pi is mined from Pi rock, purified, and imported as phosphoric acid into countries that do not possess Pi rock reserves.
The substrate for chemical polyP synthesis (Pi) is obtained by pH neutralization of phosphoric acid with NaOH. Problems associated with Pi rock mining include an uneven global Pi rock distribution, the limited nature of Pi rock, environmental destruction and pollution during Pi rock mining, contamination of Pi rock with toxic and radioactive elements, and transportation cost (Reta et al., 2018). Strategies for the more efficient use of Pi and the recycling of Pi from unused Pi waste streams must be developed to sustain human life on earth.
We developed a green biotechnological process to synthesize pure food-grade polyP with S. cerevisiae. The biotechnological polyP synthesis consumes less energy than the chemical synthesis because it is done at ≤30°C. In comparison with chemical polyP synthesis, S. cerevisiae can directly utilize low Pi concentrations (ca. 10-60 mM Pi) from impure sources. Chemical polyP synthesis cannot be done from such waste streams without extraction and extensive purification of the Pi. In this study, we used pure Pi to feed S. cerevisiae. Different kinds of waste streams should be tested as Pi sources in future studies. The primary requirements for our process are that the Pi is dissolved and that other dissolved substances do not inhibit S. cerevisiae excessively. Food-grade Pi waste streams, such as agricultural plant waste (Carraresi, Berg, & Bröring, 2018; Herrmann, Ruff, & Schwaneberg, 2020; Herrmann, Ruff, Infanzon, & Schwaneberg, 2019) and some spent fermentation broths, would allow the production of food-grade bio-polyP. There are many applications of polyP not related to food (e.g., paint, fertilizer, cleaning agents, and flame retardants). Nonfood-grade Pi waste streams, such as industrial wash water and sewage sludge ash, can be used for biotechnological technical-grade polyP production.

Figure 5 (caption): PAGE analysis of bio-polyP and chemically produced polyP. The DNA low range ladder (NEB) was used in lane 1. The DNA fragments measured 766, 500, 350, 250, 200, 150, 100, 75, 50, and 25 base pairs. The depicted polyP chain lengths were calculated according to Smith et al. (2018). Lanes 2 and 3, 4 and 5, 6 and 7, and 8 and 9 show individual batches. Abbreviations: Inter., intermediate; K, potassium; Na, sodium.
In the Heatphos process, the released polyP is precipitated with Ca2+ and used as fertilizer.
The Heatphos polyP cannot be used in food because the product is neither food-grade (Pi source: sewage sludge) nor water-soluble (calcium polyP). The water solubility of polyP is important for food applications, where polyP can only display its desired physicochemical properties when dissolved. The process described here bypasses the disadvantages of the Heatphos process by extracting the polyP from S. cerevisiae and precipitating the polyP with NaCl (or KCl) and ethanol.
Relationship between effectiveness and match outcome in the Spanish Water Polo League
The purpose of the present study was to identify the indicators of offensive effectiveness which best discriminate by match score (favourable, balanced or unfavourable) in water polo. The sample comprised 88 regular season games (2011-2014) from the Spanish Professional Water Polo League. Univariate (ANOVA, Kruskal-Wallis, and Generalized Linear Model (GLM) tests) and multivariate (discriminant) analyses were used to compare favourable, balanced and unfavourable games, and effect sizes of the differences for the indicators were calculated. The results showed that favourable games had significantly higher averages for the success rate in even attacks and shots, power-play attacks and shots, counterattacks and counterattack shots, shots from zones 1, 2, 3, 4, 5 and 6, drive shots, and shots after 1, 2 and more than 2 fakes. The indicators of offensive effectiveness that discriminated most were the success rates for drive shots (SC = -.624), even attacks and shots (SC = -.359 and SC = -.322, respectively), and power-play actions and shots (SC = -.343 and SC = -.321, respectively). These results could help coaches when planning training and competition, providing them with the percentages of offensive effectiveness that must be reinforced in order to have more chances to win the match. This information can help coaches to evaluate their teams and to design training aimed at improving their weakest skills.
Introduction
Identifying the determinants of success in team sports is a major topic in the scientific community, and the available research has grown intensively in the last few years (Sampaio, Lago, Casáis & Leite, 2010). Nowadays, coaches prepare the competition and training process using notational analysis with the aim of improving both the team's and the players' performances (Hughes & Franks, 2004; Ortega, Serna, Lupo & Sampaio, 2009; Leite, Baker & Sampaio, 2009). Notational analysis has been described as the process of recording, treatment and diagnostics of events that take place in competition (Drust, 2010).
Game-related statistics are very popular among coaches; a selection or combination of these statistics whose aim is to define some or all aspects of performance is a performance indicator (Hughes & Bartlett, 2002). Players and researchers have used these performance indicators to improve the understanding of game performance in different types of competitions. Thus, a large number of research works have studied performance indicators in short-term competitions such as Olympic Games, World Championships, or European Championships (Escalante, Saavedra, Mansilla & Tella, 2011; Escalante, Saavedra, Tella, Mansilla, García & Domínguez, 2012; Escalante, Saavedra, Tella, Mansilla, García & Domínguez, 2013; García-Marín & Argudo, 2017; Lupo, Condello, Capranica & Tessitore, 2014; Lupo, Condello & Tessitore, 2012a; Martínez & González, 2020; Sabio, Argudo, Guerra & Cabedo, 2021; Sabio, Guerra & Cabedo, 2018). For example, some research identified offensive characteristics (centre goals, power-play goals, counterattack goals, assists, offensive fouls, steals, blocked shots, and won sprints) and defensive characteristics (goalkeeper-blocked shots, goalkeeper-blocked inferiority shots, and goalkeeper-blocked 5 m shots) which distinguished performance for each phase in international championships (Escalante et al., 2013). They also graded a global efficacy (i.e., preliminary, classificatory, and final phases: 92%, 90%, and 83%, respectively). Other studies have focused on analysing performance indicators in a regular season (García, Touriño & Iglesias, 2015; Iglesias, García & Touriño, 2018; Lupo, Tessitore, Mingati & Capranica, 2010; Lupo, Tessitore, Mingati, King, Cortis & Capranica, 2011). Specifically, from the data of a large and extensive sample in a water polo league, offensive indicators were identified which distinguished performance based on the match score (favourable, balanced or unfavourable), suggesting that winning teams (favourable games) have higher averages for counterattack attacks and shots, goals, and goals from zones close to the goal (zones 5 and 6), whereas losing teams (unfavourable games) have higher averages in even attacks and shots, no-goal shots, and shots originating from zones far from the goal (zones 2 and 4) (García et al., 2015). In the same way, Iglesias et al. (2016) searched for the differences between strong and weak teams depending on their final classification in the league competition, and they found that strong teams made more counterattacks, counterattack shots, goals, penalties achieved, shots originating from zones 5 and 6, and shots after 2 fakes than weak teams, whereas they made fewer even attacks, even shots, no-goal shots, shots originating from zones 2 and 4, and drive shots than weak teams. A substantial contribution to the understanding of performance analysis in team sports is the investigation of situational variables that can influence team performance at a behavioural level, such as the quality of the opponent, the starting quarter score and the match location (García, Touriño & Iglesias, 2017; Gómez, Delaserna, Lupo & Sampaio, 2014; Gómez, Lago-Peñas, Viaño & González-García, 2014; Ruano, Serna, Lupo & Sampaio, 2016).
Describing the success rate for the different offensive actions in water polo according to the match score and identifying performance indicators of offensive effectiveness associated with winning are useful for developing reference values for water polo matches. These values can be used by coaches and support staff to inform practical guidelines for technical and tactical development. Reference values can assist in understanding the variability of team performance, aid coaches in establishing quantifiable objectives for training and competition performance, and help evaluate the efficacy of training interventions and tactical changes. Knowledge of performance indicators of offensive effectiveness can also be used to create performance profiles to predict team behaviours and performance outcomes. However, only a few studies have analysed performance based on indicators of offensive effectiveness between match scores in water polo competitions (Argudo, 2009; Argudo, Alonso, García & Ruiz, 2007; Argudo, Ruiz & Abraldes, 2007; Argudo, Ruiz & Abraldes, 2010; Hraste, Jelaska & Granic, 2016; Sabio et al., 2021). Therefore, the aim of the present study was to identify the indicators of offensive effectiveness which best discriminate between unfavourable, balanced and favourable match scores in regular season games.
Participants
The non-probability sample comprised 88 games (2 performances per game, 176 in total) corresponding to 10 teams from the First Spanish Professional Water Polo League during 3 seasons (2011-2014). This sample represented 22.2% of all the matches played.
Measures
The dependent variable was the match score (unfavourable, balanced, and favourable). In relation to the match score, we considered a balanced score (goal difference ≤ 3) and an unfavourable or favourable score (goal difference > 3), using k-means cluster procedures (a sketch follows below).
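As an illustration of the clustering step, the sketch below applies scikit-learn's KMeans to final goal differences. The data are made up, not the study's sample; with k = 3, the clusters group games analogously to the balanced/unbalanced cut-off described above.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative final goal differences (team minus opponent).
diffs = np.array([-9, -6, -5, -4, -2, -1, 0, 1, 2, 3, 4, 5, 7, 10],
                 dtype=float).reshape(-1, 1)

# k-means with 3 clusters: unfavourable, balanced, favourable groups.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(diffs)
for d, lab in zip(diffs.ravel(), labels):
    print(f"goal difference {d:+.0f} -> cluster {lab}")
```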
Eighteen potential performance indicators of offensive effectiveness (Table I) were used as independent variables to compare the match scores described previously. These variables are defined as the success rates of the offensive actions, namely, the percentage of offensive actions (attacks and shots) of each type that end in a goal (also called the percentage of successful actions).
Procedures
Data were obtained using a video camera and a match analysis system (LongoMatch, version 0.20.8, Barcelona, Spain). The camera was positioned at one side of the pool, at the level of the midfield line.
Data reliability was assessed through intra- and inter-observer testing procedures (James, Taylor & Stanley, 2007). Intra-observer reliability was assessed by the first author of this study, an experienced observer with more than 300 water polo matches analysed. Three randomly selected matches were coded and, after a 6-week period, the matches were re-analysed, with the data being compared with those of the original coding sessions. The second author of this study, after two weeks of training in data collection, completed inter-observer reliability testing. He coded each of the three matches, and his data were compared with those of the experienced observer's first coding session. Intra- and inter-observer agreements were evaluated via the Kappa index and were globally 0.97 and 0.79, respectively.
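Cohen's kappa corrects raw observer agreement for agreement expected by chance. A minimal sketch using scikit-learn, with invented event codings rather than the study's data:

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative codings of the same offensive actions by two observers.
observer_1 = ["goal", "miss", "goal", "block", "miss", "goal", "block"]
observer_2 = ["goal", "miss", "goal", "block", "goal", "goal", "block"]

# Kappa of 1.0 means perfect agreement; 0 means chance-level agreement.
print(cohen_kappa_score(observer_1, observer_2))
```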
Statistical analysis
The basic descriptive statistics (mean, standard deviation, count) of the offensive effectiveness variables were calculated separately by match score. Normal distribution was checked with the Kolmogorov-Smirnov and Shapiro-Wilk tests. To compare the distribution of the variables between favourable, balanced or unfavourable scores, different tests were used: one-way ANOVA to compare means, the Kruskal-Wallis test to compare medians, and a GLM with binomial response for the percentage variables. A significance level of 5% was considered.
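For readers unfamiliar with the two univariate tests, the sketch below applies both to made-up success-rate samples (not the study's data) using SciPy:

```python
import numpy as np
from scipy import stats

# Illustrative success rates (%) of one indicator per match-score group.
favourable = np.array([62, 58, 65, 60, 63])
balanced = np.array([51, 49, 55, 50, 53])
unfavourable = np.array([40, 44, 38, 42, 41])

# One-way ANOVA compares group means; Kruskal-Wallis compares
# distributions without assuming normality.
print(stats.f_oneway(favourable, balanced, unfavourable))
print(stats.kruskal(favourable, balanced, unfavourable))
```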
Subsequently, the results were subjected to a discriminant analysis. The dependent variable was the match score, and the independent variables were those indicators of offensive effectiveness with a p-value < .05 in the one-dimensional tests. Indicators with structural coefficients (SC) greater than or equal to 0.30 were considered relevant (Sampaio, Ibáñez, Lorenzo & Gómez, 2006; Tabachnick & Fidell, 2007).
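A minimal sketch of the discriminant step with scikit-learn, on synthetic data rather than the study's sample; the structure coefficients are computed here as correlations between each indicator and the first discriminant scores, which is one common operationalization:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic data: 30 games x 3 effectiveness indicators (%), plus a
# match-score label per game (0 unfavourable, 1 balanced, 2 favourable).
X = rng.normal(loc=[[45, 50, 55]], scale=8, size=(30, 3))
y = rng.integers(0, 3, size=30)

lda = LinearDiscriminantAnalysis().fit(X, y)

# Structure coefficients: correlation of each indicator with the first
# discriminant function's scores (|SC| >= 0.30 treated as relevant).
scores = lda.transform(X)[:, 0]
sc = [np.corrcoef(X[:, j], scores)[0, 1] for j in range(X.shape[1])]
print(sc)
```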
The eigenvalue (small = 0.1; medium = 0.3; high = 0.5) (Cohen, 1988), the canonical correlation index, Wilks' lambda, and the percentage of correct classification were used to measure the discriminant power. The homogeneity assumption was evaluated with Box's M test. All statistical analyses were performed using SPSS software release 18.0 (SPSS Inc., Chicago, IL, USA).

Table I (fragment): indicators of offensive effectiveness (Lupo et al., 2012a).

% Successful penalty shots: percentage of successful penalty shots with respect to total penalty shots.

Origin of shots (see Figure 1) ("Zone"):
% Successful shots zone 1 (SS1): percentage of successful shots originating from zone 1 with respect to total shots from zone 1.
% Successful shots zone 2 (SS2): percentage of successful shots originating from zone 2 with respect to total shots from zone 2.
% Successful shots zone 3 (SS3): percentage of successful shots originating from zone 3 with respect to total shots from zone 3.
% Successful shots zone 4 (SS4): percentage of successful shots originating from zone 4 with respect to total shots from zone 4.
% Successful shots zone 5 (SS5): percentage of successful shots originating from zone 5 with respect to total shots from zone 5.
% Successful shots zone 6 (SS6): percentage of successful shots originating from zone 6 with respect to total shots from zone 6.

Technical execution ("Fakes"):
% Successful drive shots (SDS): percentage of successful drive shots with respect to total drive shots.
% Successful shots after 1 fake (S1FS): percentage of successful shots after 1 fake with respect to total shots after 1 fake.
% Successful shots after 2 fakes (S2FS): percentage of successful shots after 2 fakes with respect to total shots after 2 fakes.
% Successful shots after more than 2 fakes (SM2FS): percentage of successful shots after more than 2 fakes with respect to total shots after more than 2 fakes.

Results

Table II presents basic descriptors of offensive effectiveness by match score (favourable, balanced and unfavourable) in the men's games, together with the corresponding one-dimensional test results. Sixteen indicators of offensive effectiveness differed between the match scores. The indicators with statistically significant differences were SEA (p<.001), SPO (p<.01), SCO (p<.01), SES (p<.01), SPOS (p<.01), SCOS (p<.05), SS1 (p<.05), SS2 (p<.01), SS3 (p<.001), SS4 (p<.05), SS5 (p<.001), SS6 (p<.001), SDS (p<.001), S1FS (p<.01), S2FS (p<.05), and SM2FS (p<.05).
Discussion
The aim of the current study was to identify the indicators of offensive effectiveness which best discriminate between match scores (favourable, balanced or unfavourable) in regular men's water polo seasons. The main finding was that sixteen performance indicators differentiated between unfavourable, balanced and favourable games (even attacks and successful shots, power-play attacks and successful shots, counterattacks and successful shots, successful shots from zones 1, 2, 3, 4, 5 and 6, successful drive shots, and successful shots after 1, 2 and more than 2 fakes). The indicators of offensive effectiveness that discriminated most were the success rate for drive shots, followed by the success rates for even attacks, power-play, even shots, and power-play shots, respectively. These results could help coaches plan and structure their training and competitions.
There were 16 performance indicators of offensive effectiveness that differentiated between the match scores. The results indicate that winning teams (>3 goals) made more successful even attacks and shots, successful power-play attacks and shots, successful counterattacks and shots, shots from zones 1, 2, 3, 4, 5 and 6 ending in a goal, successful drive shots, and shots after 1, 2 and more than 2 fakes ending in a goal than losing teams, while the teams with balanced match scores made more successful shots after 2 fakes. These results are indicative of the importance of the efficiency of actions in water polo. They are consistent with those of a study (Escalante et al., 2012) in which similar values of offensive performance indicators differentiating winning and losing teams were found. In the same way, a study of the 10th Water Polo World Championship concluded that efficacy values in the micro-situations in numerical equality (even attacks), in counterattack and in simple temporary numerical inequality (power-play) were significantly different between winners and losers, while in the penalty they were not significantly different. Also, our results are in line with other team sports such as handball, where the greatest effectiveness of the winning teams has been found in all the parameters of final actions of the attack (Foretic, Rogulj & Trninic, 2010). On the other hand, in basketball, the winning and losing teams play differently in regular season and playoff games (García, Ibáñez, De Santos, Leite & Sampaio, 2013). The regular season games were dominated by the importance of assists, showing the relevance of teamwork during this phase. On the contrary, the playoff games were dominated by the importance of effectiveness in defensive rebounding. These findings highlight the interest in studying effectiveness according to the type of competition, which is why values of offensive effectiveness in regular water polo seasons are provided in this study.
The performance indicators of offensive effectiveness introduced in the discriminant analysis were those that were significant in the one-dimensional tests. The correct classification percentages achieved by the model were 72.1% (original sample) and 48.8% (cross-validation). According to the first discriminant function, the effectiveness indicator that most discriminated between match scores was SDS (SC = -.624), indicating that the winning teams made more successful drive shots. Considering that the most frequently performed technical shot was the drive shot (63%-70% of all performed shots) (García et al., 2015), it is not surprising that success in this type of shot is the most discriminant between match scores.
The second effectiveness indicator in terms of discriminant power was SEA (SC = -.359), followed by SPO (SC = -.343), SES (SC = -.322), and SPOS (SC = -.321), pointing out that the percentages of successful even and power-play actions and shots were very important for distinguishing between match scores. However, in the study by García et al. (2015), the winning teams (>3 goals) performed more counterattacks, while losing teams (>3 goals) performed more even attacks. Although these results may seem contradictory, they are really compatible, and highlight the importance not only of performing some specific actions but also of being effective in them. In fact, the effectiveness of power-play shots was a performance indicator which discriminated between winning and losing teams in the final phase of the 2008 Olympic Games held in Beijing (Escalante et al., 2011). In the same way, some authors concluded that winning and losing teams had approximately the same opportunities to play with a numerical advantage, but in other studies, where matches without penalties were selected, the results showed the importance of the performance indicators related to numerical inequality (exclusions, power-play attacks and shots). These studies reinforce our results, where success in even and power-play actions is very useful for distinguishing between match scores.
Concerning the limitations of the current study, we should underline that, although the sample size was the largest one used in an analysis of these characteristics in water polo research, the sample was not random, because of the difficulty of obtaining the match recordings. This generated an unbalanced design of the match scores (favourable, balanced or unfavourable).
Conclusion
In summary, the results of the current study are an important contribution to sport performance analysis in water polo, since this paper presents reference values for the performance indicators of offensive effectiveness according to match score in regular men's water polo seasons.
There are two main conclusions to be gathered from the study of these indicators. Firstly, the effectiveness of the teams was decisive in discriminating between match scores: the winning teams had significantly higher averages in all indicators of offensive effectiveness except one (penalties). Secondly, the percentages of successful drive shots and of successful even and power-play actions and shots were the offensive performance indicators that most discriminated between match scores.
In practical applications, these results could help coaches when planning training and competition, providing them with the percentages of offensive effectiveness that must be reinforced in order to have more chances to win the match. This information can help coaches to evaluate their teams and to design training aimed at improving their weakest skills.
Porous Zinc Oxide Thin Films: Synthesis Approaches and Applications
Zinc oxide (ZnO) thin films have been widely investigated due to their multifunctional properties, i.e., catalytic, semiconducting and optical, and have found practical use in a wide number of application fields. However, the presence of a compact micro/nanostructure has often limited the resulting material properties. Moreover, with the advent of low-dimensional ZnO nanostructures featuring unique physical and chemical properties, the interest in studying ZnO thin films has steadily diminished. Therefore, the possibility to combine the advantages of thin-film based synthesis technologies with a high surface area and a porous structure might represent a powerful solution to prepare ZnO thin films with unprecedented physical and chemical characteristics that may find use in novel application fields. Within this scope, this review offers an overview of the most successful synthesis methods that are able to produce ZnO thin films with both framework and textural porosities. Moreover, we discuss the related applications, mainly focused on the photocatalytic degradation of dyes, gas sensor fabrication and photoanodes for dye-sensitized solar cells.
Introduction
Zinc Oxide (ZnO) is a well-known metal oxide material showing interesting biocompatible [1,2], semiconducting [3], optical [4], photocatalytic [5], resistive switching [6] and piezoelectric properties [7]. Among the main advantages, the easy preparation of ZnO in the form of thin films [8], nanowires [9], nanorods [10], crystalline nanoparticles [11] and flower-like structures [12] has strongly encouraged its investigation for various applications, including ultraviolet (UV) photodetectors [13], photoanodes [14], photocatalysis [5], gas sensors [15] and energy harvesting systems [16]. Historically speaking, ZnO thin films prepared by means of several synthetic approaches have been first investigated, and the resulting properties exploited for a huge number of application fields [17]. These include surface acoustic wave sensors [18], thin-film based transistors [19] and gas sensors [20]. However, ZnO thin-film based technologies suffer from some major limitations, mainly due to their intrinsic low surface area combined with the lack of a framework porosity, i.e., porosity contained within each particle composing the framework [21]. Actually, these aspects are of particular importance especially for bio- and gas-sensing applications; low surface areas prevent effective surface chemical modification treatments, limiting the sensing response and selective properties. On the other side, the absence of framework porosities prevents the possibility to host molecules of interest such as drugs and proteins, thereby limiting the use of ZnO thin films in biomedical applications like drug-delivery systems and tissue engineering. Some alternative solutions have been explored in view of improving at least the surface area. To this purpose, plasma-assisted chemical vapor deposition (CVD) approaches represented a valid solution for preparing low-density ZnO thin films [22].
Actually, plasma-CVD allowed the catalyst-free growth of ZnO nanocolumnar thin films with a more pronounced textural porosity, i.e., porosity due to voids and spaces formed by contacts among nanocolumns. However, no framework porosity, i.e., pores within the single ZnO nanocolumn, was achieved. Anyway, thanks to the higher surface area, the proposed ZnO nanocolumnar films were successfully applied to gas sensing [23], solar cells [24] and photocatalysis [25].
Most of the limitations mentioned above have been successfully overcome with the advent of low-dimensional ZnO structures. A wide plethora of synthesis methods have been explored and optimized, allowing ZnO structures with various shapes and morphologies to be obtained, ranging from the micrometer to the nanometer scale. Among them, ZnO hollow-sphere particles and quantum dots are the most promising ones [26-28]. With a high surface area combined with a framework porosity, such low-dimensional ZnO structures exhibited improved physical properties, thanks to the presence of physical quantum confinement effects occurring in low-dimensional nanomaterials [29-31]. Despite the promising behaviors, some major concerns still prevent the integration of low-dimensional ZnO nanomaterials into final product applications, such as limitations in the scalability of the synthetic approaches towards large areas, as well as the reproducibility of the resulting material properties. Therefore, the thin-film based technology still represents one of the most valid solutions in terms of industrial scalability and integration of functional materials into product applications.
Within this scope, this review aims at presenting an overview of the synthesis and applications of porous ZnO thin films with well-defined textural and framework porosities. The most successful methods are found to be physical vapor depositions, especially pulsed laser deposition and sputtering techniques, electrodeposition and spray pyrolysis. Other wet-chemistry approaches and template-assisted growth methods are discussed as well. In the next paragraphs, the main achievements in terms of various porous ZnO morphologies and corresponding application properties are discussed and correlated to each specific synthetic approach.
Physical Vapor Deposition of Porous ZnO Thin Films
Physical Vapor Deposition (PVD) methods are based on the formation of a vapor phase from a solid source material and its subsequent condensation on a substrate surface. Atoms and/or molecules making up the vapor phase are physically extracted from the source material. This extraction process can be pursued by using various sources of energy, each one characteristic of the particular deposition method. For example, the presence of plasma is required for the sputtering process, while high-energy photons coming from a laser source are exploited in the case of pulsed laser deposition (PLD). The formation process of thin films may be roughly summarized into the adsorption, nucleation and coalescence steps. The first one deals with the adsorption of atoms and/or molecules, coming from the vapor phase, on a substrate surface (adatoms). This process is driven by physisorption, i.e., weak electrostatic interactions due to Van der Waals forces, and/or chemisorption, i.e., the formation of strong chemical bonds between atoms and the surface. After adsorption, the nucleation and coalescence steps take place. In such situations, different adatoms start to aggregate together (nucleation), resulting in the formation of islands. These can further increase in dimensions and coalesce together, finally leading to the formation of a continuous thin-film network that covers the whole substrate surface, if desired. Depending on the specific deposition parameters, each of the abovementioned steps can be properly influenced to promote the growth of islands separately, avoiding the formation of a compact film. The final result would in this case be a porous thin film, with specific micro/nanostructures and morphologies.
The following paragraphs present an overview of the main results achieved for the growth of high-surface-area, porous ZnO thin films by PVD methods, with a particular focus on the use of sputtering and PLD techniques.
Sputtering
Sputtering is a plasma-assisted PVD process where collisions between high-energy ions and the source material (target) are exploited for the formation of a vapor phase. Plasma is obtained by injecting a noble gas (usually argon) into the deposition chamber, which is ionized by the application of a proper direct-current (DC)/radio-frequency (RF) signal voltage between a cathode, where the target is clamped, and the rest of the chamber. The impinging of ions on the target surface allows the extraction of atoms. Once the vapor phase is formed, condensation on the substrate surface takes place and thin-film formation may be pursued by the following nucleation and coalescence steps. Sputtering technology has been widely investigated because of its multiple advantages, as it is a high-yield production technology, compatible with integrated-circuit processing, and allows for the homogeneous deposition of materials on wide-area substrates. Moreover, sputtering does not necessarily require the use of high deposition temperatures. Therefore, it is compatible with the use of a wide range of substrates, including polymers.
The possibility to obtain a uniform distribution of nanopores in sputtered ZnO thin films was exploited for the fabrication of bio-electrodes for cholesterol detection [32]. ZnO thin films were grown on gold-coated glass substrates by RF magnetron sputtering, using a very high deposition pressure (50 mTorr). This allowed the introduction of a uniform distribution of nanopores within the thin-film network, as confirmed by Atomic Force Microscope (AFM) analyses. The formation of this particular structure (average rms roughness ~4 nm) and of the nanopores was mainly due to the high sputtering pressure used for the growth of the ZnO thin film, since it induced a strong in-situ bombardment of the growing thin film by high-energy species. The resulting high surface area was successfully exploited to immobilize cholesterol oxidase (ChOx) enzyme onto the nanoporous ZnO thin-film/Au/glass bio-electrodes. Both cyclic voltammetry measurements and optical studies revealed a stable and linear response of the ChOx/ZnO/Au bio-electrode up to 10 weeks, coupled with a promising sensitivity (detection of cholesterol concentrations in the range 25-400 mg•dL−1).
Instead of using high-pressure regimes during sputtering deposition, an alternative way to introduce a controlled porosity within the thin-film structure is the use of a glancing-angle sputter deposition approach. By following a one-step oblique-angle deposition method, non-polar ZnO thin films showing a high crystal quality and porosity were successfully grown on glass substrates [33]. In this case, the sputtering gun was collimated at an oblique angle of 30° with respect to the substrate surface, without any substrate rotation. Figure 1 shows the particular surface morphology featured by ZnO thin films obtained with this method. These were composed of highly crystalline ZnO microrods (approximately 1-2 µm in length and 200-600 nm in width), mainly oriented along the [002] crystallographic direction, nearly parallel to the substrate surface. At the beginning of the deposition process, the ZnO microrods were densely packed against each other. Then, formation of pores was observed with increasing film thickness. This approach favored the gradual rotation of the c-axis growth direction, from the vertical to the nearly lateral direction with respect to the substrate, finally leading to the formation of gaps between neighboring crystal grains, and hence to the formation of pores.
Instead of using high pressure regimes during sputtering deposition, an alternative way to introduce a controlled porosity within the thin film structure is the use of a glancing-angle sputter deposition approach.By following a one-step oblique-angle deposition method, non-polar ZnO thin films showing a high crystal quality and porosity were successfully grown on glass substrates [33].In this case, the sputtering gun was collimated at an oblique angle of 30° with respect to the substrate surface, without any substrate rotation.Figure 1 shows the particular surface morphology featured from ZnO thin films obtained with this method.These were composed by highly crystalline ZnO microrods (approximately 1-2 μm in length and 200-600 nm in width), mainly oriented along the [002] crystallographic direction, nearly parallel to the substrate surface.At the beginning of the deposition process, the ZnO microrods were densely packed to each other.Then, formation of pores was observed by increasing the film thickness.This approach favored the gradual rotation of the c-axis growth direction, from the vertical to the nearly lateral direction with respect to the substrate, finally leading to the formation of gaps between neighbor crystal grains, and hence to the formation of pores.Alternatively, the introduction of a tunable, porous microstructure within ZnO thin films has been observed by using unbalanced magnetron sputtering conditions [34].To prove the effect of the magnetron configuration, the porosity of ZnO thin films sputtered in three different types of magnetron electrode configurations was considered, and its effect on the resulting crystal structure and UV photo-response investigated.The unbalanced conditions were obtained by progressively lowering the strength of the central magnet in the magnetron, in order to increase the ion and electron flux at the substrate.Accordingly, the plasma confinement conditions could be changed.In the case of weak central magnet conditions, a very intense bombardment effect on the substrate occurred, with the erosion of the source material occurring mainly in the center.On the other hand, minimal bombardment effects were obtained in presence of a balanced magnetron configuration.X-ray Diffraction (XRD) analyses revealed the transition from a randomly oriented, polycrystalline ZnO thin film with no (002) orientation for the unbalanced configuration, to a strong c-axis orientation along the (002) direction for the balanced case.In comparison with high (002)-oriented dense columnar ZnO thin films, the presence of a mixed crystallographic orientation, promoted from the unbalanced magnetron conditions, favored films transparency, the formation of smaller grain size and the arise of a porous microstructure.The porous voids, coupled to the lower kinetic energy of species sputtered in unbalanced magnetron conditions, favored oxygen trapping within the thin film structure, especially at grain boundaries.Such trapped oxygen actively participated to photo-desorption and adsorption processes occurring during UV irradiation of the sample, thereby improving UV photoresponse (rise time of 792 ms and fall time of 805 ms under low radiation intensity of 9.5 mW•cm 2 at λ UV = 365 nm).In contrast, no appreciable UV photoresponse was observed for dense ZnO films.
Porous ZnO thin films were also obtained by thermally oxidizing metallic Zn films deposited by DC sputtering [35]. The effect of using different pressure conditions (2 and 10 mTorr) and deposition atmospheres (pure Ar instead of mixed Ar + O2, 10%) was first investigated. Figure 2a shows the morphology of Zn films grown with an Ar pressure of 2 mTorr. The presence of hexahedron-like particles (average size of 200 nm), appearing as stacks of many flat facets, was noticed. By further increasing the pressure, the particle size slightly decreased and the outer flat facets broke. By including a small oxygen percentage in the deposition atmosphere, the appearance of very fine particles (~20 nm) was observed, as shown in Figure 2b. These were interconnected, forming a porous network. When the total pressure increased to 10 mTorr, the film showed many clusters (~150 nm) made of fine particles (~50 nm). The metallic Zn films were further oxidized in air at 600 °C for 1 h to be totally converted into ZnO, as confirmed by XRD analyses. The morphologies of the resulting ZnO thin films were strongly correlated with the deposition conditions of the starting Zn films.
Figure 2c shows the surface morphology of ZnO films obtained starting from Zn layers grown in pure Ar at 10 mTorr. Independently of the pressure value used during Zn deposition, a dense and compact structure was obtained after thermal oxidation, together with the presence of oxide whiskers on the surface. On the other hand, the surface morphology changed after the oxidation of Zn films obtained from a mixed Ar + O2 atmosphere. In this case, the Zn films grown at low pressure still showed a relatively dense structure, made of very fine particles (~40 nm) and tower-like clusters due to some particle aggregation. However, the films deposited at higher pressure possessed a porous structure composed of particles in the range of 60-90 nm after oxidation, as clearly visible in Figure 2d. The observed morphological changes for both Zn and ZnO thin films and the presence of a porous structure were mainly discussed in terms of the deposition atmosphere conditions. The incorporation of oxygen during sputtering resulted in the formation of two phases, Zn and ZnO, and in the promotion of fine particles, eventually showing a Zn/ZnO core-shell configuration. Such structures could promote the nucleation of oxides in the initial oxidation stage, inhibit the evaporation of molten components and limit preferential growth along specific directions, thus resulting in the formation of porous films with fine particles and without whisker oxides. Finally, the optical properties of the samples were investigated and correlated to the corresponding morphologies. Dense ZnO films obtained from Zn films deposited in pure Ar exhibited low optical transmittance in the visible light region, extremely strong UV emission and weak defect-related photoluminescence (PL) emissions. On the other hand, the porous multiphase ZnO showed high transparency and relatively strong defect-related PL emission at room temperature.
In a similar way, porous nanobranched ZnO thin films with average thicknesses ranging from a few µm up to tens of µm were easily fabricated by a two-step synthetic approach, involving RF magnetron sputtering of metallic Zn films and their oxidation by thermal annealing in ambient air at 380 °C for 2 h [36] or, alternatively, by a low-temperature water-vapor oxidation treatment [37]. The synthesis of the metallic Zn films was performed at room temperature in a pure Ar atmosphere, using very mild conditions in terms of applied RF signal, Ar flow and pressure. In this way, a porous metallic network with a very high surface area was obtained from the very beginning of the synthesis process (Figure 3a), and was completely preserved after the oxidation treatments mentioned above (Figure 3b,c).
The possibility to obtain the so-called "sponge-like" porous thin-film morphology was explained through a modified "structure zone" model [36]. According to this model, specific thin-film morphologies are defined considering the ratio between the substrate temperature and the melting temperature of the deposited material. Therefore, totally different morphologies can be obtained by changing the substrate temperature. On the basis of the structure zone model, the substrate temperature should lie at around ~350 K so that the sponge-like morphology can be formed in the specific case of Zn thin films (melting temperature ~690 K). Such local heating can be easily achieved during sputtering depositions without providing any intentional heating to the substrates. This is due to the energy exchange occurring when high-energy particles coming from the vapor phase collide with the substrate surface.
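The quantity behind such structure-zone arguments is the homologous temperature, i.e., the ratio between the substrate temperature and the melting temperature of the deposited material. A minimal Python sketch of this bookkeeping is given below, using the two temperatures quoted above for Zn; the zone boundaries themselves are not computed, since different versions of the model place them differently.

def homologous_temperature(t_substrate_k, t_melting_k):
    # Ratio used by structure-zone models to classify thin-film morphologies
    return t_substrate_k / t_melting_k

t_s = 350.0   # K, substrate temperature reached by plasma heating alone (from the text)
t_m = 690.0   # K, approximate melting temperature of Zn (from the text)
print(f"T_s/T_m = {homologous_temperature(t_s, t_m):.2f}")  # ~0.51, the regime giving the sponge-like Zn network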
The developed nanobranched ZnO thin films were successfully exploited for a wide range of applications. By taking advantage of promising electrical and optical properties, in combination with a high specific surface area, the abovementioned porous ZnO layers allowed optimal dye loading, resulting in efficient photoanodes for the fabrication of dye-sensitized solar cells (DSSCs) with a solar conversion efficiency up to 4.58% [38]. The porous and almost isotropic nanobranched network also promoted fast charge transport and a good interaction with electrolyte solutions. These factors resulted in superior performances of the porous ZnO matrix when tested in lithium cells for prolonged times, with an almost stable specific capacity higher than 50 µA·h·cm⁻² and high Coulombic efficiencies [39]. On the other hand, photocurrent values up to four orders of magnitude higher than those measured in dark conditions underlined their promising UV sensing capability [40]. Additionally, such porous ZnO films showed encouraging piezoelectric properties [41], especially if compared to those obtained from ZnO thin films showing the more conventional dense microstructure [42]. In particular, upon external mechanical stimulation of the nanobranched ZnO structures, intense piezoelectric output voltage peaks and power density values were achieved, suggesting their promising use for sensing and energy harvesting applications [40,42]. The improved piezoelectric behavior was ascribed to the higher defectiveness of the porous structure with respect to the long-range ordered one typical of dense ZnO thin films. This led to a general reduction in free carrier concentration and mobility, in turn limiting the screening potential and improving piezoelectric voltage generation at the same time. Lastly, the presence of a high porosity and hydrophilic behavior represented the key elements to design a novel synthetic approach for easily obtaining p-type doped nanobranched ZnO structures. In this case, unprecedented ferroelectric, piezoelectric and photovoltaic properties were effectively demonstrated [43].
Pulsed Laser Deposition
PLD is based on the ablation of a solid source upon interaction with laser radiation. The ablated species form a vapor phase, condense on the substrate surface and form the desired thin film after the usual nucleation and coalescence processes. One of the main advantages of using PLD is the possibility to strictly control the chemical composition of the deposited thin film, as the target stoichiometry is more faithfully reproduced than with other PVD methods. However, particulate emission during source ablation strongly affects the performance of this deposition method.
Similar to sputtering, PLD was successfully investigated for growing highly porous ZnO thin films as well. Several works highlighted the importance of using specific oxygen background gas pressures during ablation of the material source if a porous structure is to be pursued [44][45][46][47]. For example, dense and porous ZnO thin films were obtained at room temperature on silicon (Si) substrates in vacuum and in 100 mTorr O2, respectively. It was found that vacuum deposition formed a dense ZnO layer, while the O2 atmosphere promoted the formation of a porous structure. The latter also favored ZnO stoichiometry and the controlled formation of crystal defects such as oxygen vacancies, which were almost absent in the vacuum-deposited material. By optimizing the O2 pressure (66 mTorr) and post-deposition annealing conditions, porous ZnO films made of 100 nm diameter isolated ZnO columns were obtained, showing good crystallinity and strong UV luminescence emission [46]. More recently, the effect of changing the O2 partial pressure on the porosity of the resulting PLD-grown ZnO thin films was further demonstrated [44]. In this case, Field-Emission Scanning Electron Microscopy (FESEM) and AFM analyses evidenced that small variations of the oxygen pressure dramatically changed the resulting thin film morphology from porous ZnO crater-like nanostructures to nanoparticles. The formation process leading to the conversion of pores into nanoparticles as the oxygen pressure increased was effectively demonstrated by the corresponding reduction of surface roughness observed in the AFM results. Another study on ZnO thin films grown by PLD discussed the formation of nanopores as a function of the deposition time [48].
This study evidenced how the formation, size and density of these nanopores were influenced by the deposition time, due to the different interaction time between the ambient gas and the plasma plume. The growth of pores surrounded by craters was discussed on the basis of the Stranski-Krastanov growth model. All the above-mentioned results are in good agreement with previous observations of ZnO nanoparticle formation during PLD processes in the presence of high oxygen gas pressures. Indeed, such nanoparticles allowed the subsequent growth of high aspect ratio ZnO nanostructures by PLD [45]. Similarly, these nanoparticles may be considered as a sort of catalyst promoting the formation of high surface area ZnO thin films.
As an alternative to the use of high O2 pressure regimes, the glancing-angle PLD approach, dealing with a highly oblique incident angle (88°) between the rotating ZnO source and Si substrates, allowed the deposition of porous ZnO thin films as well, consisting of 100 nm diameter ZnO posts or helices [49]. The formation of such a high surface area network was due to the mutual combination of self-shadowing effects between the ingrowing ZnO structures and the substrate rotation speed. In particular, by changing the rotational speed of the substrates from 0.04 to 0.5 rpm, the morphology of the resulting ZnO thin films changed from a few isolated helices to vertical posts of 100 nm in diameter. It was also hypothesized that a higher degree of porosity could be achieved by increasing the incident angle. This approach was further exploited to get porous nanostructured ZnO thin films applied to photoelectrochemical cells (PEC) for hydrogen generation from water splitting [50]. To find out the effect of using the oblique-angle deposition, a comparative study on the properties of ZnO thin films fabricated using normal PLD and oblique-angle PLD was carried out. The standard approach resulted in dense thin films with relatively large grain sizes (200 nm), while glancing-angle PLD returned highly porous ZnO structures, made of interconnected spherical nanoparticles of 15-40 nm in diameter. The PEC studies demonstrated that the initial photocurrent and hydrogen generation efficiency were strongly influenced by the ZnO thin film morphology, the semiconductor-electrolyte interaction and the defect density. In particular, the optimal photon-to-hydrogen efficiency (0.6%) was obtained in the case of the porous morphology obtained by the glancing-angle approach. The improved PEC performances were ascribed to multiple effects, mainly deriving from the presence of a porous network: firstly, the superior charge transport properties owing to diffusion phenomena taking place from nanoparticle to nanoparticle; secondly, the decreased density of oxygen vacancies and Zn interstitial defects compared to the dense thin film microstructure; lastly, the large surface-to-volume ratio of the ZnO nanoparticle network, which guaranteed an optimal semiconductor-electrolyte interaction, enhancing the electron-hole separation properties.
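For context, photoelectrochemical conversion figures of this kind are usually computed from the photocurrent density, the applied bias and the incident power. A minimal Python sketch of the standard applied-bias photon-to-current efficiency is shown below; the exact definition used in [50] is not reproduced here, and the numerical inputs are placeholders chosen only to give the ~0.6% order of magnitude.

def abpe_percent(j_photo_ma_cm2, v_bias_v, p_in_mw_cm2):
    # Applied-bias photon-to-current efficiency:
    # ABPE = J_photo * (1.23 V - V_bias) / P_in * 100,
    # with 1.23 V the thermodynamic water-splitting potential.
    return j_photo_ma_cm2 * (1.23 - v_bias_v) / p_in_mw_cm2 * 100.0

# Placeholder values, not measurements from [50]
print(abpe_percent(j_photo_ma_cm2=0.65, v_bias_v=0.3, p_in_mw_cm2=100.0))  # ~0.6%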
The combination of effects from both oxygen pressure and substrate temperature on the growth of ZnO thin films by PLD was also demonstrated. This approach was exploited to prepare high surface area, three-dimensional (3D) ZnO nanowall networks with a nest-like structure, shown in Figure 4 [51]. The nanowall structure was obtained by a two-step PLD process. This involved first the deposition of a thin ZnO seed layer at a substrate temperature of 300 °C and an O2 pressure of 10 mTorr. Then, the 3D nanowall ZnO network was obtained at 550 °C and an O2 pressure of 500 mTorr. The nest-like structures were composed of a network of remarkably uniform and interconnected nanowalls, whose average thickness was around 15 nm (Inset A of Figure 4). AFM characterization also revealed that about 80% of the nanowall depths were around 70 nm (Inset B of Figure 4). The formation of such a particular 3D structure was explained in terms of a vapor-solid model. According to this, a templating/seeding effect due to self-nucleation directly occurred on the substrate surface at the beginning of the growth, under low temperature and low O2 pressure conditions. Then, the formation of the 3D ZnO nanowall network was promoted by the subsequent high substrate temperature and high O2 background regime.
Concerning again the effect of different substrate temperatures and oxygen partial pressure values during PLD growth, a parametric study on the resulting morphologies, optical and structural properties was carried out [52]. Regarding the deposition time, ZnO nanowalls were obtained at different deposition times of 5, 7, 10, 15, and 45 min. Figure 5a shows that the formation of ZnO nanoparticles (average size 40-390 nm) randomly distributed on the substrate surface occurred after 5 min. Then, a coalescence phase was observed after 7 min, while two-dimensional ZnO nanowalls were grown vertically after 10 min. In this case, the average pore size was between 50 and 140 nm and the walls between the honeycombs showed a uniform thickness of around 50 nm. Similarly, crystalline ZnO thin films with a tunable porosity and anisotropic structure were prepared by changing the O2 pressure (from 100 mTorr to 400 mTorr) during the PLD fabrication process [53]. The resulting films were tested as photoanodes for the fabrication of glass-based and flexible, polymer-based DSSCs. By selecting the most appropriate O2 pressure value (300 mTorr) and thickness (10 µm), high surface area ZnO films were obtained. This allowed for optimal dye loading, prolonged electron lifetime and enhanced electrolyte diffusion through the crystalline porous ZnO framework, resulting in better photovoltaic behavior and improved conversion efficiencies (up to 3.89%) under light illumination. Another PLD parameter affecting the porosity of ZnO thin films is the pulse duration. This was effectively demonstrated by using a laser (λ = 810 nm, laser fluence 2 J·cm⁻²) with different pulse durations of 50 fs, 200 fs, 1 ps and 10 ps. In such cases, porous ZnO films were obtained, with a degree of porosity decreasing for longer pulse durations [54].
Spray Pyrolysis
Spray pyrolysis is a well-established technique used for preparing high-quality thin and thick ZnO films in a very simple, cheap and easy way. This synthetic approach allows for growing both dense and porous films, as well as powdered materials. The process roughly consists of three steps: atomization of a metal salt precursor solution, transportation of the resulting vapors, and condensation of the drops and their thermal decomposition on a heated substrate. The formation of a thin film network is then obtained by the superimposition and overlap of the metal salt drops over the substrate surface, and their conversion into oxides by heating of the substrate. The main parameters affecting the final thin-film structure and properties are the solvent, the type of salt and its concentration, and the additives present in the precursor solution.
Porous crystalline ZnO films obtained by spray pyrolysis have been reported in numerous cases. The precursor solution generally consists of zinc acetylacetonate [55], zinc nitrate [56], or zinc acetate dihydrate [57,58] salts dissolved in aqueous solution. In all cases it was found that the use of different precursor concentrations, substrate temperatures or post-deposition thermal annealing treatments strongly influenced the resulting film morphology, photoconductive and photoluminescent properties [56][57][58][59][60]. The porous ZnO structures resulting from the spray pyrolysis method generally showed good electrical conductivity and light transparency. These aspects, coupled to optimal dye absorption properties, demonstrated their promising use as photoanodes in DSSC fabrication [55]. Moreover, their application as blocking layer (BL) in standard TiO2-based solar cells has been successfully reported; the presence of a spray-pyrolysis-derived porous ZnO BL effectively reduced charge carrier recombination phenomena, improving the cell efficiency by more than 20% with respect to cells without the BL [61]. Most of the applications based on ZnO films obtained by spray pyrolysis also rely on the fabrication of gas sensors. Several works gave evidence of their promising use as gas sensors for the detection of various gas species, including acetaldehyde [62], ammonia [63,64] and H2S [65]. In these cases, the room temperature sensing characteristics showed that gas concentrations ranging from hundreds of ppm down to a few ppm could be successfully detected with good selectivity and fast response/recovery times. In addition, other gases like methanol, ethanol, 2-propanol, benzyl alcohol and acetone were considered, further proving the selectivity of such porous ZnO structures towards the abovementioned gases [62,63,66].
Another promising application of the spray pyrolysis technique is the easy preparation of multifunctional doped ZnO films with a porous structure. In this case, doping can be achieved by simply including an additional doping precursor within the synthesis solution, such as aluminum chloride, tin chloride or silver nitrate. This approach was explored to successfully dope porous ZnO with various elements, including Al [67,68], Sn [69], Ag [70], Na [71], Mg [72] and many others [73,74]. Similar to pristine ZnO, the resulting doped structures were found to be highly promising in view of gas sensor fabrication, especially for ammonia and H2S detection. Indeed, transition metal doping (Co, Cu, Ni) was proved to be an effective way to achieve gas sensing properties with improved response and selectivity [75]. The H2S sensing properties and selectivity of Ti-doped ZnO thin films were investigated as well. The influence of the Ti doping concentration on H2S detection was considered, finding that 2 wt.% Ti-doped ZnO thin films showed the maximum response (~0.29) to 20 ppm H2S exposure at 200 °C [74]. In a similar way, the H2S sensing properties of Cu-doped ZnO thin films (1-4 wt.%) were also demonstrated [76]. In this case, the best response (~0.38) towards 20 ppm H2S at 523 K operating temperature was achieved for the 4 wt.% Cu-doped ZnO. Ni-doped and V-doped ZnO thin films featuring similar sensing capabilities were demonstrated too [77][78][79]. The acetone gas sensing tests performed on Ni-doped ZnO highlighted a good sensing response for acetone concentrations as low as 116 ppb, with response and recovery times of about 6 s and 2 s, respectively [79]. Concerning V-doped ZnO, gas testing analyses gave evidence of good sensing response in a wide range of operating temperatures (from 350 °C to 300 °C) towards 100 ppm of acetone, 50 ppm of ethanol and 500 ppm of H2. Furthermore, a maximum response of 100 was achieved for 100 ppm acetone at 450 °C [80]. Alternatively, good ammonia sensing properties were achieved for porous Mg-doped ZnO thin films, with the lower Mg-doping concentration showing the best performances, with quick response and recovery times at room temperature [81].
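Since the quoted responses are dimensionless ratios, a short Python sketch may help fix ideas; the (R_air − R_gas)/R_air definition used below is only one common convention for n-type oxides exposed to reducing gases, and the resistance values are hypothetical rather than taken from [74] or [76].

def sensor_response(r_air_ohm, r_gas_ohm):
    # One common definition for resistive gas sensors:
    # S = (R_air - R_gas) / R_air, which stays below 1 (cf. ~0.29 and ~0.38 above).
    # Other papers use R_air / R_gas instead, so this choice is an assumption.
    return (r_air_ohm - r_gas_ohm) / r_air_ohm

# Hypothetical resistances before and during exposure to 20 ppm H2S
print(sensor_response(r_air_ohm=1.0e6, r_gas_ohm=7.1e5))  # ~0.29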
Other successful applications of doped ZnO films obtained by spray pyrolysis were expressed in terms of their improved photocatalytic efficiencies and electrical properties, resulting in their successful use as photocatalysts [82,83] and as photoanodes in DSSC fabrication [84]. Finally, In-doped and Sn-doped ZnO thin films also showed very interesting antibacterial properties against Staphylococcus aureus [85,86], with better antibacterial activities found at increasing doping concentrations.
Electrodeposition
Electrochemical deposition, also called electrodeposition, is a versatile, low-cost, easy and scalable method, particularly useful for growing highly porous ZnO thin films at relatively low working temperatures (generally lower than 100 °C). This method deals with the use of charged reactive species diffusing through a solution under the application of an external electric field. Electrodeposition is carried out in a three-electrode electrochemical cell, composed of a reference electrode (Ag/AgCl), a counter electrode (platinum wire or sheet), the working electrode and the electrolyte solution. The application of a constant voltage between the electrodes allows the diffusion of reactive species within the electrolyte solution.
Depending on several deposition parameters, like the applied voltage, deposition time, charge density and precursor solution, the porosity and morphology of electrodeposited ZnO films may be tuned accordingly. For example, it has been reported that the growth rate of ZnO films showing different porosities and morphologies prepared by cathodic electrodeposition was influenced by the sodium laurylsulfate concentration. This surfactant was added to the growth solution, made of an aqueous, oxygen-saturated zinc chloride solution and other organic acids. If the sodium laurylsulfate concentration was high enough, the formation of micelles and their assembly on the charged electrode surface could be achieved, allowing for the formation of the porous structure, but also leading to a strong increase in the current density and, finally, in the growth rate [87]. The promising optical and electrical properties of porous ZnO films obtained by electrodeposition have been reported in numerous cases [88][89][90]. In particular, it was found that enhanced photocurrent values were due to a combination of multiple effects, dealing with improved visible absorption properties, optimization of oxygen defects within the crystal structure, and finally the presence of an appropriate porous surface morphology, allowing the incoming light to undergo multiple reflections and diffuse scattering before being emitted, finally increasing the ZnO solar light absorption properties [89]. Such a pronounced photocurrent, coupled with a porous morphology, was successfully exploited for the fabrication of sensors and DSSCs [90]. Also in this case, the addition of surfactants during ZnO electrodeposition turned out to be an effective way of inducing high porosities and fast growth rates. The resulting porous samples displayed a persistent photoconductive behavior, with conductivity transients of several hours in dry atmosphere, independently of the illumination conditions. More interestingly, the photoconductive behavior was observed even when illuminating with photon energies below the bandgap. This property was explained in terms of lattice relaxation processes involving surface states within the ZnO bandgap, which favored the capture of electrons immediately after photoexcitation. Similar photoconductive properties were also explored for the fabrication of flexible photosensors [88]. In this case, a fast photo-response (0.821 s) and recovery time (1.257 s) were obtained under solar light irradiation, together with a large on/off current density ratio (65.94). Again, such promising results were mainly due to the porous ZnO network, able to provide more convenient photoelectron pathways and additional reaction sites for photocurrent generation.
Exploiting the promising photoelectrical properties of electrodeposited porous ZnO films, several works demonstrated their effective application as photoanodes in DSSC fabrication. This was proved by Chen et al. [91], who prepared porous ZnO electrodes by cathodic electrodeposition from an aqueous zinc nitrate solution also containing polyvinylpyrrolidone (PVP) as surfactant. Morphological and structural characterization showed that the porous framework was made of hexagonal wurtzite crystalline grains in the 20-40 nm range. Taking advantage of the film porosity and crystallinity, coupled to the optimization of the final photoanode thickness (8 µm), DSSCs showing conversion efficiencies as high as 5.08% were obtained. In another case, the promising properties of squaraine-sensitized mesoporous ZnO electrodes were expressed in terms of larger photocurrent generation and solar-to-energy conversion efficiencies in comparison with those obtained from standard TiO2-based electrodes sensitized with the same dye molecule [92]. Nanoporous ZnO films were also successfully electrodeposited on conductive nanofibers and tested in DSSCs [93]. Prior to the deposition of the porous matrix, a compact ZnO BL was electrodeposited in order to suppress charge carrier recombination at the interface with the conductive support. Then, during the same electrodeposition process, porous ZnO structures were grown by including Eosin Y as a pore-creating additive in the electrochemical bath.
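The conversion efficiencies quoted throughout this section follow the standard photovoltaic definition based on the current-voltage characteristics under simulated sunlight; a minimal Python sketch is given below, where the J-V parameters are placeholders rather than values reported in [91].

def pce_percent(j_sc_ma_cm2, v_oc_v, fill_factor, p_in_mw_cm2=100.0):
    # Power conversion efficiency = J_sc * V_oc * FF / P_in, with P_in = 100 mW/cm^2
    # for standard AM1.5G illumination.
    return j_sc_ma_cm2 * v_oc_v * fill_factor / p_in_mw_cm2 * 100.0

# Placeholder J-V parameters chosen only to land near the 5.08% quoted above
print(pce_percent(j_sc_ma_cm2=12.0, v_oc_v=0.62, fill_factor=0.68))  # ~5.1%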
The addition of the Eosin Y agent to electrodeposition also found use in view of photocatalytic degradation applications. In particular, the deposition time and the Eosin concentration were optimized to get mesoporous ZnO thin films with large internal surface areas and good mechanical properties [94]. The photodegradation rates of methylene blue (MB) and Congo Red molecules were maximized with Eosin concentrations higher than 40 µM. Since the photodegradation behavior was found to be promoted in the case of large-diameter pores, the development of narrower pores (8 nm) did not further enhance the ZnO photocatalytic performance. Another approach for the development of porous ZnO films with good photocatalytic properties was the electrodeposition of metallic Zn coatings on mild carbon steel sheets in a sulfate bath by DC current, and their subsequent thermal oxidation in air at temperatures ranging between 400 °C and 800 °C [95]. In another work, the influence of the annealing conditions of electrodeposited Zn films on the resulting photocatalytic activities was studied, again in terms of MB photodegradation under UV light. The ZnO films showed good photodegradation efficiency and photostabilization, especially for the samples annealed at 500 °C for 4 h [96]. With this particular set of annealing conditions, uniform intertwining-rod structures were formed, showing around 100% photodegradation efficiency and good photostabilization over three successive reaction cycles. The observed superior properties were due to the large effective area present in the so-formed ZnO structures, which provided more active sites for radical-organic interactions and effective interfacial charge transfer, finally resulting in better photocatalytic activities. The effect of thermal oxidation at 800 °C on the morphology of electrodeposited ZnO films was further considered. Improved photocatalytic degradation of MB was obtained due to additional morphological effects deriving from the oxidation process, which led to better oxidation conditions but, more strikingly, to the rise of high surface area, columnar needle- or rod-like ZnO morphologies. By controlling various deposition parameters, like the applied potential (in the range −0.9 to −1.1 V), the electrodeposition duration (from 1800 to 7200 s) and the number of deposition cycles (from 1 to 6), a direct method was implemented to grow porous ZnO nanorod arrays (ZNRAs) featuring different morphologies on stainless steel mesh substrates [97]. The photocatalytic degradation of Rhodamine-β under UV light irradiation was investigated. The results are shown in Figure 6, highlighting how the degradation efficiency could be improved from 89.4% to 98.3% as the number of depositions increased from one to six. This was mainly due to the higher amount of ZnO catalyst deposited onto the steel mesh, hence resulting in higher photocatalytic efficiencies. Alternatively, electrochemical deposition allows the formation of ZnO nanosheets on pre-seeded substrates. Similar to porous ZnO thin films, the prepared nanosheet arrays demonstrated promising MB photodegradation properties under visible-light irradiation. In particular, the degradation efficiency could reach 90% after 180 min, and was 1.5 times better than for commercial ZnO powders [98].
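Degradation efficiencies such as the 89.4%-98.3% values above are normally computed from the dye concentration (or absorbance) before and after irradiation, often together with a pseudo-first-order rate constant; the following Python sketch illustrates these two standard definitions under assumed concentration values, which are not taken from [97].

import numpy as np

def degradation_efficiency_percent(c0, c_t):
    # Degradation efficiency = (C0 - C(t)) / C0 * 100, usually evaluated from
    # the dye absorbance at its characteristic wavelength.
    return (c0 - c_t) / c0 * 100.0

def pseudo_first_order_k(t_min, c0, c_t):
    # Apparent rate constant assuming ln(C0/C) = k * t, the pseudo-first-order
    # model commonly applied to dilute dye solutions.
    return np.log(c0 / c_t) / t_min

print(degradation_efficiency_percent(c0=10.0, c_t=0.17))      # ~98.3%
print(pseudo_first_order_k(t_min=120.0, c0=10.0, c_t=0.17))   # ~0.034 min^-1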
Due to the high specific surface area of electrodeposited ZnO thin films, conductive hydrophobic or even superhydrophobic surfaces were prepared as well. Superhydrophobicity could be achieved directly [99] or by post-synthesis chemical modification treatments of the prepared ZnO surfaces [99,100]. In particular, Lin et al. [99] succeeded in preparing a biomimetic self-cleaning ZnO surface on steel substrates by coupling the electrodeposition of metallic Zn films on steel, the subsequent hydrothermal growth of low-dimensional ZnO structures (contact angle of 137.85°) and, finally, their surface modification with low-surface-energy chemical moieties (contact angle of 157.59°).
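Contact-angle increases of this magnitude on rough, porous ZnO are often rationalized with the Cassie-Baxter relation; neither [99] nor the text above specifies this model, so the following Python sketch is purely illustrative, with assumed input values.

import numpy as np

def cassie_baxter_angle_deg(theta_flat_deg, solid_fraction):
    # Apparent contact angle on a composite solid/air surface:
    # cos(theta*) = f_s * (cos(theta_flat) + 1) - 1
    cos_star = solid_fraction * (np.cos(np.radians(theta_flat_deg)) + 1.0) - 1.0
    return float(np.degrees(np.arccos(cos_star)))

# Hypothetical inputs: a moderately hydrophobic flat surface and a low solid fraction
print(cassie_baxter_angle_deg(theta_flat_deg=105.0, solid_fraction=0.10))  # ~158 deg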
Sol-Gel Assisted Methods
Sol-gel assisted methods have been explored as cheap and simple alternative synthetic techniques to get porous ZnO thin films. These include spin-coating [101], dip-coating [102], hydrothermal routes [103] and chemical bath deposition (CBD) [104][105][106]. The sol-gel approach first deals with the preparation of a colloidal solution combining zinc precursor powders (zinc acetate dihydrate, zinc nitrate hexahydrate or zinc chloride) and bases (sodium hydroxide), both mixed in organic solvents (ethanol, methanol or 2-propanol). The addition of hexamethylenetetramine to the solution is also widely recommended, since it promotes ZnO crystallization and allows for strict control over the final ZnO morphology. The prepared solution is stirred for a few hours under mild conditions (60-70 °C) and finally deposited on the desired substrates. To further promote ZnO crystallization as well as the formation of the desired porous morphology, a final sintering process is generally performed for several hours at temperatures ranging between 300 °C and 500 °C.
Sol-gel derived porous ZnO structures show high surface areas (see Figure 7) coupled with good optical and electrical properties. All these aspects result in interesting application properties. For example, porous ZnO obtained by CBD exhibited good sensing properties against a wide range of toxic and combustible gases like hydrogen, liquid petroleum gas, methane and H2S. The response of the ZnO thin film sensors was found to be significant even for low gas concentrations, i.e., 50 ppm for methane and 15 ppm for H2S [104]. Highly porous ZnO thin films prepared by a sol-gel approach showed promising photocatalytic properties, efficiently promoting the decomposition in aqueous solution of phenol, chlorophenol, naphthalene and anthracene to CO2 [101]. Similar promising results were also observed for porous ZnO structures grown on alumina substrates. In this case, different morphologies (from nest-like to globular ones) were investigated and the resulting photocatalytic properties expressed in terms of Methyl Orange degradation. The highest photocatalytic activity was obtained for porous ZnO films sintered at 500 °C and showing successive nest-like structures [107]. The growth of highly porous ZnO by the sol-gel approach was also applied within the pores of anodic alumina matrices having thicknesses of tens of µm, followed by thermal treatment [108,109]. Lamellar-like morphologies, high surface areas (between 99 and 198 m²·g⁻¹) and pore volumes (0.35 and 0.1 cm³·nm⁻¹·g⁻¹) were obtained for the ZnO nanostructures grown into alumina. In some cases, however, the partial dissolution of alumina during the synthesis process led to highly porous membranes with mixed phases of wurtzitic ZnO and γ-Al2O3. The sol-gel method was also successfully applied to the preparation of highly porous ZnO films for CO gas sensing applications. By changing the calcination temperature, different morphologies and gas sensing responses were possible, with the best sensing response achieved for a calcination temperature of 500 °C [110]. Hierarchical 3D porous ZnO structures were obtained by a hydrothermal method as well [103]. The analyses of both ethanol and methanol gas sensing properties demonstrated that hierarchically porous structures greatly improved the gas sensing performance with respect to commercial ZnO powders. This was due to the high porosity and three-dimensional morphology, which facilitate gas diffusion and transport within the sensing material. More recently, mesoporous ZnO film structures were obtained by CBD, following a green, organic-solvent-free route. The high specific surface area of the prepared ZnO structures (19-66 m²·g⁻¹) allowed efficient drug loading and release, thereby highlighting the ability of mesoporous ZnO structures to work as promising drug delivery carriers [111].
Template-Assisted Methods
Porous ZnO thin films have also been prepared by template-assisted methods. In these cases, the macro/microporous framework is given by the use of template agents with suitable geometries. Once ZnO deposition on the pre-treated substrates is completed, the template is removed, leaving the desired ZnO porous framework.
Three-dimensional polystyrene (PS) opals [112,113] and polyethylene glycol [114] have been proposed as organic templates to obtain two-dimensional or three-dimensional porous ZnO structures. In the first case, PS spheres were dispersed on conductive glass substrates using the vertical deposition technique, resulting in the formation of PS opal films covering the substrate surface (see Figure 8a). Then, the PS-coated substrates were used as electrodes in a three-electrode cell configuration, also containing a Zn plate and the electrolyte solution (0.04 M Zn(NO3)2 in water or mixed ethanol-water solvents). The deposition process was carried out for 40 min or 2 h at 62 °C and the voltage was kept at −0.96 V vs. the reference electrode. After electrodeposition, the PS template was thermally or chemically removed, leaving the long-range ordered porous ZnO framework shown in Figure 8b,c. By exploiting again the combination of electrodeposition and PS templates, ordered porous ZnO films were obtained on conductive indium-tin-oxide glass substrates [115]. The influence of the electrolyte concentration (zinc nitrate aqueous solution) on the current density, growth rate and resulting film morphology was mostly investigated. For higher electrolyte concentrations, the deposition rate was correspondingly higher. This resulted in a better filling of the template structure, finally giving a more robust porous ZnO film, without cracks or deformation after removing the PS template. In a similar way, porous ZnO layers were successfully obtained on PS-templated glass substrates by the dip coating method [116]. In this case, the influence of the ZnO sol concentration and dipping time on the morphology of the resulting ZnO porous structures was pointed out, showing a shrinkage ratio of about 30% from pore to PS sphere under the optimal synthesis conditions. In addition, it was highlighted how the electrostatic potential could affect the quality of the fabricated porous ZnO structures. For low electrodeposition potential values (1 V) the growth rate of ZnO crystals on the substrate was slow, allowing the interstices among the PS spheres to be sufficiently filled. This led to hemispherical hollow arrays after 2 h of deposition and removal of the PS template. On the contrary, at a higher potential (1.4 V) the crystallites grew rapidly and could not fully fill the interstices, resulting in a nanowall-like structure. Therefore, control over the deposition potential allowed the pore morphology to be changed from hemispherical to a well-like structure [117]. More recently, patterned spherical nanoshells of ZnO were obtained for the first time [118], using an array of PS spheres prepared by a self-assembly method and an 80 nm-thin ZnO layer deposited onto the PS array by a drop-coating method. The PS array was immersed in a zinc precursor solution several times, until the final ZnO thickness was achieved. Then, calcination was performed to stabilize the coating and to promote ZnO crystallization. Finally, the PS sphere template was thermally removed, allowing the formation of nanoshell ZnO structures with clear evidence of internal voids. The UV-visible light absorption properties were highly improved due to the formation of these spherical ZnO nanoshell cavities. The combination of PS opal templates with ZnO thin films was exploited also in other cases. ZnO thin films deposited by RF magnetron sputtering led to the fabrication of three-dimensional core-shell ZnO photonic crystals [119]. The PS template allowed the porous structure and formation of cavities to be
properly controlled, positively affecting the resulting photonic band gap properties. Wet infiltration of PS opal templates with ZnO precursors also produced ZnO inverse-opal films with improved photodetecting properties, showing excellent selectivity and a reversible response to optical switching [120]. Three-dimensionally ordered, macroporous ZnO structures were obtained as well [121]. Due to the high surface area (18.7-34.5 m²·g⁻¹), the macroporous structure was successfully proposed as an ethanol sensor, showing good sensitivity, selectivity and electron transfer properties. Using a similar preparation method, multilayered porous ZnO thin films were obtained and tested as NO2 gas sensors under UV light irradiation [122]. The film porosity counteracted the decay of light intensity, and thus of photogenerated carriers, across the film, finally maximizing the film response when interacting with NO2 gas. Nanopatterned ZnO cavity-like structures were obtained as well, by combining the hydrothermal synthesis of ZnO with the use of PS opal templates and the nanosphere lithography technique [123]. p-n heterojunctions were then fabricated using copper oxide as the p-layer, and the corresponding photoelectric conversion efficiencies evaluated. In comparison to the use of planar ZnO layers, the presence of a high-surface-area ZnO cavity-like structure effectively improved the charge carrier collection within the heterojunction.
Atomic Layer Deposition (ALD) is well known for its ability to coat complex 3D substrate geometries in a conformal way. This peculiarity, in combination with the use of PS template substrates, was recently proven to be an effective way to obtain micrometer-thick 3D mesoporous ZnO networks, showing either a periodic gyroid structure or a random worm-like morphology. The presence of a mesoporous structure, with an average pore size of 30 nm, was confirmed for both geometries, which are shown in Figures 9 and 10. Such mesoporosity was found to be the ideal condition for promoting exciton dissociation in hybrid photovoltaic devices. This was successfully demonstrated in the case of the worm-like morphology, which was integrated into a P3HT/ZnO hybrid photovoltaic device. The presence of the mesoporous 3D worm-like ZnO structure effectively resulted in improved short-circuit current density values [124].
Others
Apart from PVDs, chemical synthetic methods and template-assisted approaches, other works demonstrated that porous ZnO thin films may be easily obtained by following alternative synthetic routes/technological fabrication processes. Yong et al. exploited the oxidative action of femtosecond laser radiation to design a simple, one-step fabrication method leading to ZnO layers made of hierarchical micro- and nano-structures [125]. This was achieved by femtosecond laser ablation of a metallic Zn layer. The resulting laser-ablated Zn surface showed switchable wetting properties between superhydrophobic and quasi-superhydrophilic states upon UV irradiation and dark storage, respectively. The observed switchable properties were ascribed to the dual effect of the ablation process, which induced oxidation of the Zn surface and, at the same time, promoted the formation of a hierarchically rough microstructure. An alternative way to obtain hierarchical ZnO structures was a simple oxidation of metallic Zn films in hot water at 90 °C [126]. By changing the oxidation time, a wide variety of morphologies could be obtained, ranging from pencil-like nanorods (6 h), to nanotubes (16 h) and lotus-like (24 h) structures. The occurrence of different morphologies as a function of the oxidation time was explained in terms of specific electrochemical reactions occurring at the Zn surface, each one predominating over the others as the ZnO micro/nanostructures were being formed. The most interesting and promising ZnO structures were the lotus-like ones; when tested in hybrid organic–inorganic solar cells, a power conversion efficiency as high as 1.18% was achieved. Alcaire et al. successfully showed the fabrication of porous ZnO layers by the combination of vacuum- and plasma-assisted processes. In the first step, the Zn-phthalocyanine (ZnPc) solid precursor was sublimated under vacuum conditions, leading to the formation of polycrystalline films rather than single-crystal ZnPc nanowire arrays. Then, an oxygen plasma treatment was used to oxidize the starting ZnPc film and to form the porous structure [127]. At very high substrate temperatures and/or for prolonged times, the complete conversion from ZnPc to ZnO could be achieved. In this way, highly porous ZnO thin films with a surface coverage as low as 55% were obtained. Such a reduced density resulted in an extremely low refractive index (n(550 nm) = 1.11) for an optical thickness of 135 nm, one of the lowest refractive indices ever reported for ZnO. This might open the way to possible applications of such porous ZnO films as antireflective coatings and in graded-index multilayer systems. The anodic oxidation technique was also investigated [128]. Metallic Zn sheets were set as anodes in a three-electrode electrochemical cell containing a 3% solution of phosphoric acid in ethanol. Then, oxidation was performed by applying a constant voltage of 15 V for different times, ranging from 5 min to 2 h. In this way, the porosity of the ZnO films was tuned accordingly. The cytotoxic effects of the prepared ZnO films were investigated, demonstrating the existence of a pore-density-dependent cytotoxic behavior against fibroblast cells.
Conclusions and Future Outlooks
The main achievements in the synthesis of high-surface-area, porous ZnO thin films are summarized in Table 1. Various porosities of different sizes and cavity shapes may be successfully achieved by exploiting several deposition techniques. Sputtering and electrodeposition generally provide a mesoporous ZnO structure, while pulsed laser deposition, spray pyrolysis, electrodeposition and sol-gel methods often allow for different types of porosity, ranging from the meso- up to the macro-scale. Independently of the particular synthetic approach, the prepared porous ZnO films found successful application in the fabrication of photoanodes for DSSCs, the photocatalytic degradation of various dye molecules, and the fabrication of gas sensors. In the specific case of spray pyrolysis, this method turned out to be one of the most successful and simple ways to synthesize doped ZnO thin films with a porous structure. As an alternative to the methods mentioned above, template-assisted methods successfully allowed for the growth of three-dimensional porous ZnO structures, including very complex 3D geometries. In most cases, polystyrene opals have proved to be the most promising sacrificial template for conferring the desired macro/microporosity after ZnO deposition.
New future applications could be envisioned for porous ZnO thin films, thanks to the combination of the following aspects: (i) very interesting ZnO properties, i.e., antibacterial activity, piezoelectricity and biocompatibility; (ii) the existence of mesoporous/macroporous structures with high surface areas; (iii) the use of thin-film-based technologies, allowing for the preparation of large-area substrate materials in a controllable and repeatable way. Indeed, the existence of mesoporous ZnO thin films would allow several drugs and biologically relevant molecules to be loaded and delivered.
Figure 1 .
Figure 1. Scanning Electron Microscope low-magnification images: (a) Top view of the as-synthesized ZnO film on a glass substrate, where the white arrow indicates the projection of the incident flux on the film; (b) Cross-sectional view of the sample, where the ion flux and growth direction are denoted. Adapted with permission from [33]; Copyright 2013 Elsevier.
Figure 2 .
Figure 2. Surface morphologies of Zn films deposited on glass substrates with magnetron sputtering: (a) Ar, 2 mTorr; and (b) Ar + O2, 2 mTorr. Surface morphology of ZnO films formed by thermal oxidation of Zn films at 600 °C in air for 1 h: (c) deposited in Ar, 10 mTorr; and (d) deposited in Ar + O2, 10 mTorr. Adapted with permission from [35]; Copyright 2005 Elsevier.
Figure 3 .
Figure 3. Field Emission Scanning Electron Microscope images of porous Zn thin films: (a) as-prepared; (b,c) after conversion into ZnO by thermal oxidation. Scale bar for (a) and (b) is 400 nm, for (c) is 2 µm.
Figure 4 .
Figure 4. Low-magnification scanning electron microscope image and atomic force microscope profile of the 3D ZnO nanowall network grown vertically on Si(100) at 550 °C under 0.5 Torr O2 background pressure. Scale bar is 1 µm. Reproduced with permission from [51]; Copyright 2015 Elsevier.
Figure 5 .
Figure 5. Scanning Electron Microscope images showing the effect of (a) deposition time, (b) oxygen pressure, and (c) substrate temperature on the morphology of the resulting ZnO nanowall network. Adapted with permission from [52]; Copyright 2014 Elsevier.
Figure 7 .
Figure 7. Examples of porous ZnO morphologies obtained by sol-gel methods. (a) A typical FESEM image of ZnO thin solid films deposited via a modified chemical bath deposition method. Scale bar is 2 µm. Reproduced with permission from [106]; Copyright 2006 Elsevier. (b) Cross-section FESEM image of ZnO films prepared at 0.05 mol/L methanolic zinc acetate solution and sintered at 500 °C. Scale bar is 5 µm. Reproduced with permission from [107]; Copyright 2005 Elsevier.
Figure 8 .
Figure 8. (a) A typical SEM image of the original PS opal templates. (b,c) SEM images of the 2D ordered ZnO porous films at different deposition times of (b) 40 min and (c) 2 h. Adapted with permission from [113]; Copyright 2005 Elsevier.
Figure 10 .
Figure 10. SEM of ZnO replication of the worm-like morphology: (a,c) worm-like PS template; (b) as-deposited ZnO-PS hybrid; (d) ZnO morphology after annealing at 400 °C followed by etching of the top compact layer. Scale bars are 200 nm. Reproduced with permission from [124]; Copyright 2014 John Wiley & Sons, Inc.
Table 1 .
Synthesis method, porous structure characteristics and final applications of porous zinc oxide thin films.
Putting the Patient First: A Scoping Review of Patient Desires in Canada
Patient-centred care is a key priority for governments, providers and stakeholders, yet little is known about the care preferences of patient groups. We completed a scoping review that yielded 193 articles for analysis. Five health states were used to account for the diversity of possible preferences based on health needs. Five broad themes were identified and expressed differently across the health states, including personalized care, navigation, choice, holistic care and care continuity. Patients' perspectives must be considered to meet the diverse needs of targeted patient groups, which can inform health system planning, quality improvement initiatives and targeting of investments.
Introduction
Increasingly, policy makers and health system managers are considering the perspectives and experiences of patients in reforming health systems (1)(2)(3)(4)(5). The emphasis on "patient-centred care" places patients (and caregivers) at the forefront of the planning, delivery and evaluation of healthcare services (2,6,7). The recently established Ontario Minister's Patient and Family Advisory Council (PFAC) is the first of its kind in Canada and provides a formal mechanism for incorporating patient and family perspectives into decision making and system planning (7,8). The idea of organizing healthcare around the patient seems, at first, uncomplicated, yet the concept itself is complex, and the application of the concept is multifaceted. The Institute of Medicine (IOM) informs much of the discourse and application on this topic (5). In this context, patient-centred care refers to "providing care that is respectful of, and responsive to, individual patient preferences, needs and values, and ensuring that patient values guide all clinical decisions" (5: p. 3). As described by the IOM, the concept applies not only to care delivery but also to system planning and research (5). It also relates closely to patient experience and engagement (4,9). Whereas patient engagement aims to solicit patient and family input based on their needs and preferences to co-design solutions (4), patient experience is defined as "how patients perceive and experience their care" (10).
The effective design and delivery of patient-centred care require a comprehensive understanding of the needs, desires and preferences of patients. Although high-performing health systems identify patient-centred care as a critical health system priority, the healthcare system is often criticized because of its tendency to focus on the needs of healthcare providers, who often do not have a comprehensive understanding of patient needs. Alongside policy makers' increasing interest in patient-centred care is a growing body of scholarly research that aims to understand patient experiences with and their perspectives on the health system. Furthermore, when patient needs are explicitly recognized, the system is designed based on generalized assumptions of these needs, as if patients are a homogeneous group, yielding a "one size fits all" approach. Previous research has suggested that patients' needs vary significantly across different patient populations (6)(7)(8)(9)(10), but this research has not yet been systematically reviewed.
The purpose of this study was to gain a systematic understanding of the preferences of Canadian patients and, where possible, their caregivers. This information can be used to inform the design and tailored delivery of healthcare services for different patient and caregiver groups.
Methods
This scoping review follows the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines (11), as well as Arksey and O'Malley's stepwise approach to conducting a scoping review (12).
Search strategy
We used the PICo (population, interest, context) framework for qualitative studies to operationalize the research purpose into searchable keywords (13). Four databases were searched in January 2019 (OVID Medline, CINAHL, EMBASE and PsycInfo). To maximize search results and derive search results that were manageable and focused, we conducted four separate searches within each database involving various combinations of Boolean operators (and/or) for life stage, health stage and population of interest. These searches are listed below:
Study selection
Two reviewers performed a title/abstract screening of all articles following the removal of duplicate articles. The same reviewers then both performed the full-text review on all remaining articles. Articles were included if they met inclusion criteria concerning study setting (Canada and/or a Canadian province), study participants (patients of all ages and/or family/friend/caregiver) and study topic (healthcare experiences).
Data extraction and analysis
To account for the diversity of health needs across the Canadian population, we hypothesized five identifiable health states to organize patient groups, described in Table 1. Health states were initially identified within our interdisciplinary team and were adapted based on the presentation of populations within the literature (8-10, 14, 15). Although other health states could have been used for categorization, the five were generally quite effective and appropriate for describing the health needs found in studies. Studies were organized by health state based on the description of the population included in the study. It is possible that some populations may have fallen across multiple health states. In such cases, two reviewers independently allocated articles to the most appropriate health state, and disagreement was reconciled by the principal investigator. Appendix 2 (available online at longwoods.com/content/26499) provides a tally of how many articles spoke to each health state across a variety of life stages, which include age and other population subgroups (i.e., LGBTQ+ populations). Two reviewers were responsible for independently extracting data from all articles following a pilot extraction of two articles. The extraction table included details on the study itself (i.e., year, purpose, location, population of interest, methods, key findings, etc.). All key findings were summarized within the extraction table, and representative quotations were pulled from the document. Following completion of the extraction, three researchers systematically reviewed the data, focusing mostly on the summarized key findings, and coded details based on life stage (pediatric, youth and children, young adult, adult and older adult) while colour-coding experiences based on health state, as described above. Following this coding, an inductive thematic analysis was then conducted by three reviewers to summarize findings and themes (16), where the three researchers met to discuss consistent themes across life and health states. As themes and consistent experiences were identified, the researchers looked to identify both the similarities and the differences in desires across health stages.
Results
The PRISMA Flow Diagram (Appendix 3, available online at longwoods.com/content/26499) presents the article selection process. A total of 12,341 studies were pulled across all databases searched; 7,763 and 4,127 articles were excluded following deduplication and title/abstract screening, respectively. Full-text screening was then performed on the remaining 451 studies, and 193 were included for analysis. Appendix 4 (available online at longwoods.com/content/26499) presents a summary of all included articles, including author(s), year, location and aim. Recognizing that the populations studied may fall within different health states, we include a matrix tally that illustrates the overlap in health states across the studies included (Appendix 2, available online at longwoods.com/content/26499). Additionally, where data were available, the results highlight the perspectives of patients and caregivers. The majority of papers spoke to patient perspectives only, and in those instances we discuss the results and refer only to patients. If caregivers' perspectives were reported in the literature, we reference patients and caregivers together.
Among many possible areas on which to focus health system improvement, there were common preferences expressed across health states (summarized in Table 2) within the included articles. Table 2 demonstrates these reported preferences; if a preference was identified in the literature and associated with a particular health state, there is a check mark. These preferences were generally described to be of equal importance: 1) personalized care; 2) information on resources available and how to navigate the system; 3) choice in treatment, care setting and/or care provider; 4) holistic care and non-medical supports to overcome barriers to accessing care; and 5) care continuity (including care coordination). The following describes overarching preferences common to two or more health states as presented in the literature. We then explore the subtle nuances between health groups (unique preferences expressed by one health state), which are summarized in Table 3. These themes and the nuances within them have different implications for how the health system could be shaped or reshaped.
Although holistic, individualized and culturally safe care was a common preference across all health states, respecting linguistic needs, such as a provider who speaks the patient' s language or availability of interpretation services, was particularly important to the walking well group (53,69,74,94,99,100,102,107,108,116,122,124,134,135,156,176,195,197,201,208).
Although the preference for information was evident across all the health states, the specific information needs differed between the groups. For example, the walking well group was interested in information on funding resources (117,121,129,178). The acute life-threatening and chronic conditions groups were both interested in knowing the next steps after leaving the hospital (21, 22, 31, 50, 61,64,65,70,75,87,89,92,103,110,123,130,139,154,158,163,166,167,171,175,183,196,204). However, the chronic conditions group wanted to know about care plans and community resources given their prolonged trajectory of illness (21, 31, 50, 61,64,65,70,75,87,89,92,103,106,123,139,158,163,166,167,171,175,183). The walking well patient group and the mental health and addictions patient group noted the preference for online resources that would support improved selfmanagement of health opportunities (23, 24, 28, 76,96,102,124,128,168). The walking well and palliative care groups wanted information regarding illness prognosis and treatment outcomes (e.g., drug side effects) in order to make informed decisions. Whether a decision was less sensitive (e.g., the decision to get screened for a medical condition or to receive a vaccination) or more sensitive (e.g., decision making around end-of-life care) did not obviate the need for comprehensive information (17,23,24,27,28,76,96,102,124,128,168,173).
Choice of HCP and choice of setting for the care services were particularly important for the walking well group (19,24,32,41,46,59,94,104,113,115,117,125,128,132,138,140,145,149,156,161,177,195,201,206,207). However, this was not the case in the acute life-threatening group, which likely relates to the short-term relationship that a patient often has with hospital-based providers. Patient groups with prolonged disease trajectories (i.e., chronic conditions and mental and cognitive health issues) wanted their caregivers to be partners in their care (39,42,43,72,119,163,170,171,185,186,189,194,202,209).
Holistic care and non-medical supports
These supports were preferred to overcome barriers to accessing care among the walking well; those with chronic conditions; those with mental, cognitive and addiction-related issues; and those with acute life-threatening illnesses (19,28,39,55,72,76,90,94,99,116,121,124,129,132,134,147,150,154,168,169,172,178,195,198,200,201,208). The relationship between HCPs and patients was important across all of these health states. Interactions with HCPs were described positively in many cases, representing the trust that patients (particularly older patients) and caregivers placed within their HCPs. However, patients wanted HCPs to be more respectful of patient needs and treatment preferences; offer non-judgmental care; communicate in ways that patients and caregivers can understand; allow more time during patient interactions to listen to patients; treat and consider social needs; and help them navigate the healthcare system (discussing next steps, available resources and treatment options).
In terms of variations across patient groups, for the walking well group, holistic care meant being able to access non-Western approaches to healthcare free of financial barriers, including traditional Chinese medicine (28, 74,76,94,99,116,121,124,129,132,162,168,178,195,201,208). For the mental and cognitive health groups, holistic care specifically meant being able to access spiritual and culture-based services (39, 55,72,90,147,150,169,172,189,198,200).
Coordinated, continuous care
Coordinated, continuous care represents an uninterrupted relationship with the same primary care provider. This is particularly significant given their critical role as gatekeepers and the first point of contact in the health system. This preference was expressed by all three health states, where patients live with multiple conditions or receive care from multiple providers (chronic conditions, palliative care and mental and cognitive health groups) (25, 27, 34, 43, 45, 65,72,75,83,123,126,152,157,163,166,169,181,185,189,198,202). For the two groups who often receive care from more than one HCP -chronic conditions and mental health and addictions -coordinated transitions across various care settings were deemed vital (25,34,42,43,45,65,72,141,148,157,169,189,198,202).
For the mental and cognitive health group, coordination of services while transitioning from youth to adult care services was important given the early onset and long-term nature of diseases affecting this group (126,152). For the chronic conditions group, coordinated flow of information among providers and receiving care from interdisciplinary teams was crucial (21,42,148,159,169,170,181). For the palliative care group, there was a strong preference for both an ongoing relationship with their providers and having the same provider until the end of life (17,26,27). As presented in Table 3, there were some nuances in how patient groups perceived the five common preferences across the different health states in the reviewed papers. Although holistic, individualized and culturally safe care was a common preference across all health states, respecting language preferences and needs was of particular importance for the walking well group (53,69,74,94,99,100,102,107,108,116,122,124,134,135,156,176,195,197,201,208). Access to information was expressed differently across health states. For example, the walking well group was interested in understanding the availability of funding (28, 76,121,132,168,195,201) whereas the chronic conditions group was more interested in having access to their health information and community resources (21, 31, 61,64,65,70,75,87,89,92,103,110,123,139,158,163,166,167,171,175,183). The ability to choose their provider and healthcare settings was notable for the walking well group (24, 46, 113,149,207). Those with a chronic condition or mental health illness noted that they preferred the choice to engage their caregivers as partners in care. For the walking well and mental and cognitive health groups, holistic care was about accessing care that goes beyond traditional medical services to spiritual and culture-based services (28, 74,76,94,99,116,121,124,129,132,162,168,178,195,201,208). For the mental and cognitive health groups, coordination of services meant smooth transitions from youth to adult services (126,152). For the chronic conditions group, coordinated flow of information among providers and receiving care from interdisciplinary teams were important (21, 42, 148,159,169,170,181). For the palliative care group, this meant having ongoing relationships with the same provider until the end (27).
Interpretation
Recognizing the diversity of experiences, values and expectations that reflect the broader health and socio-demographic profile of Canadians, a comprehensive understanding of the current needs of patients and their caregivers is needed to better inform tailored, patient-oriented and equitable approaches to health system design and health service delivery. Although patient-centred care is ultimately an individual concern, this review reveals five broad preferences across a wide range of patient groups, which we have further subcategorized as five distinct health states. Even with similarities across health states, the way these preferences and needs were expressed and the examples of changes to healthcare systems that were suggested differed across these groups. These differences have implications for provincial and territorial as well as more local (based on geography or defined population) health systems in Canada in terms of how they should be shaped or reshaped.
Past research eliciting the views of healthcare users has largely focused on the needs and experiences of disease-specific groups -for example, those with diabetes (105,106,153). Much of the healthcare system, however, is not organized around disease-specific groups, as clearly illustrated in the case of primary care. This review has shown that needs can be organized around health states and that disease state does have an impact on care preferences.
Provincial governments in Canada are initiating a number of changes to achieve more integrated and coordinated care. With the consolidation of all the province's health regions into a single province-wide health authority (Alberta Health Services) in 2008 and, more recently, the introduction of Primary Care Networks, the Government of Alberta has tried to better coordinate care through aligning governance structures. British Columbia is working toward patient medical home and primary care network models to better meet the needs of patients by linking integrated systems of care between health professionals, networks and coordinated specialty services within the community. Similarly, Manitoba's creation of "shared health" is an attempt to centralize services and offer an integrated clinical services plan. In Ontario, this has manifested as Ontario Health Teams (OHTs). It is hoped that OHTs will coordinate care at an organizational level (shared governance, shared medical information and streamlined approaches to funding, with local regions' spending autonomy based on patient demographics and regional needs). However, the degree to which these efforts align with patient desires, as well as how all of these approaches will consider patients' needs and preferences in health service delivery, remains unknown. The results from our study align with notions of integration proposed by Singer et al., who view integrated care as a concept that should be built around the patient and as composed of two pillars: coordinated care across time and between settings (which OHTs aim to address) and patient- or person-centredness (210). The latter is where the results from our study are particularly relevant in the ongoing evolution of health systems to be integrated. Additionally, with the effort to achieve Quadruple Aim outcomes to improve patient/caregiver experience, population health and provider experience and to maintain per capita costs, this research becomes increasingly relevant to inform evaluative efforts to ensure that measurements are capturing the identified desires of patients and caregivers depending on the priority population.
Patient-centred care improves health outcomes and is instrumental to addressing racial, ethnic and other healthcare inequities. We identified four areas where this work could be used to inform the development, implementation and evaluation of integration efforts across Canada. This includes how patients, specifically members of each of the different health states, should be engaged in planning and improvement efforts. Additionally, depending on the targeted priority population, these findings could help inform which partners and/or care providers should be considered part of the integrated care team (i.e., having caregivers included as part of the care team for those living with chronic conditions) and support public and patient involvement. As Canadian jurisdictions transition toward more integrated health systems, they will require measurement and evaluation plans. These findings will inform the development of quality improvement plans and the construction of meaningful outcome measures that consider the differences and needs between and among health states (i.e., access to transparent information with treatment plans for individuals living with acute life-threatening illnesses). Finally, targeted investments to improve the system must consider the potential magnitude of any benefits given that different patient populations will benefit to different degrees (i.e., directing resources toward online information for individuals living with mental health illnesses).
Limitations
First, the experiences with and perspectives on the health system presented in this review do not necessarily represent a full or comprehensive characterization of people who could fall within the respective health states. Relatedly, we recognize that patient populations may fall into multiple health states. This reinforces complexities in addressing gaps in health service delivery, particularly for Canadians living with multiple morbidities. Second, our characterization of health states was based on the literature, but other groupings or subgroupings would likely identify further distinctions. However, the hypothesized categorization comprehensively described the identified literature, with few exceptions. Third, much of the literature was reflective of patients and caregivers living in urban settings, with few studies focusing on the perspectives of patients and caregivers living in rural and/or remote locations of Canada. The collective literature was also relatively less reflective of certain social groups, including racialized populations, non-English-speaking communities, the unemployed or underemployed and persons living across the income and education spectrum. This underrepresentation makes it challenging to discern how health inequities may impact patient and caregiver desires of the health system. Finally, given the heterogeneity of study types, we could not make any assertions on the relative importance of one desire over another. Instead, the desires captured in this study are a composite of those most commonly expressed across all studies.
Conclusion
There were similarities in desires expressed across health states. However, the way these preferences and desires were expressed, and the examples of how to adapt health systems, varied across health states. If the healthcare system is going to be truly patient-centred, then one size does not fit all. The patient groups in our study -the walking well, those with mental and cognitive health challenges, those with life-threatening or more chronic conditions and those needing palliative care in the final stages of life -have varying preferences for and different perspectives on personalized care, health system information, choice, non-medical supports and the coordination and continuity of care. These findings can be used to inform patient-centred integrated care efforts on how the health system can be shaped or reshaped for identifiable patient groups. We highlight four particular ways this could work to support the development, implementation and evaluation of integration efforts. First, the results can support policy and practice planning by offering an improved understanding of the preferences of a variety of potential target populations; they could also, depending on the priority population, inform as to which partners should be included as part of the care team. Second, central to the effective development of patient-centred models is the meaningful engagement of patients, and these results provide some insight into differing experiences of patients based on their health state. Third, the results of this scoping review could be used to inform quality improvement efforts and evaluation strategies that reflect the desires of patients and caregivers. Finally, these results can inform the worthwhile targeting of investments, highlighting areas that are relevant and important for a variety of priority populations.
Assessment of Public Knowledge and Attitude towards Chronic Kidney Disease by Using a Validated Questionnaire: An Observational Study
Chronic kidney disease (CKD) is defined by a sustained reduction in kidney function (estimated glomerular filtration rate [eGFR] below 60 mL/min/1.73 m²) or by albuminuria (albumin-to-creatinine ratio ≥3 mg/mmol), which is considered a marker of kidney damage.
Chronic kidney disease affects a large share of the population; its prevalence consistently ranges from 11% to 13%, with the majority of cases in stage 3 (eGFR 30–59): 7.6% (95% CI: 6.4–8.9). The estimated worldwide gender-specific prevalence of CKD is 10.4% in men (95% CI: 9.3 to 11.9) and 11.8% in women (95% CI: 11.2 to 12.6). It is implicit that future research must focus on evaluating strategies that can help to prevent the progression of chronic kidney disease and improve cardiovascular disease outcomes [1]. Chronic kidney disease increases the risk of cardiovascular morbidity and premature mortality among patients, which substantially decreases quality of life. This risk increases as chronic kidney disease advances to higher stages with worsening excretory function (usually manifest as declining glomerular filtration rate (GFR) and increasing proteinuria) [2].
Chronic kidney disease is also associated with age-related renal function decline, accelerated by hypertension, diabetes, obesity and primary renal disorders [3]. CKD also shares the common features of glomerulosclerosis, vascular sclerosis and tubulointerstitial fibrosis, which suggests a common final pathway of progressive injury [4]. One of the major challenges of chronic kidney disease is that it is asymptomatic in its early stages, progresses to end-stage renal disease over a period of several years, and is often diagnosed late. Therefore, strategies to reduce progression to end-stage renal disease require effective methods of screening early in the disease process [5]. Compared with patients who had early evaluation, the risk of death is greater among patients evaluated late [6]. Early screening and detection thus help physicians to structure and implement a treatment strategy well suited to reducing the progression of the disease and its comorbidities [7]. Early referral resulted in cost savings and improved patient survival, along with more life-years free of RRT and fewer hospital inpatient days [8].
According to a report from the National Health Service (NHS) India, treating kidney disease costs more than skin, lung, and breast cancer combined. Early diagnosis of CKD thus helps to minimize the economic burden imposed on the patient [9]. In India, there is one doctor for every 1,445 people, based on the country's current population estimate of 135 crores, which falls short of the WHO's prescribed norm of one doctor per 1,000 people. Public awareness is considered an important determinant of the uptake of screening programs; however, data on public knowledge are scarce. Public awareness can play a significant role in the early detection and diagnosis of chronic kidney disease, which can save an enormous amount of healthcare spending. Against this backdrop, this study aims to assess the knowledge of and attitude towards chronic kidney disease among the general public in the northern part of India.
Study Design
This cross-sectional study was conducted by employing a self-administered questionnaire to assess the knowledge and attitude towards chronic kidney disease among general public.
The questionnaire consists of a total of 30 questions pertaining to knowledge of and attitude towards chronic kidney disease. The complete assessment took about 6 minutes to read and answer. This study was carried out in full compliance with the ethical standards provided by the Indian Council of Medical Research for such studies. The study protocol was approved by the ethics committee of the concerned institute. Informed consent was obtained from all participants prior to enrolment.
Statistical Analysis
Descriptive statistical methods were majorly employed to summarize the data on demographic characteristics and responses to questions concerning knowledge and attitude towards chronic kidney disease. The data was summarized as frequencies (n) and percentages (%) for categorical variables. Knowledge on chronic kidney disease was assessed by calculating total cumulative knowledge score for each participant. A mean chronic kidney disease knowledge score with standard deviation was assigned for each demographic characteristic.
Multiple linear regression analysis was conducted to identify factors associated with knowledge; the demographic variables were treated as independent variables and the knowledge score as the outcome variable. To identify factors associated with attitudes, multinomial logistic regression analyses were used. Unstandardized regression coefficients (β) and their 95% confidence intervals (CIs) were used to quantify the associations between variables and attitudes. Likewise, to identify factors associated with practices, binary logistic regression analyses were used. Odds ratios (ORs) and their 95% confidence intervals (CIs) were used to quantify the associations between variables and practices. Factors were selected with a backward selection procedure in a stepwise regression analysis. Data analyses were performed using SPSS (Statistical Package for the Social Sciences) version 25.0, and p < 0.05 was considered statistically significant.
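As a rough illustration of this modelling setup, the sketch below shows how the linear regression for the knowledge score and the logistic regression for a binary practice outcome could be specified in Python with statsmodels rather than SPSS; the data file and column names (knowledge_score, education, screened_for_ckd, etc.) are hypothetical placeholders, not the study's actual variable coding, and attitudes would be handled analogously with smf.mnlogit.

```python
# Minimal sketch (not the study's SPSS syntax): linear regression for the
# knowledge score and logistic regression for a binary practice outcome,
# using hypothetical column names in a pandas DataFrame `df`.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ckd_survey.csv")  # hypothetical file with one row per respondent

# Knowledge: multiple linear regression with demographics as predictors.
knowledge_model = smf.ols(
    "knowledge_score ~ C(education) + C(employment) + C(marital_status) + C(region)",
    data=df,
).fit()
print(knowledge_model.params)         # unstandardized coefficients (beta)
print(knowledge_model.conf_int())     # 95% confidence intervals

# Practices: binary logistic regression; exponentiated coefficients give odds ratios.
practice_model = smf.logit(
    "screened_for_ckd ~ C(education) + C(employment) + C(region)",
    data=df,
).fit()
print(np.exp(practice_model.params))      # odds ratios
print(np.exp(practice_model.conf_int()))  # 95% CIs for the odds ratios
```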
Results
A total of 507 participants completed the online and offline questionnaire-based cross-sectional study. A majority of participants were male (67.7%), lived in rural areas (53.5%), had either a bachelor's or a master's degree (77.4%), and did not have a family history of kidney stones (91.9%). The value of Cronbach's alpha coefficient for the questionnaire was 0.87, which is well above the acceptable threshold for internal consistency. Demographic characteristics and mean chronic kidney disease knowledge scores of participants are shown in Table 1. The mean chronic kidney disease knowledge score of all study participants was 16.49 (SD = ±7.0), with scores ranging from 0 to 29. The mean knowledge score of participants from the state of Jammu & Kashmir was nearly 40% lower than that of participants from other states of India.
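For readers unfamiliar with the internal-consistency coefficient reported above, the following is a minimal sketch of the standard Cronbach's alpha formula applied to item-level responses; it is not the authors' SPSS output, and the DataFrame and column names are assumptions for illustration.

```python
# Illustrative computation of Cronbach's alpha from item-level responses.
# `items` is a hypothetical DataFrame with one column per questionnaire item.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example usage with hypothetical item columns q1..q30:
# items = df[[f"q{i}" for i in range(1, 31)]]
# print(round(cronbach_alpha(items), 2))  # the paper reports 0.87
```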
Also, participants with a master's degree or above had the highest mean chronic kidney disease knowledge score (20.0) among all demographic groups (Table 2). Most participants were interested in knowing the health status of their kidneys. As Figure 1 shows, nearly 65% of participants had a knowledge score in the range of 17 to 24. All five variables added statistically significantly to the prediction, p < 0.05. Table 4 shows the results of the standard multiple regression analysis between CKD knowledge score and participant characteristics. The multivariate analysis shows that higher knowledge scores were associated with a higher level of education, such as holding a postgraduate or bachelor's degree or having completed school.
Unemployed participants had a significantly lower level of CKD knowledge than those who were employed. Participants who were single/never married had significantly higher knowledge scores than married participants. Participants from the state of Jammu & Kashmir had significantly lower knowledge scores for chronic kidney disease compared with the rest of India. No difference was found in the knowledge scores of participants with and without a family history of kidney stones.
Discussion
Conservative management as a treatment alternative to dialysis and kidney transplantation is gaining recognition in the United States. The Kidney Disease: Improving Global Outcomes (KDIGO) initiative strongly advocates conservative management as supportive care in chronic kidney disease and as a priority for improving patient-centered care. The results from this study demonstrate a good level of knowledge regarding chronic kidney disease.
Participants with a master's degree or above had the highest mean CKD knowledge score (20.0) among all demographic groups. Our findings align with those of Stanifer et al. [10] and Khalil et al. [11], which also showed that education level is a key determinant of awareness about chronic kidney disease.
The overall knowledge score about chronic kidney disease and its risk factors was higher in this study than that reported by Kumela et al. [12] in an Ethiopian study; however, the sample size in that study was smaller than ours.
This study found that the kidney's function of keeping the bones healthy had a lower knowledge score, whereas other kidney functions had comparatively better knowledge scores (Table 4). This study also suggests that the majority of participants believed that people with diabetes and kidney disease should stringently adhere to the medical advice provided to them by doctors. Also, most participants were interested in knowing the health status of their kidneys, indicating a positive attitude towards chronic kidney disease and related outcomes. The management of chronic kidney disease is very challenging because patients are mostly asymptomatic during the early stages of disease, the disease inevitably progresses to higher stages, and diagnosis is often late. Therefore, strategies to reduce the incidence of end-stage renal disease require effective methods of early screening in the disease process [5].
Early detection of chronic kidney disease allows implementation of treatments and strategies that can influence both the progression of kidney disease and cardiovascular health [7]. The costs of treatment for chronic kidney disease in India consume substantial resources [13].
High blood pressure as a risk factor was associated with a low knowledge score. The economic burden that hypertension imposes on the Indian population is quite high and adds to lifelong expenditure on antihypertensive drugs [14]. The gap in knowledge related to blood pressure was similar to that reported in a study in Saudi Arabia [15].
Earlier detection and improved knowledge about chronic kidney disease and its risk factors can significantly delay the progression of the disease and save substantial out-of-pocket expenditure in India.
Conclusion
To conclude, our findings suggest that Indian adults demonstrated good knowledge and a positive attitude regarding chronic kidney disease, which is important for disease prevention and early detection. However, knowledge was lower among older adults and less educated groups; targeting these groups with education could improve outcomes, delay progression and reduce the costs associated with treatment.
The study findings will help healthcare providers to understand the extent of knowledge of and attitudes towards CKD and thus provide relevant education to patients and family members. The kidney's function of keeping the bones healthy had a lower knowledge score, while other kidney functions had comparatively better knowledge scores. The multivariate analysis found higher knowledge scores to be associated with a higher level of education.
Association between frequency of spicy food consumption and hypertension: a cross-sectional study in Zhejiang Province, China
Background Hypertension is a known risk factor for multiple chronic diseases. Existing literature on the association between frequency of spicy food consumption and hypertension shows mixed findings. Methods The analyses are based on the Tongxiang baseline dataset of the China Kadoorie Biobank prospective study, including data from electronic questionnaires, physical measurements and blood sample collection. A total of 53,916 participants aged 30–79 years were included in the final analysis. Multivariable logistic regression was used to estimate the association of spicy food consumption with hypertension, and multiple linear regression was performed to explore the association of spicy food consumption with systolic and diastolic blood pressure. Results Of the 53,916 participants, 23,921 had prevalent hypertension. 12.3% of participants reported consuming spicy food weekly. Among female participants, after adjusting for socio-demographic status, lifestyle factors, BMI, waist circumference, sleep duration and snoring, when compared with females who never consumed spicy food, the odds ratios (95% CI) for hypertension were 1.02 (0.96–1.08), 0.90 (0.79–1.01), and 0.88 (0.78–0.99), respectively, for females who consumed spicy food less than once weekly, 1–2 times weekly, and ≥ 3 times weekly (Ptrend = 0.04). The corresponding odds ratios for males were 1.02 (0.95–1.09), 1.07 (0.95–1.20), and 0.91 (0.81–1.01), respectively (Ptrend = 0.39). Among current alcohol drinkers, compared to participants who never consumed spicy food, the odds ratio (95% CI) for hypertension among participants consuming spicy food daily was 0.98 (0.80–1.20). The corresponding figure for non-current drinkers was 0.72 (0.62–0.84). The association was stronger among non-current alcohol drinkers than among current drinkers (Pheterogeneity = 0.02). Conclusions Frequency of spicy food consumption is inversely associated with hypertension in females, but not in males. Supplementary Information The online version contains supplementary material available at 10.1186/s12986-021-00588-7.
Recently, a nationally representative survey, including 451,755 adults over 18 years of age from 31 provinces in mainland China, indicated that 23.2% (≈ 244.5 million) of Chinese adults had hypertension, and 41.3% (≈ 435.3 million) had prehypertension [4]. Hence, hypertension remains a serious public health problem in China.
Worldwide, spices are an essential part of culinary cultures, with a long history of use for flavoring, coloring and preserving food, as well as for medicinal purposes [5,6]. Spiciness has been recognized as one of the primary tastes since ancient times in Asia, especially in India and China [7,8].
In past decades, the effects of spicy food consumption on chronic diseases and other conditions, including obesity, cancer, ischemic heart disease, cerebrovascular disease, fracture, dyslipidemia and impaired cognitive function, have been studied [9][10][11][12][13][14][15]. Some studies suggest lower disease risks associated with consumption of spicy food [11,12,14], while others suggest disease risks are higher [9,10,15]. The association between spicy food and hypertension has received considerable attention in recent years. However, the results from existing studies are similarly controversial [12,[16][17][18]. For example, while Ahuja and colleagues found that four weeks of regular chilli consumption had no obvious effects on blood pressure among healthy free-living individuals [16], Tingchao et al. reported that frequency of spicy food consumption was inversely associated with risk of hypertension in women [12]. Hence, the aim of this study was to examine the association of frequency of spicy food consumption with hypertension using data from the CKB study in Tongxiang, Zhejiang.
Study population and design
Detailed information about the CKB study design, survey methods and population has been described elsewhere [19][20][21]. The data utilized in the current study were obtained from Tongxiang, one of the 10 regions included in the CKB study. In brief, 57,704 participants aged 30-79 years were recruited and participated in the baseline survey between August 2004 and January 2008. The baseline survey consisted of a questionnaire, physical measurements and blood sample collection. All survey operations were conducted by trained, qualified staff using standardized procedures. For the current study, participants who had a history of cancer (n = 163), stroke (n = 349), heart disease (n = 464), physician-diagnosed diabetes (n = 1380), or baseline screen-detected diabetes (n = 1432) were excluded. After these exclusions, a total of 53,916 (22,573 men, 31,343 women) participants remained for inclusion in the final analyses.
Outcome variable
Blood pressure was measured in a seated position at least twice using a digital sphygmomanometer (Omron UA-779) in a quiet room with constant indoor temperature around 20 °C. Two measurements were undertaken with a 5-min interval between measurements. If the difference between the first systolic blood pressure (SBP) and second SBP measurement was greater than 10 mm Hg, a third measurement was conducted and average of the last two measurements was recorded and used for analyses [1]. Participants were considered to be hypertensive if they had a measured SBP ≥ 140 mm Hg, a measured diastolic blood pressure (DBP) ≥ 90 mm Hg, or reported a prior history of doctor-diagnosed hypertension or use of antihypertensive medication.
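To make the outcome definition concrete, the following is a minimal sketch of how prevalent hypertension could be flagged from the measured and self-reported fields described above; the field names are hypothetical placeholders and do not reflect the CKB variable dictionary.

```python
# Hedged sketch of the outcome definition: hypertension if measured SBP >= 140 mm Hg,
# measured DBP >= 90 mm Hg, a prior physician diagnosis, or current use of
# antihypertensive medication. Field names are placeholders, not CKB variables.
from dataclasses import dataclass

@dataclass
class Participant:
    sbp_mmhg: float               # average of the recorded systolic measurements
    dbp_mmhg: float               # average of the recorded diastolic measurements
    diagnosed_hypertension: bool  # self-reported doctor diagnosis
    on_antihypertensives: bool    # self-reported medication use

def has_hypertension(p: Participant) -> bool:
    return (
        p.sbp_mmhg >= 140
        or p.dbp_mmhg >= 90
        or p.diagnosed_hypertension
        or p.on_antihypertensives
    )

# Example: SBP 132 / DBP 92 with no diagnosis or medication is still classified as hypertensive.
print(has_hypertension(Participant(132, 92, False, False)))  # True
```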
Assessment of exposure variable
Frequency of spicy food consumption was assessed through the question "how often did you eat hot spicy food during the past month?". Answer options included: "Never or almost never" (i.e., non-consumers), "only occasionally" (i.e., less than once weekly), "1-2 days/ week", "3-5 days/week", and "daily or almost every day". In analyses, those who chose "3-5 days/week" or "daily or almost every day"" were combined into one group (i.e., ≥ 3 days/week). The reproducibility of frequency of spicy food consumption was tested twice with a median interval of 1.4 years. Spearman's coefficient for the correlation was 0.71, indicating that spicy food consumption was reported consistently [11].
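As an illustration of how the exposure could be prepared for analysis, the sketch below recodes the questionnaire categories into the four analysis groups and checks test-retest agreement with a Spearman correlation; the category labels, column names and example data are assumptions, not the CKB coding scheme.

```python
# Minimal sketch (hypothetical labels and data): collapse "3-5 days/week" and
# "daily or almost every day" into one ">= 3 days/week" group, and compute the
# Spearman correlation between the baseline and resurvey reports.
import pandas as pd
from scipy.stats import spearmanr

recode = {
    "never": 0,           # never or almost never (reference group)
    "occasionally": 1,    # less than once weekly
    "1-2 days/week": 2,
    "3-5 days/week": 3,   # combined with "daily" in analyses
    "daily": 3,
}

df = pd.DataFrame({
    "spicy_baseline": ["never", "daily", "1-2 days/week", "occasionally", "3-5 days/week"],
    "spicy_resurvey": ["never", "3-5 days/week", "1-2 days/week", "never", "daily"],
})
df["spicy_group"] = df["spicy_baseline"].map(recode)

rho, _ = spearmanr(df["spicy_baseline"].map(recode), df["spicy_resurvey"].map(recode))
print(rho)  # the paper reports a coefficient of 0.71 for reproducibility
```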
Assessment of covariates
An interviewer-administered electronic questionnaire included socio-demographic characteristics (age, sex, education level, marital status, and household income), lifestyle factors (cigarette smoking, alcohol drinking, physical activity, fresh fruit intake, meat intake, sleep duration and snoring), personal medical history (cancer, stroke, diabetes, and heart attack), family history of hypertension, and, among women, menopause status.
Cigarette smoking and alcohol drinking were categorized into four groups according to participants' responses: (1) non-smokers (or non-drinkers); (2) former smokers (or former drinkers); (3) occasional smokers (or occasional drinkers); and (4) current smokers (or current drinkers) [22,23]. Total physical activity was converted into metabolic equivalent of task hours per day (MET-hours/day) based on transportation, occupation, housework, and non-sedentary recreation as described in previous studies [24,25].
Physical measurements included height, weight, and waist circumference (WC), measured using calibrated instruments by trained health workers. Standing height was measured to the nearest 0.1 cm with a stadiometer. Weight was measured to the nearest 0.1 kg with a body composition analyzer. Body mass index (BMI) was calculated as weight in kilograms divided by the square of standing height in meters, and obesity was defined as BMI ≥ 25.0 kg/m² [26]. WC was measured to the nearest 0.1 cm with a non-stretchable tape measure at the midpoint between the lowest rib and the iliac crest. Excessive WC was defined as ≥ 85 cm for males, and ≥ 80 cm for females [27]. A non-fasting venous blood sample was collected. Immediate on-site testing of plasma glucose level was undertaken.
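As a simple illustration of the anthropometric definitions above, the sketch below computes BMI from weight and height and applies the stated cut-offs (obesity: BMI ≥ 25.0 kg/m²; excessive WC: ≥ 85 cm for men, ≥ 80 cm for women); the function and variable names are hypothetical.

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index = weight (kg) / height (m) squared."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

def is_obese(bmi_value: float) -> bool:
    return bmi_value >= 25.0  # cut-off used in this study [26]

def excessive_wc(wc_cm: float, sex: str) -> bool:
    """Excessive waist circumference: >= 85 cm for males, >= 80 cm for females."""
    return wc_cm >= (85.0 if sex == "male" else 80.0)

example_bmi = bmi(62.0, 158.0)
print(round(example_bmi, 1), is_obese(example_bmi), excessive_wc(83.5, "female"))
```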
Statistical analysis
Statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). Descriptive statistics were presented as mean ± SD or percentages for continuous or categorical variables, respectively. To examine the association between frequency of spicy food consumption and risk of hypertension, univariate and multivariable logistic regression analyses were applied. Participants who never consumed spicy food were considered as the reference group. Potential confounding factors, including sociodemographic status and lifestyle factors, were adjusted for in different models. In model 1, odds ratios were adjusted for age (continuous) and sex. Model 2 included additional adjustment for education level (no formal school, primary school, middle school, and high school or above) and household income (≤ 19,999 yuan, 20,000-34,999 yuan, ≥ 35,000 yuan). Model 3 included additional adjustment for cigarette smoking (never, occasional, former, and current), alcohol drinking (never, occasional, former, and current), physical activity (continuous), meat consumption (daily and non-daily), fruit consumption (daily and non-daily), BMI (continuous), WC (continuous), snoring (never, occasional, and habitual) and sleep duration (continuous). Multiple linear regression analyses were further performed to explore the associations of frequency of spicy food consumption with systolic blood pressure and diastolic blood pressure. Stratified analyses were conducted to detect whether the associations of daily spicy food consumption with prevalent hypertension differed according to age (30-49 y or 50-79 y), education level (illiterate or primary school vs. above), household income (< 35,000 yuan or ≥ 35,000 yuan), physical activity (< 30 MET-h/d or ≥ 30 MET-h/d), smoking status (current smokers or non-current smokers), alcohol status (current drinkers or non-current drinkers), meat consumption (daily or non-daily), fruit consumption (daily or non-daily), BMI (< 25 kg/m² or ≥ 25 kg/m²), WC (normal or excessive), sleep duration (< 7.6 h/d or ≥ 7.6 h/d), or menopause status (menopausal or non-menopausal). In sensitivity analyses, 1128 participants with self-reported physician-diagnosed peptic ulcer disease were excluded from the analyses. Statistical significance was set at P < 0.05.
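To make the nested modelling strategy concrete, the sketch below shows how such progressively adjusted logistic regression models might be fitted with statsmodels in Python. The data frame, file name, column names, and category codings are hypothetical; the actual analyses were performed in SAS 9.4.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame with one row per participant; 'spicy' holds the
# consumption categories with "never" as the reference group, 'hypertension' is 0/1.
df = pd.read_csv("ckb_tongxiang_baseline.csv")  # hypothetical file

model1 = smf.logit(
    "hypertension ~ C(spicy, Treatment('never')) + age + C(sex)", data=df
).fit()

model2_formula = (
    "hypertension ~ C(spicy, Treatment('never')) + age + C(sex)"
    " + C(education) + C(income)"
)
model2 = smf.logit(model2_formula, data=df).fit()

model3_formula = (
    model2_formula
    + " + C(smoking) + C(alcohol) + physical_activity"
    " + C(meat_daily) + C(fruit_daily) + bmi + wc + C(snoring) + sleep_hours"
)
model3 = smf.logit(model3_formula, data=df).fit()

# Odds ratios and 95% confidence intervals for the spicy food categories.
or_table = np.exp(model3.conf_int())
or_table["OR"] = np.exp(model3.params)
print(or_table.filter(like="spicy", axis=0))
```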
Characteristics of participants
Of the 53,916 participants, 23,921 (44.4%) had prevalent hypertension. The proportions of participants who consumed spicy food never, less than once per week, 1-2 days/week, and ≥ 3 days/week were 64.0%, 23.7%, 5.8% and 6.5%, respectively.
Compared with non-consumers, participants who consumed spicy food frequently were more likely to be younger, male, well-educated, current smokers, current drinkers, habitual snorers, physically inactive, to consume meat and fruit frequently, to have higher BMI and WC, and to report longer sleep duration. No significant difference was found in household income (P = 0.51) or in menopause status (P = 0.65) according to frequency of spicy food consumption ( Table 1).
Similar to the association with systolic blood pressure, frequency of spicy food consumption was negatively related to diastolic blood pressure among females (P trend < 0.05), but no significant association was observed among males (P trend = 0.38) ( Table 4).
Subgroup analyses
In subgroup analyses, the strength of the association between frequency of spicy food consumption and hypertension was largely consistent across subgroups defined by age, education level, household income, physical activity, cigarette smoking, meat consumption, fruit consumption, BMI, WC, sleep duration, and menopause status (P heterogeneity > 0.05). However, there was a significantly stronger association among non-current alcohol drinkers (OR = 0.72, 95% CI, 0.62-0.84) than among current alcohol drinkers (OR = 0.98, 95% CI, 0.80-1.20) (P heterogeneity = 0.02) ( Table 5).
Discussion
This large cross-sectional study explored the association of spicy food consumption with prevalent hypertension. Frequent spicy food consumption was inversely associated with hypertension in females. However, such an inverse association was not found in males.
Frequency of spicy food consumption
A previously published paper from the CKB study indicated that 99.7% of participants in Hunan consumed spicy food weekly, while only 8.8% of participants in Haikou consumed spicy food weekly [9]. In the present study, the prevalence of weekly spicy food consumption was 12.3%, much lower than in the CKB population as a whole, in which 42.5% of participants consumed spicy food weekly [9]. This discrepancy reflects the geographic distribution of Chinese residents' preferences for spicy food. Zhejiang is located in the east of China, and compared with residents living in most other provinces, residents in Zhejiang prefer a blander diet rather than heavy flavors. Consistent with previous studies [12,28], participants who more frequently consumed spicy food were more likely to be young, male, current smokers, alcohol drinkers, and to more frequently eat meat and fruit. In contrast with an earlier study, which indicated that individuals with high levels of spicy food consumption had lower BMI levels than non-consumers [17], the present study documented that individuals who consumed spicy food more frequently seemed to have higher BMI and WC than non-consumers. This may reflect the use of crude BMI values, without adjustment for age. Sun et al. found that spicy food consumption was positively associated with adiposity, including both general and abdominal obesity [9], similar to the current study. This positive association may reflect the increased palatability of meals including spicy food [7].
Relationship of frequency of spicy food consumption with hypertension
Harada et al. found that SBP and DBP were significantly lower among hypertensive volunteers after administration of a mixture of capsaicin and isoflavone for 5 months, but not among normotensive volunteers [18]. In a randomized cross-over dietary intervention study from Australia, 36 individuals (22 women and 14 men) consumed a chilli diet (30 g chilli per day) and a bland diet (chilli-free) for 4 weeks each [16]. That study concluded that four weeks of regular chilli consumption had no obvious beneficial or harmful effects on SBP or DBP, which was inconsistent with the present study. A prospective cohort involving 13,670 Chinese adults, followed for a median of 9.0 years, has also examined this relationship [17].

Table 2. Unadjusted and adjusted odds ratios for hypertension associated with frequency of spicy food consumption among adults in Zhejiang (Models 1-3 adjusted as described in the Statistical analysis section; sex-stratified estimates were not adjusted for sex).

The China Health and Nutrition Survey (CHNS), which included 9,273 adults aged ≥ 18 years, was conducted in nine geographically diverse provinces. Findings from the CHNS indicated that, compared with females who did not eat spicy food, the adjusted odds ratios (95% CI) of hypertension for women who consumed spicy food 1-2 times/week, 3-4 times/week, and ≥ 5 times/week were 0.92 (0.78-1.09), 0.91 (0.73-1.13), and 0.74 (0.57-0.96), respectively, but this inverse association was not found in men [12]. This sex disparity in the association between frequency of spicy food consumption and hypertension was compatible with the current study.
Two large prospective population studies conducted in China and Italy illustrated that spicy food intake was associated with a lower risk of death due to cardiovascular diseases [11,14]. High blood pressure is a well-known risk factor for cardiovascular diseases [1,2], and the inverse association between spicy food consumption and hypertension found in the current study suggests the apparent protective effect of spicy food on cardiovascular disease mortality may be mediated via lowering of blood pressure.
In subgroup analyses, significant differences in the association of spicy food consumption with hypertension were observed across strata of alcohol consumption, with a stronger inverse association among non-current drinkers than current drinkers. Intriguingly, prospective analyses based on 0.48 million CKB participants showed that the inverse association of spicy food consumption with all-cause mortality was stronger in participants who did not drink alcohol than among those who did drink alcohol [11], which is in line with the current study.
Table 3. Unadjusted and adjusted β coefficients for systolic blood pressure associated with frequency of spicy food consumption among adults in Zhejiang (Models 1-3 adjusted as described in the Statistical analysis section; sex-stratified estimates were not adjusted for sex).

Although the mechanisms underlying the potential beneficial effect of spicy food consumption on hypertension have not yet been fully elucidated, several hypotheses have been proposed. First, capsaicin, the major pungent element in red pepper, is a neurotoxic agent and could activate transient receptor potential vanilloid type-1 (TRPV1), in turn improving endothelium-dependent vasorelaxation and lowering blood pressure [29]. Second, activation of TRPV1 could reduce vascular lipid accumulation and attenuate atherosclerosis [30]. Third, activation of TRPV1 may prevent adipogenesis and obesity [31]. Lastly, enjoyment of spicy food may significantly reduce individual salt preference, daily salt intake, and blood pressure by modifying the neural processing of salty taste in the brain [32].
Clinical and public health implications
The findings of the current study are of potential clinical and public health importance. Firstly, spicy food might be a valuable dietary intervention for prevention of hypertension in both healthy populations and high-risk groups, especially in regions with typically low consumption of spicy food, such as Zhejiang. Secondly, spicy food should not be considered a dietary taboo for patients with hypertension. However, further evidence from prospective and randomized studies will be important in establishing this potential clinical relevance.
Strengths and limitations
The strengths of this study include a large sample size, use of standardized data collection procedures, and strict control for established and potential risk factors for hypertension. Several limitations merit mention, however. First, the cross-sectional design restricts establishment of the temporal relationship of spicy food consumption with hypertension. Second, assessment of spicy food consumption in the current study was self-reported and subject to measurement error. Third, although multiple established and potential risk factors for hypertension were adjusted for in different models, residual confounding by other unmeasured or unknown biological and social factors is still possible.
Conclusions
In conclusion, the current study shows that frequency of spicy food consumption is inversely associated with hypertension in females, but not in males.

Table 4. Unadjusted and adjusted β coefficients for diastolic blood pressure associated with frequency of spicy food consumption among adults in Zhejiang (Models 1-3 adjusted as described in the Statistical analysis section; sex-stratified estimates were not adjusted for sex).
Innate immunity and hepatitis C virus infection: a microarray’s view
Hepatitis C virus (HCV) induces a chronic infection in more than two-thirds of HCV infected subjects. The inefficient innate and adaptive immune responses have been shown to play a major pathogenetic role in the development and persistence of HCV chronic infection. Several aspects of the interactions between the virus and the host immune system have been clarified and, in particular, mechanisms have been identified which underlie the ability of HCV to seize and subvert innate as well as adaptive immune responses. The present review summarizes recent findings on the interaction between HCV infection and innate immune response whose final effect is the downstream inefficient development of antigen-specific adaptive immunity, thereby contributing to virus persistence.
Hepatitis C virus (HCV) is a Hepacivirus of the Flaviviridae family, mainly involved in hepatic disorders, including chronic hepatitis which may progress to cirrhosis in about 10-20% of cases and further to hepatocellular carcinoma (HCC) in 1-5% of cirrhotic patients [1]. Furthermore, HCV has also been implicated as one of the major etiologic factors for type II Mixed Cryoglobulinemia (MC), an autoimmune disease that may evolve into an overt B-cell non-Hodgkin's lymphoma (NHL) in about 10% of MC patients [2][3][4].
HCV is an enveloped positive-strand RNA virus. Six major HCV genotypes and more than 100 subtypes have been identified so far [5][6][7]. HCV genomic RNA contains a single open reading frame flanked by 5′ and 3′ untranslated regions (UTRs) [8,9], which encodes a single large polyprotein that is processed by cellular and viral proteases to produce structural as well as nonstructural proteins [9,10].
HCV entry into hepatocytes is mediated by several putative receptors, including the CD81 tetraspanin [13], the scavenger receptor class B type I [14], and the tight junction proteins claudin [15,16] and occludin [17], the latter two conferring species specificity.
Innate immune response to HCV
As for all microbial infections, innate immune response plays a critical role in the control and resolution of HCV infection providing signals for the efficient priming of the adaptive branch of immune response [18,19].
In particular, the innate immunity is important in HCV infection to control viral dissemination and replication in order to allow an adequate downstream development of antigen-specific humoral as well as cellular responses [20].
During the early phase of HCV infection, the viral RNA load increases in the first few days and remains high throughout the incubation period, which lasts for up to 10-12 weeks post-infection [21,22]. In this early stage, large amounts of type I interferons (IFN-α, IFN-β) may be produced by HCV-infected hepatocytes as well as dendritic cells (DCs) to control viral replication [23].
Besides producing type I IFN, DCs represent the key cell compartment of innate immunity, orchestrating the quality and potency of downstream adaptive immune response. They are professional antigen presenting cells (APCs) able to uptake and process viral antigens, as well as release cytokines to efficiently prime both CD4+ helper T cells and CD8+ cytotoxic T lymphocytes (CTLs) [24].
In particular, the subset of plasmacytoid DCs (pDCs) is considered the front line in antiviral immunity owing to their capacity to rapidly produce high amounts of type I interferon in response to viruses, upon recognition of viral components and nucleic acids through Toll-like receptor (TLR) 7 and TLR9 [25,26]. To further support such a role, several reports show a decreased frequency of pDCs in peripheral blood of patients with chronic HCV infection and impaired production of IFN-α by pDCs from HCV patients [27][28][29]. Also, myeloid or conventional DCs (cDCs) are programmed to produce IFN-α in response to viral infection upon interaction between the viral double-stranded RNA-like molecule polyinosinic:polycytidylic acid (poly I:C) and TLR3 [30]. Moreover, cDCs produce high amounts of cytokines, such as IL-12, which has been shown to play an important role in stimulating IFN-γ production from activated T cells, inducing the development of type 1 (Th1) protective immune responses [31,32]. Indeed, a recent study showed that an increased number of cDCs during acute HCV infection may be associated with viral clearance, whereas a loss in the number of cDCs may increase the risk for development of chronic HCV infection [33,34].
Natural killer (NK) cells are also potent antiviral effectors owing to their contribution to virus elimination via direct killing of infected cells and cytokine production [35]. Genetic factors appear to contribute to the level of NK cell responsiveness, as shown by individual killer cell Ig-like receptor/human leukocyte antigen (KIR/HLA) compound genotypes correlated with HCV clearance [36]. In particular, given that the interaction between KIRs expressed on NK cells and HLA expressed on target cells plays a key role in NK cell activation, it has been suggested that such genotypes are characterized by a higher sensitivity of NK cells, with faster degranulation and IFN-γ release in vitro [37].
Nevertheless, the innate immune response to HCV may also be detrimental, inducing immunopathological effects on the liver. NK-mediated killing of HCV-infected hepatocytes and secretion of proinflammatory cytokines may cause liver damage, stimulating cDCs to produce high amounts of IFN-γ which, subsequently, activates hepatic macrophages to enhance local inflammation [38]. This cascade of events contributes to the pathogenesis of liver disease [39].
The overall data demonstrate the complex, contradictory and evolving equilibrium between HCV and host innate immunity, whose result will lead to completely different clinical outcomes ranging from resolution to chronic viral infection.
Pattern recognition receptors in sensing virus infection
The innate immune response to virus infection is activated when conserved motifs of microbial origin, known as pathogen-associated molecular patterns (PAMPs) are recognized by cell pattern recognition receptors (PRRs) [40].
The 3 major classes of PRRs include Toll-like receptors (TLRs), RIG-I-like receptors (RLRs), and nucleotide oligomerization domain (NOD)-like receptors (NLRs) [41][42][43]. Viral engagement of TLRs and RLRs leads to the activation of transcription factors, such as the IFN regulatory factors (IRFs) and NF-κB, which in turn may lead to the activation of IRF3 target genes, type I IFN, and proinflammatory cytokines [44].
So far, the role of NLRs in sensing RNA viruses is still unclear, and they are primarily thought to be activated by intracellular stress signals (i.e. damage-associated molecular patterns, DAMPs) [45]. In this regard, DAMPs derived from HCV-infected hepatocytes may play a critical role in promoting liver inflammation with immunopathological effects.
Innate immune cells (i.e. monocytes, neutrophils, dendritic cells) are rapidly activated upon recognition of infecting agents by a wide range of PRRs. Among them, the Toll-like receptors are members of the interleukin-1 receptor (IL-1R) superfamily [46,47], characterized by a leucine-rich repeat (LRR) domain in the extracellular region and an intracellular Toll/IL-1R (TIR) domain [48].
So far, 11 TLRs have been identified. TLR1, TLR2, TLR6 and TLR10 belong to the TLR2 subfamily, whereas TLR7, TLR8 and TLR9 belong to the TLR9 subfamily [49]. Each TLR has specific ligands, which allow the host to sense a wide diversity of pathogens [50]. As a result of TLR stimulation, proinflammatory cytokines are released that activate the host immune response [51][52][53].
Most TLRs signal through the adaptor protein MyD88, leading to NF-κB activation and the production of proinflammatory cytokines. In contrast, TLR3 signalling is independent of MyD88 and is mediated by the TIR-domain-containing adaptor inducing IFN-β (TRIF), also known as Toll-like receptor adaptor molecule 1 (TICAM-1), with induction of the interferon regulatory factor-3 (IRF-3) transcription factor and subsequent production of IFN-β [59].
The RIG-I-like receptors (RLRs) are sensors of viral RNA, consisting of three members: RIG-I, melanoma differentiation antigen 5 (MDA5) [60] and laboratory of genetics and physiology-2 (LGP2) [61][62][63][64]. RLRs are expressed in the cytoplasm of most cells, including hepatocytes, representing good candidates as primary intracellular sensors of HCV infection. Both RIG-I and MDA5 contain 2 N-terminal caspase activation and recruitment domains (CARD) [65]. Moreover, all the RLRs have a DExD/H RNA helicase domain and bind to RNA ligands [66]. In addition, RIG-I has a repressor domain that interacts with the CARD domains to maintain the receptor in a non-active conformation in the absence of infection [67].
LGP2 lacks the CARD domains and may function as a regulator of RLR signalling [68]. RIG-I senses non-self double-stranded RNAs (dsRNAs) with free 5′-triphosphates and is recruited to the mitochondrial surface, where it interacts with MAVS (mitochondrial antiviral signalling protein; also known as IFN-β promoter stimulator 1 (IPS-1), virus-induced signaling adapter (VISA), and CARD adaptor inducing IFN-β (Cardif)) on the outer mitochondrial membrane. MAVS is a CARD protein and an essential adaptor for RLR signalling [69]. The interaction between RIG-I and MAVS results in the activation of the transcription factors interferon regulatory factor-3 (IRF-3) and nuclear factor-κB (NF-κB), with subsequent transcription of IFN-β [70].
Overall, the data show that, regardless of the class of PRRs engaged upon viral recognition, the activated pathways in cells of the innate immunity converge to induce the production of type I IFN with highly effective antiviral activity.
IFN signalling during HCV infection
The production of IFN-β resulting from HCV infection leads to activation of the JAK (Janus kinase)/STAT (signal transducer and activator of transcription) signalling pathway with the expression of interferon-stimulated genes (ISGs) (Figure 1) [71,72].
Among others, an increased expression has been described for the 2′-5′-oligoadenylate synthetase 1 (OAS1)/RNase L system, which degrades viral and cellular RNA [73], and the RNA-specific adenosine deaminase acting on RNA 1 (ADAR1), which converts adenosine residues into inosine residues in dsRNA [74], thereby mutating and destabilizing secondary viral RNA structures [75]. Similarly, induction of other ISGs, such as P56 [76] and protein kinase R (PKR) [77], which inhibit translation of viral and host RNAs [78], has been reported. Such induction of ISGs will ultimately amplify the IFN response in a loop fashion, given that some pattern recognition and signalling molecules, such as RIG-I, TLR3 and TRIF, are ISGs per se, whose final outcome is the production of IFN-β (Figure 1) [79].
In this complex framework, HCV establishes a chronic persisting infection when it is able to disrupt the host immune response and to evade antiviral defenses. A major strategy employed by HCV to subvert the host innate immune response is to undermine IFN antiviral activity [80] as well as the functions of innate immune cells.
Regarding the IFN activity, the main targets of HCV are the PAMP signalling pathways leading to IRF-3 activation, the IFN-α/β receptor signalling pathway, and the ISG effector proteins (Figure 1). In particular, the HCV NS3/4A protein cleaves the adaptor molecules TRIF and IPS-1, thereby blocking the TLR3 and RIG-I signalling pathways [81]. Moreover, the HCV core protein interferes with JAK/STAT signalling and ISG expression by several strategies, including 1) inhibition and degradation of STAT1 [82]; 2) induction of the suppressor of cytokine signaling 3 (SOCS3) and protein phosphatase 2A (PP2A), which inhibit the JAK/STAT pathway [83] and reduce the transcriptional activity of ISG factor 3 (ISGF3) [84], respectively; and 3) inhibition of ISGF3 binding to IFN-stimulated response elements (ISRE).
Furthermore, several HCV proteins directly interfere with the function of ISGs. Indeed, functional genomics analyses have shown that the NS5A protein induces a general attenuation of ISG expression via an increased secretion of IL-8 [85] and a subsequent modulation of IFN functions. In support of this inhibitory mechanism, serum levels of IL-8 have been found to be elevated in patients with chronic hepatitis C. In addition, NS5A inhibits 2′-5′-oligoadenylate synthetase (2′-5′ OAS) and PKR function [86], and E2 acts as a decoy target for PKR [87].
Even though such escape strategies still need to be validated in vivo, the available data strongly suggest that HCV has established redundant means to cope with the host IFN response.
Interactions between HCV and cellular compartments of innate immunity
More recently, several reports have suggested that HCV itself may actively suppress the host immune response by inhibiting the function of innate immune cells.
NK cells
NK cell activation, during the early phase of HCV infection, is involved in viral eradication, whereas direct suppression of NK cells may be implicated in HCV chronic persistence [88]. It has been reported that the binding of HCV E2 protein to the CD81 receptor on NK cells inhibits their function and IFN-γ production [89,90]. Such results, however, are still controversial and their biological significance in vivo needs to be verified, considering that HCV E2 does not efficiently crosslink CD81 on NK cells when it is part of infectious virions, and NK cell function is not impaired after in vitro exposure to high concentrations of cell culture-produced HCV [91].
The HCV core protein induces an up-regulation in the expression of major histocompatibility complex (MHC) class I molecules on the surface of hepatocytes by increasing the expression of transporter associated with antigen processing 1 (TAP1). The resulting suppression of NK cell activation and cytotoxic activity significantly contributes to HCV persistence [92].
In addition, it has been reported that NK cells from chronically HCV-infected individuals show an increased expression of the CD94/NKG2A inhibitory receptor, as well as production of the immunosuppressive cytokines IL-10 and TGF-β. The sum of these effects is a functional impairment of the ability of NK cells to activate DCs and, ultimately, to drive the generation of Th1 CD4+ T cells [93].
DC cells
DC response to HCV in the early stage of infection is crucial in determining the outcome of the disease and several studies indicate that chronic HCV-infected individuals show an impaired function of DC subsets.
The frequency of circulating pDCs [94], as well as their ability to produce IFN-α upon in vitro stimulation [95] are reduced in chronic HCV patients and different mechanisms have been proposed. HCV core and NS3 proteins have been shown to activate in vitro monocytes via TLR2 to produce TNF-α, which in turn inhibits IFN-α production and induces pDCs apoptosis [96]. Alternatively, HCV may directly inhibit IFN-α production by pDCs in vitro [97].
Similarly, maturation and functional differentiation of cDCs are altered in HCV infection, with decreased IL-12 and increased IL-10 production in vitro [98,99], possibly resulting in insufficient T cell priming and delayed HCV-specific T cell responses. However, the impaired allostimulatory capacity of cDCs in chronic HCV patients is still contradictory, being described in some [100][101][102] but not all studies [103].
For both subsets of DCs, indeed, functional defects have been observed in vitro upon stimulation with individual HCV proteins but do not reflect an immune compromised status in chronic HCV patients, who show normal responsiveness to other viruses or recall antigens (reviewed in [104,105]). Therefore, the impaired efficacy of DCs compartment (i.e. pDCs) in chronic HCV patients would not be due to a "primary" dysfunction of such cells in producing type I IFNs but, more likely, to a "secondary" non-responsiveness of target HCV-infected hepatocytes, given the capacity of HCV to inhibit the IFN-stimulated signal pathway.
Overall, the data suggest that HCV interacts with and affects the function of different actors of the innate immunity. HCV interference is diverse with regard to cellular levels, targets and outcomes; however, the overall disruption of the coordinated activity of the innate immune response results in a deficient adaptive immune response and prevention of pathogen elimination (Figure 2).
HCV disease: a microarray's view
The interplay between HCV and innate immunity can nowadays be addressed and studied by systems biology approaches, which provide a detailed level of investigation to better and more fully analyze the network of interactions between the virus and innate immunity. In contrast to the traditional "reductionist" approach, the paradigm of systems biology is to look at a biological system as a whole, evaluating interactions among biological elements and their relationship with the surrounding environment. Systems biology has been increasingly applied to oncology [106][107][108], autoimmunity and infections [109,110] and only recently to vaccinology [111][112][113].
Microarray analyses of gene transcriptional profiles have been performed to identify molecular signatures of the innate immunity compartment related to HCV infection (Table 1).
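The type of microarray comparison described in the studies below can be illustrated with a toy differential-expression analysis. The sketch assumes a hypothetical, already normalized and log2-transformed expression matrix (genes × samples) with HCV and control groups, and uses a per-gene Welch t-test with Benjamini–Hochberg correction; it is not the pipeline used in any of the cited studies.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Hypothetical log2-scale expression matrix: 1,000 genes x 6 HCV livers and 6 controls.
genes = [f"gene_{i}" for i in range(1000)]
hcv = rng.normal(8.0, 1.0, size=(1000, 6))
ctrl = rng.normal(8.0, 1.0, size=(1000, 6))
hcv[:50] += 1.5  # simulate 50 up-regulated ISG-like genes in the HCV samples

# Per-gene Welch t-test and log2 fold change (HCV minus control).
t, p = stats.ttest_ind(hcv, ctrl, axis=1, equal_var=False)
log2fc = hcv.mean(axis=1) - ctrl.mean(axis=1)

# Benjamini-Hochberg false discovery rate correction.
reject, fdr, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")

up = [g for g, fc, q in zip(genes, log2fc, fdr) if q < 0.05 and fc > 1]
down = [g for g, fc, q in zip(genes, log2fc, fdr) if q < 0.05 and fc < -1]
print(f"{len(up)} up-regulated and {len(down)} down-regulated genes "
      f"(FDR < 0.05, |log2FC| > 1)")
```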
It has been recently shown that specific immune genes are significantly increased in HCV cirrhotic liver as compared to control normal tissue [114]. Such genes include IRF1, tripartite motif-containing 22 (TRIM22), and multiple leukocyte immunoglobulin-like receptors (LILRA1, LILRA4, LILRA5, LILRB2, LILRB3 and LILRB4), which have been reported to play a role in the virus-host interaction.
IRF1 is a critical transcriptional regulatory factor that modulates ISG expression and has been shown to regulate HCV subgenomic replicon activity in cultured hepatoma cells [115]. Interestingly, polymorphisms in the IRF1 promoter have been associated with a better response to IFN-α therapy in patients with chronic HCV infection [116].
TRIM22 belongs to the tripartite motif family of proteins which have been associated with innate immune response to viruses, inhibiting viral replication [117].
Moreover, multiple leukocyte immunoglobulin like receptors (LILRs) are known to be expressed on myelomonocytic cells and can influence both the innate and acquired immune response. In particular, LILRB2 has been previously reported to be up-regulated also in HIV patients and may impair the antigen presentation function of monocytes [118], whereas, LILRB4 has been shown to impair antigen presentation and T cell recruitment modulating the expression of proinflammatory cytokines [119].
A different study identified a total of 524 genes differentially expressed in "advanced HCV" as compared to non-viral hepatitis, with 466 up-regulated genes and 58 down-regulated genes [120]. The most affected biological functions observed in "advanced HCV" include the canonical pathways of calcium signalling, hepatic fibrosis/stellate cell activation, and actin cytoskeleton signalling. Moreover, many differentially expressed genes involved in the pathways of the immune system, fibrosis, proliferation, cell growth, and apoptosis have been found to be up-regulated, in agreement with previously published data [121][122][123]. The majority of such genes are involved in the immune and inflammatory response, including the class II major histocompatibility complex genes HLA-DQα1 and HLA-DRα1, as well as chemokines and chemokine receptors [124]. Moreover, a microarray analysis performed on the Huh7 hepatocarcinoma cell line demonstrated that infection with HCV JFH-1 viral particles alters the expression of host genes involved in cellular defense mechanisms that protect the cell against infection and oxidative stress and which, in turn, determine the fate of cellular survival [125]. Furthermore, HCV JFH-1 infection is able to stimulate the expression of proinflammatory antiviral response genes, including those involved in type I and II interferon responses (e.g. IRF1, IRF9, and myxovirus resistance 1/interferon-inducible protein p78 (MX1)), the complement cascade (e.g. mannose-binding lectin 2 (MBL2) and mannose-binding protein-associated serine protease 1 (MASP1)), and the production of proinflammatory chemokines and cytokines (e.g. IL-8 and the CXC chemokine ligands CXCL1, -2, -3, -5, -6, and -16) [126]. Increased expression of genes encoding negative regulators of the interferon response has also been observed, including several members of the SOCS gene family (e.g. SOCS2 and SOCS3) [127].
In this framework, our group evaluated differential gene expression by microarray analysis on liver biopsies obtained from chronic HCV and control negative patients [128]. Unique gene signatures were identified and, in particular, the HCV infected liver tissue showed strong up-regulation of genes involved in antigen presentation, protein ubiquitination, interferon signaling, IL-4 signalling, bacteria and viruses cell cycle and chemokine IL-4 signalling pathways.
Data analysis focused on the expression levels of specific genes related to the innate immunity pathway showed a strong up-regulation of genes involved, at multiple levels, in the pathway of Type I IFN signalling, including the STAT1 transcription factor and the downstream regulated genes (ISGs), in agreement to studies from other groups [129].
Moreover, MHC components of antigen processing and presentation, such as HLA-F, beta-2-microglobulin (B2M), CD7 and TAP1, have been found up-regulated in HCV-positive liver tissue. In particular, TAP1 is involved in the transport of antigens from the cytoplasm to the endoplasmic reticulum for association with MHC class I molecules [134]. Indeed, it is well known that the HCV core protein enhances MHC class I molecule function by increasing the expression of TAP1, thus contributing to HCV persistence by suppressing the cytotoxic activity of NK cells [135].
Furthermore, data analysis focused on the expression levels of specific genes related to the innate immunity pathway shows a relevant activation trend of the flagellin-dependent TLR5, associated with activation of IRAK1 (variant 3) and decreased levels of IL-10 and IL-1β (submitted for publication).
Such overall data shed light on specific pathogenetic mechanisms and gene signatures involved in HCV-related disease and suggest the relevant role of innate immunity in progression of HCV infection. Furthermore, the analysis of relevant pathways and specific genes involved in HCV progression to cancer may have a relevant impact on the early identification of "progressors" to select for appropriate therapeutic actions.
Indeed, although advances have been made in the treatment of HCV chronic infection with the combination of pegylated interferon (PEG-IFN) and ribavirin, treatment failure still occurs in about half of the patients, and prediction of treatment response would be of great value.
In this perspective, several studies employed gene expression profiling analysis to investigate the molecular basis for treatment failure in HCV chronic infection.
In particular, a systems biology approach using high-throughput technologies, such as complementary DNA microarrays combined with mathematical modeling, was applied to identify a liver gene signature to predict sustained virological response to PEG-IFN plus ribavirin in patients with HCV chronic hepatitis [136,137]. To this aim, expression profiling analysis was performed on liver biopsy specimens taken before therapy, and gene expression levels were compared among 15 nonresponders (NR), 16 responders (R), and 20 healthy subjects.
Eighteen genes were differentially expressed between responders and nonresponders. Up-regulation of a specific set of IFN-responsive genes predicted poor response to therapy, suggesting a possible rationale for treatment resistance.
In particular, upregulation of the following 8 genes showed the most consistent ability to classify NR and R subjects: ISG15, activating transcription factor 5 (ATF5), IFN-induced protein with tetratricopeptide repeats (IFIT1), MX1, ubiquitin-specific protease 18 (USP18), dual specificity phosphatase 1 (DUSP1), cyclin E binding protein (CEB1) and 40S ribosomal protein S28 (RPS28). Overall, the study showed that different innate IFN response to HCV infection may significantly impact on the responsiveness to PEG-IFN plus ribavirin therapy and identify NR and R patients [137].
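As a schematic illustration of how such a small IFN-responsive gene signature could be used to separate responders from nonresponders, the sketch below fits a penalized logistic classifier on a hypothetical pretreatment expression matrix restricted to the eight reported genes. The expression values, sample labels, and resulting performance are simulated and carry no clinical meaning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
signature = ["ISG15", "ATF5", "IFIT1", "MX1", "USP18", "DUSP1", "CEB1", "RPS28"]

# Hypothetical pretreatment liver expression (log2 scale) for 31 patients:
# 15 nonresponders (label 1) and 16 responders (label 0).
X = rng.normal(0.0, 1.0, size=(31, len(signature)))
y = np.array([1] * 15 + [0] * 16)
X[y == 1] += 1.0  # simulate higher baseline ISG expression in nonresponders

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC on simulated data:", scores.mean().round(2))
```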
Conclusions
The host immune response plays a critical role in HCV infection because of its potential to contribute not only to viral clearance but also to liver injury. HCV attenuates both innate and adaptive immune responses, thereby reducing the viral clearance as well as the degree of immunemediated liver injury, allowing coexistence of both virus and host. Key questions for future studies remain for nearly every aspect of the host immune response; so far, the pathogenetic mechanisms involved in progression to distinct HCV-related malignant tumors are still ill defined.
However, the analysis of the innate immune pathways involved in HCV chronic infection would help elucidate the possible mechanisms leading to HCV-related cancers, such as HCC or B-cell NHL.
Future studies focused on the analysis of relevant pathways and specific genes involved in HCV infection and progression to cancer would have a relevant impact on the understanding of HCV-related carcinogenesis (HCC and/or B-cell NHL) as well as on the management of HCV-infected subjects, making it easier to identify "progressors" to select for appropriate preventive/therapeutic actions.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
LB and AP drafted the manuscript. MLT participated in the drafting of the manuscript. FMB conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.
Do We Need a New Approach to Cancer Biology Education for Radiation Oncology Residents?
Traditional radiation oncology biology courses largely focus on radiation biology and oncology as needed for passing the boards. Changes in the landscape of oncology necessitate a broader scope. Radiotherapy is an important component of cancer care. Approximately 70% of all cancer patients receive radiotherapy during the course of their disease. With the revolution in precision medicine that is unfolding, genomics, proteomics, metabolomics, and microbiomics are being ever more integrated into the treatment of cancer. Comprehensive knowledge of cancer biology beyond traditional radiation biology is essential for future advances in radiotherapy and unavoidable for radiation oncology trainees. The importance of a newly designed curriculum to impart broader knowledge to future radiation oncologists is emphasized in this report. A paradigm shift in the approach to the traditional radiation biology course is required to train residents for the future of oncology.
Editorial Background
The impact of cancer on all our lives emphasizes the need for continuous training to pursue research into its cure and prevention. Radiation oncology provides a substantial degree of benefit to cancer patients, but it is very important to understand the effect of radiation as a potent modulator of the genetic and cellular activity of cancer. The main objective of this commentary is to describe the need for a profound change in how cancer biology education is valued for radiation oncology residents, and to provide cancer-focused educators with the ability to develop robust cancer education and comprehensive career development programs.
Discussion
Oncology has entered a new era. Technical advances in collecting and analyzing data in the fields of genomics, proteomics, metabolomics, and microbiomics are fundamentally redefining our understanding of cancer and how to treat it. Massive amounts of genetic data procured through such initiatives as the Cancer Genome Atlas [1], the 100,000 Genomes Project [2], and the American Association for Cancer Research (AACR) Project Genomics, Evidence, Neoplasia, Information, Exchange (GENIE) [3], have become increasingly annotated with clinical data. These Herculean efforts have resulted in a quantum leap in our understanding of the genetic drivers of diverse cancers [4] and illuminated a myriad of fundamental cancer biology concepts [5][6][7] right down to the individual level. This has led to the development of new targeted agents, and patients now regularly undergo genetic sequencing to help guide their treatment [8]. We are in the age of precision medicine. A detailed and comprehensive understanding of cancer biology will become essential to the practice and future advancement in all fields of oncology.
Gone are the days when faculty could focus on radiobiology only for basic science didactic purposes. We must move beyond focusing solely on radiation's effects on a cell and start considering the impact that an individual cancer's cell biology and its tumor microenvironment will have on radiation therapy. Most residents enter radiation oncology programs with limited knowledge about cancer biology and may be somewhat unprepared to reap the benefits of the world of big data, which emanated from the Human Genome Project [9].
If "genomically guided radiation therapy is a necessity that must be embraced in the coming years" [10], we must prepare now. How can faculty provide an essential educational foundation without overwhelming radiation oncology residents? How can our residency programs better prepare our trainees for a future in which information technology constantly pushes the frontiers of precision medicine [11]? How can we arm our residents with the ability to discriminate between experimental radiation and pharmacological combination therapies which are based on sound scientific principles from those based on wishful thinking?
We recently attempted to address these and related concerns through a new Cancer Biology for Radiation Oncologists course. The course structure is designed to embed a lasting framework of fundamental principles in the minds of budding radiation oncologists so that they may be better able to apply guaranteed future advancements in technology, biology, and pharmacology to their medical practices. The course also addresses topics of immediate relevance to radiation oncologists, such as DNA repair mechanisms and molecular pathways providing radiation resistance.
Many uncertainties contribute to anxiety about the future of radiation oncology, and the specter of taking the American Board of Radiology Initial Certification Exam looms large in the minds of trainees at many institutions. Our new course devotes considerable attention to content which is unlikely to appear on this or other national exams, yet this course has been enthusiastically embraced by our residents.
In Figure 1, traditional radiation biology courses (shown on the left in red) emphasize themes that allow trainees to fully understand the biomedical responses of tumors and patients to various doses of ionizing radiation under every conceivable dose delivery modality. Cancer biology courses tailored to future radiation oncologists (shown on the right in blue) emphasize themes that allow trainees to fully understand the various processes of neoplastic transformation and progression, the roles of critical genes and gene products in these processes, and their relationships to personalized medicine. Certain concepts (shown at the intersection in purple) are common to both courses, although their associated learning objectives may differ due to divergent relationships to other concepts taught in these courses (arrows).
Conclusions
We believe our enhanced approach to cancer biology education can help solve some of the challenges facing the radiation oncology community at large, but more importantly, improve health outcomes for the patients we all serve. It is important to note that cancer biology courses tailored to future radiation oncologists are not intended to replace radiation biology courses; rather, they are intended to supplement such courses.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Distribution of Polysulfide in Human Biological Fluids and Their Association with Amylase and Sperm Activities
Intracellular polysulfide could regulate the redox balance via its anti-oxidant activity. However, the existence of polysulfide in biological fluids still remains unknown. Recently, we developed a quantitative analytical method for polysulfide and discovered that polysulfide exists in plasma and responds to oxidative stress. In this study, we confirmed the presence of polysulfide in other biological fluids, such as semen and nasal discharge. The levels of polysulfide in these biological fluids from healthy volunteers (n = 9) with identical characteristics were compared. Additionally, the circadian rhythm of plasma polysulfide was also investigated. The polysulfide levels detected from nasal discharge and seminal fluid were approximately 400 and 600 μM, respectively. No correlation could be found between plasma polysulfide and the polysulfide levels of tear, saliva, and nasal discharge. On the other hand, seminal polysulfide was positively correlated with plasma polysulfide, and almost all polysulfide contained in semen was found in seminal fluid. Intriguingly, saliva and seminal polysulfide strongly correlated with salivary amylase and sperm activities, respectively. These results provide a foundation for scientific breakthroughs in various research areas like infertility and the digestive system process.
Introduction
Thiols are one of the most important targets of posttranslational modification via redox reactions owing to their nucleophilicity. When a thiol is exposed to reactive oxygen species (ROS) or reactive nitrogen species (RNS), it is oxidized to -SOxH or -SNO [1,2]. Reversible modification of thiols forms disulfide bonds.
Some electrophilic compounds, such as reactive aldehydes, can modify thiols irreversibly [3,4]. These modifications sometimes change protein structure and/or activity; thus, thiols could be important modulators of protein function [5]. Recently, the investigation of reactive sulfur species (RSS) provided a paradigm shift in the redox biology of thiols. RSS are defined as sulfur-abundant molecules, such as cysteine persulfide (CysSSH) and glutathione persulfide (GSSH) [6,7]. These sulfur-bound sulfur atoms are called "sulfane sulfur" [8], which confers stronger nucleophilicity on the thiol via the α-effect [9]. For example, the pKa of CysSSH is 4.3, while that of cysteine (CysSH) is 8.4 [9]. Thus, hydropolysulfide is a better target for posttranslational modification than thiol. Cysteine polysulfide is commonly measured by alkylation and reduction methods [10,11]. In these methods, polysulfides and thiols are capped by weak alkylating agents such as iodoacetamide, then treated with reductants including dithiothreitol (DTT) or 2-mercaptoethanol (2-ME). Thiols will not be reduced after alkylation, whereas polysulfides will be reduced by the reductants.
The levels of low-molecular-weight RSS, including CysSSH, GSSH, and CysSSSCys, have been detected in blood, heart, liver, brain, and lung [6]. Akaike et al. reported that CysSSH was bound to tRNA preferentially over CysSH [12]. They also showed that cysteine polysulfides (CysSSnH) account for about 70% of total cysteine in protein during translation [12]. A good proportion of CysSSnH remains, while the CysSSnH incorporated into protein is reduced by the thioredoxin (Trx)/Trx reductase (TrxR) system. These observations indicate that CysSSnH is a natural component of proteins. Consequently, measuring the whole amount of polysulfides is important for assessing redox balance.
On the other hand, little is known about the existence of polysulfide in secretory extracellular proteins. One of the reasons is that the oxidative environment of biological fluids converts the reduced form (CysSSnH) to the oxidized form (CysSSnCys) of polysulfides, which cannot be detected by the alkylation method. Serum albumin constitutes approximately 60% of the serum proteins and is therefore the most abundant protein in plasma. Many mammalian serum albumins have 35 cysteine residues, only one of which exists in the reduced form. Using the alkylation and reduction method, P. Nagy et al. demonstrated that serum albumin acquired the reduced form of polysulfide after being treated with sodium hydrogen sulfide [11].
In a previous study, we successfully developed a novel analytical method for quantifying the oxidized form of polysulfide, named the elimination method of sulfide from polysulfide (EMSP) [13]. This EMSP assay enables measurement of polysulfide in biological fluids such as plasma. The assay also revealed that human serum albumin (HSA) carries the oxidized form of polysulfide. Several other biological fluids contain proteins that are common to plasma [14][15][16], so it is possible that they also contain polysulfide, potentially showing a positive correlation with plasma polysulfide levels.
In this study, we examined the polysulfide content of biological fluids such as semen and nasal discharge in healthy subjects. By collecting various biological fluid samples from the same subjects, polysulfide levels in the biological fluids could be compared to that in plasma. We also examined the circadian rhythm of plasma polysulfide.
Determination of Polysulfide Level in Biological Fluids
We previously demonstrated that polysulfide in plasma exists mostly in HSA [13]. HSA is present not only in plasma but also in several biological fluids. In this study, polysulfide levels of plasma, tears, saliva, nasal discharge, and semen were measured.
These biological fluids were collected from 9 healthy subjects, including 5 males. Semen was collected from 4 of the 5 male subjects, because only four informed consents for semen collection were obtained. The subject characteristics are summarized in Table 1. The average age was 28.44 years, and the average body mass index (BMI) was 20.85. The polysulfide level in plasma, tears, saliva, nasal discharge, and semen was determined using the EMSP method. As previously reported, the polysulfide level was about 7.5 mM for plasma, about 1 mM for tears, and about 41 µM for saliva [13].
The polysulfide of nasal discharge and seminal fluid was quantitated for the first time and it was about 400 µM and 600 µM, respectively. The correlation between plasma polysulfide levels and these biological fluids was examined. Results showed that there was no correlation between plasma and tears, saliva, or nasal discharge (Figure 1a-c). Interestingly, only semen showed a positive correlation with plasma (Figure 1d). The protein content of each biological fluid was examined to shed light on the correlation results.
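A correlation analysis of this kind can be sketched as follows. The paired measurements below are hypothetical placeholders (the study used nine subjects, with semen from four), and the choice of Pearson correlation is an assumption, since the text does not state which coefficient was used for these comparisons.

```python
from scipy import stats

# Hypothetical paired polysulfide levels (arbitrary units) for illustration only.
plasma = [7100, 7600, 6900, 8200, 7400, 7800, 7200, 7500, 7300]
tears  = [ 950, 1100,  870, 1020,  990, 1050,  910, 1010,  980]
semen  = [ 560,  640,  530,  690]            # the four subjects with semen samples
plasma_semen_subset = [7100, 8200, 7400, 7800]

r_tears, p_tears = stats.pearsonr(plasma, tears)
r_semen, p_semen = stats.pearsonr(plasma_semen_subset, semen)
print(f"plasma vs tears: r = {r_tears:.2f} (p = {p_tears:.3f})")
print(f"plasma vs semen: r = {r_semen:.2f} (p = {p_semen:.3f})")
```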
Analysis of Protein Content of Biological Fluids
Because the component of tear fluid is similar to plasma, autologous plasma eye drops are widely used for the treatment of dry eyes [17]. However, a comparison of the high speed Triple TOF system analysis showed that the proportion of main proteins was different between plasma and tear fluids [18]. The most abundant proteins in tear fluids are lysozyme, lactoferrin, and lipocalin 1, whereas those of blood plasma are albumin and immunoglobulins. No correlation between the polysulfide in tear fluids and the plasma polysulfide level could be determined, due to the difference in protein compositions.
Saliva is an ideal specimen for clinical diagnosis because of its noninvasive sampling. Proteomic analysis using two-dimensional gel electrophoresis (2D gel) revealed that some salivary proteins are also present in plasma [15], indicating that these proteins may have transferred from plasma to saliva, while the other salivary proteins are produced locally by the salivary glands. Serum albumin is one of the abundant proteins in saliva besides salivary α-amylase, and another plasma protein, prolactin, is also found in saliva. These proteins are reported to derive from gingival crevicular fluid (GCF), whose protein composition is very similar to that of plasma [19]. Despite this similarity in protein content, there was no correlation between saliva and plasma (Figure 1b), which may be caused by the ordinary oxidative stress present in oral fluid [20].
Nasal discharge consists of interstitial fluid, plasma, mucus, and nasal secretions [16]. Polysulfide levels, as well as protein levels, in one of the healthy subjects were 3-10 times higher than in the others; however, the polysulfide/protein molar ratio in nasal discharge did not correlate with that of plasma. This result suggests that the polysulfide in nasal discharge may derive from sources other than plasma.
The positive correlation of polysulfide levels between semen and plasma suggests that the protein composition of seminal fluid may be similar to that of plasma. Human seminal fluid is secreted by the seminal vesicles, epididymis, prostate, and urethral glands, and accounts for 95% of total semen [21]. Previous studies demonstrated that HSA constitutes approximately 17.7% to 22.7% of the total protein in semen, compared with about 64% in plasma [14]. In contrast, immunoglobulins (alpha, beta, and gamma) account for a larger fraction of the protein content in semen [14,22].
In addition to protein content, differences in the redox environment among these fluids may also contribute to the polysulfide level in each biological fluid examined. The oral environment is exposed to ROS produced by oral bacteria [23], and the eye is exposed to ROS caused by contact lens wear [24,25] or inflammation [26]. Compared with these two environments, ROS levels in the seminal vesicles or testes are likely very low in healthy subjects. Therefore, polysulfide in semen may more faithfully reflect the oxidative status of plasma.
Effect of Age, Gender Difference, and BMI on Polysulfide Levels in Biological Fluids
Relationships among age, gender, BMI, and plasma polysulfide level were assessed. Although the difference was not statistically significant (p = 0.052), Figure 2a shows that plasma polysulfide tended to increase with age within the range investigated (22-43 years old). Aging is known as a risk factor for oxidative stress [27]; we therefore predicted that aging would decrease the amount of polysulfide, but the results indicated otherwise. This may be because the age range examined in this study was rather narrow. Previous reports showed that aging decreases the ratio of the antioxidant glutathione to glutathione disulfide (GSH/GSSG) between the ages of 40 and 90 [28], whereas below the age of 40 the ratio increases with age. Further studies are required to understand the overall relationship between aging and polysulfide levels. On the other hand, there was no association of plasma polysulfide with gender or BMI (Figure 2b,c).
Amylase activity was measured as previously described [29] (Figure 2d). Interestingly, amylase activity increased as the polysulfide level in saliva increased. Physical or psychosocial stress is known to increase amylase activity [30,31], and Kroll et al. reported that the level of hydrogen sulfide (H2S) in saliva increases with psychological stress [32]. Thus, polysulfide levels might be associated with salivary amylase activity. Intriguingly, oxidation of a cysteine residue in bacterial α-amylase is known to decrease its activity [33]. Further work is required to determine whether polysulfide controls the activity of α-amylase.
Relationship Between Sperm Activity and Polysulfide Levels in Semen
The polysulfide levels in semen showed a strong positive correlation with the amount of viable sperm measured by WST-8 (Figure 3a). Conversely, there was no correlation between polysulfide level and semen volume or age (Figure 3b,c). Semen was centrifuged to separate seminal fluid from sperm so that the polysulfide levels in each fraction could be determined; the results showed that most polysulfide was contained in the seminal fluid (Figure 3d). These data suggest that the association between seminal fluid polysulfide levels and sperm activity (Figure 3a) is most likely due to the redox activity of polysulfide. Several studies have reported that ROS damage sperm DNA and decrease sperm motility [34], while the presence of cysteine or glutathione improved motility by suppressing ROS [35]. Hydrogen sulfide (H2S) has also been reported to protect sperm from oxidative stress [36]. A study of healthy volunteers showed that the ROS level in semen did not change with age, whereas ROS in infertile men over 40 years of age was significantly higher than in men under 40 [37]. Further study of the effect of semen polysulfide on these age-related ROS levels should lead to the development of effective diagnostic tools for infertility.
The Circadian Rhythm of Polysulfide Level in Plasma
Plasma hydrogen sulfide (H2S) concentration has been reported to exhibit diurnal fluctuations [38]. To investigate whether the timing of sampling affects the polysulfide level, we examined the circadian rhythm of plasma polysulfide levels; this is the first report to do so. The plasma polysulfide level measured by EMSP tended to increase slightly from 12:30 to 21:30 and to decrease again toward noon, but the differences were not statistically significant (Figure 4a). In addition, we measured the polysulfide level in plasma using SSP4, a fluorescent probe for polysulfide. Figure 4b shows that the mean fluorescence intensity (MFI) increased significantly until 15:30 and fell by 3:30 at night; the polysulfide level at 15:30 was significantly higher than at 0:30 and 3:30. Next, the antioxidant activity of plasma was evaluated using the AAPH radical elimination method. The antioxidant activity increased around 15:30 (Figure 4c), whereas the plasma thiol level fell slightly around midnight (Figure 4d). The eliminated radical level at 15:30 was significantly higher than at 9:30 (Figure 4c).
The correlations among these parameters are shown in Figure 4e. The fluorescence intensity of SSP4 and the AAPH radical scavenging activity showed a positive correlation. Polysulfide measured by EMSP increased from 12:30, reaching a maximum at 21:30, whereas the SSP4 intensity decreased from 15:30 to 3:30 and then increased between 12:30 and 15:30. This discrepancy between the EMSP and SSP4 results might be due to the reactivity of each reagent: SSP4 likely reacts only with cysteine residues on the protein surface because of steric hindrance, whereas EMSP can react with polysulfide at all positions in a protein. Consistent with this, the AAPH radical elimination activity may follow a rhythm similar to that of polysulfide measured by SSP4, because surface-exposed polysulfide may scavenge ROS more readily than intramolecular polysulfide (Figure 4b-e). H2S has been reported to bind to HSA rapidly; however, H2S levels did not appear to affect the plasma polysulfide level in this study. A previous report showed that the plasma H2S level of mice at 7:00 is lower than at 19:00, owing to 3-mercaptopyruvate sulfurtransferase activity [37]. Because mice are nocturnal, plasma H2S levels in humans are predicted to be higher in the morning than in the evening.
Sample Collection
Plasma was collected by pricking the fingertips (second to fourth fingers) using OneTouch®. Plasma, saliva, and tear fluid samples were collected in the morning of the sampling day, between 8:30 am and 11:30 am. Saliva was obtained with a cotton swab placed on the hypoglottis for 1 min after brushing the teeth for 3 min without toothpaste. The cotton was then placed into a Salivette® tube and centrifuged at 2000× g for 5 min, and the saliva at the bottom of the tube was collected and used for experiments. Nasal discharge was blown into Kimwipes® and centrifuged at 2000× g for 5 min in an empty Salivette® tube. Semen was collected into a 50 mL Falcon® tube and incubated at room temperature until liquefaction (about 30 min to 1 h), after which antimicrobial agent was added to the seminal fluid at 1%. Studies involving human fluid collection were approved by the Ethics Review Committee for Human Experimentation of our institution (Tokushima University, TU, Tokushima, Japan), and informed consent was obtained from all subjects (TU-No. 3351).
Measuring Polysulfide by EMSP
A 3× EMSP solution was prepared by mixing ascorbic acid (792.54 mg) with 1.5 mL of water and 3 mL of 5 N KOH. Samples were diluted in water, 3× EMSP solution was added, and the mixtures were incubated at 37 °C for 4 h. After the reaction, samples were mixed with 600 µL of 1% sodium acetate and centrifuged at 2300× g for 5 min to recover the released sulfide as a precipitate. Supernatants were removed gently and the pellets were washed three times with 1 mL of water to completely remove peptides and proteins remaining in the supernatant. After the final wash, 500 µL of water was added and the samples were vortexed; protein contamination was checked using a protein determination assay. Fifty microliters of 20 mM DPDA in 1.2 N HCl and 50 µL of 30 mM FeCl3 in 7.2 N HCl were added and the solution was vortexed well. The samples were centrifuged at 2300× g for 5 min, 200 µL of each solution was transferred into a 96-well plate, and absorbance was measured at 665 nm. A standard curve was constructed using Na2S (15.6 to 250 µM).
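The conversion of A665 readings to polysulfide concentrations via the Na2S standard curve can be sketched in R as follows; the absorbance values and the dilution factor are illustrative assumptions, not the calibration data of this study.

# Hypothetical Na2S standard curve (15.6-250 µM) and sample absorbances at 665 nm
std_conc <- c(15.6, 31.3, 62.5, 125, 250)     # µM Na2S
std_abs  <- c(0.05, 0.10, 0.21, 0.40, 0.82)   # A665, illustrative values

fit <- lm(std_abs ~ std_conc)                 # linear standard curve

# Convert sample A665 to released sulfide, then correct for the assumed dilution
sample_abs <- c(0.33, 0.47)
slope <- coef(fit)["std_conc"]; intercept <- coef(fit)["(Intercept)"]
sample_conc <- (sample_abs - intercept) / slope   # µM sulfide in the assay
dilution <- 10                                    # assumed dilution factor, placeholder
sample_conc * dilution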
Measuring Activities of Sperm in Semen
Twenty microliters of semen were diluted in 160 µL of 67 mM sodium phosphate buffer (pH 8.0) and mixed with 20 µL of WST-8. After a 1 h incubation at 37 °C, samples were centrifuged at 10,000× g for 5 min, and absorbance at 450 nm was measured in a 96-well plate.
Determination of Thiol Contents in Plasma
Twenty microliters of plasma were mixed with 100 µL of 5 mM DTNB in 100 mM potassium phosphate buffer containing 1 mM DTPA (pH 7.0). After incubation for 60 min at room temperature, absorbance at 412 nm was measured with a plate reader (BioTek, Winooski, VT, USA). GSH (31.3 to 1000 µM) was used to construct a standard curve.
Measuring Anti-Oxidative Activity Against AAPH Radical
Anti-oxidative activity was analyzed by the AAPH radical method as previously described [39]. A 16 mM linoleic acid solution was prepared by mixing 5 mL of borate buffer (50 mM, pH 9.0), 250 µL of linoleic acid, 1 mL of sodium hydroxide, and 250 µL of Tween 20, and diluting to 50 mL with borate buffer (50 mM, pH 9.0) in a measuring cylinder. AAPH was dissolved in cold water on ice. Twenty microliters of plasma was mixed with 920 µL of phosphate buffered saline (PBS) preheated to 37 °C, 10 µL of the linoleic acid solution (16 mM) was added, and then 50 µL of AAPH solution (50 mM) was added and the mixture was incubated for 1 h at 37 °C. After the reaction, the sample solution was dispensed into a 96-well UV plate and absorbance was read at 234 nm. Radical elimination activity was calculated as follows: Eliminated radical (%) = (A234 of sample with AAPH − A234 of sample without AAPH) × 100 / (A234 of PBS with AAPH − A234 of PBS without AAPH)
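The radical elimination formula above can be expressed as a small R helper; the absorbance values in the example call are arbitrary placeholders.

# Eliminated radical (%) as defined above; all inputs are A234 readings
eliminated_radical <- function(sample_aaph, sample_blank, pbs_aaph, pbs_blank) {
  (sample_aaph - sample_blank) * 100 / (pbs_aaph - pbs_blank)
}

# Example with illustrative absorbances
eliminated_radical(sample_aaph = 0.45, sample_blank = 0.12,
                   pbs_aaph = 0.80, pbs_blank = 0.10)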
Detection of Sulfane Sulfur by a Fluorescence Probe
Sulfane sulfur was detected using the fluorescent probe SSP4 according to a previous report [40]. Plasma was diluted in 1 mL of 1 mM CTAB/PBS, 2 mL of 1 mM SSP4 in DMSO was added, and the mixture was incubated for 10 min at room temperature. Fluorescence intensity was measured at ex/em = 457 nm/514 nm.
Statistical Analysis
The statistical significance of the collected data was evaluated using one-way ANOVA followed by the Newman-Keuls post hoc method when more than two means were compared. Differences between two groups were evaluated by Student's t test. p < 0.05 was regarded as statistically significant.
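A minimal R sketch of this workflow is shown below, using simulated values rather than the study data; a Newman-Keuls post hoc comparison, available in add-on packages, would follow the ANOVA.

# Hypothetical measurements in three groups; not the study data
values <- c(7.1, 7.4, 6.9, 8.0, 8.3, 7.8, 6.5, 6.8, 6.2)
group  <- factor(rep(c("morning", "afternoon", "night"), each = 3))

summary(aov(values ~ group))          # one-way ANOVA across more than two means

t.test(values[group == "morning"],    # Student's t test between two groups
       values[group == "afternoon"], var.equal = TRUE)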
Conclusions
We succeeded in detecting polysulfide in various biological fluids, including semen and nasal discharge, for the first time. Polysulfide levels in the different fluids were not correlated with one another, with the exception of plasma and semen. These results suggest that polysulfide in each type of biological fluid exists in a distinct environment with its own protein composition. Therefore, the optimal fluid should be selected for monitoring redox balance via polysulfide measurement. Furthermore, the effect of circadian rhythm on plasma polysulfide levels warrants further investigation.
Conflicts of Interest:
The authors declare no conflict of interest.
Global patterns and drivers of alpine plant species richness
Aim: Alpine ecosystems differ in area, macroenvironment and biogeographical history across the Earth, but the relationship between these factors and plant species richness is still unexplored. Here, we assess the global patterns of plant species richness in alpine ecosystems and their association with environmental, geographical and historical factors at regional and community scales. Location: Global. Time period: Data collected between 1923 and 2019. Major taxa studied: Vascular plants. Methods: We used a dataset representative of global alpine vegetation, consisting of 8,928 plots sampled within 26 ecoregions and six biogeographical realms, to estimate regional richness using sample-based rarefaction and extrapolation. Then, we evaluated latitudinal patterns of regional and community richness with generalized additive models. Using environmental, geographical and historical predictors from global raster layers, we modelled regional and community richness in a mixed-effect modelling framework. Results: The latitudinal pattern of regional richness peaked around the equator and at mid-latitudes, in response to current and past alpine area, isolation and the variation in soil pH among regions. At the community level, species richness peaked at mid-latitudes of the Northern Hemisphere, despite a considerable within-region variation. Community richness was related to macroclimate and historical predictors, with strong effects of other spatially structured factors. Main conclusions: In contrast to the well-known latitudinal diversity gradient, the alpine plant species richness of some temperate regions in Eurasia was comparable to that of hyperdiverse tropical ecosystems, such as the páramo. The species richness of these putative hotspot regions is explained mainly by the extent of alpine area and their glacial history, whereas community richness depends on local environmental factors. Our results highlight hotspots of species richness at mid-latitudes, indicating that the diversity of alpine plants is linked to regional idiosyncrasies and to the historical prevalence of alpine ecosystems, rather than current macroclimatic gradients.
| INTRODUCTION
More than 200 years after the attempt by Alexander von Humboldt to formulate a unified theory of the natural world, understanding the global patterns of diversity remains one of the greatest challenges in biogeography and macroecology (Brummitt et al., 2020; Keil & Chase, 2019; Kier et al., 2005; Kreft & Jetz, 2007; Kreft et al., 2008; Weigelt et al., 2016). In particular, mountains have been revealed as centres of biodiversity, with a disproportionately high species richness in comparison to their corresponding lowland regions (Antonelli et al., 2018; Muellner-Riehl et al., 2019; Rahbek, Borregaard, Antonelli, et al., 2019). Along the elevational gradient of mountains, the compression of life zones brings different biomes into proximity, with the alpine belt representing the outpost for plant life above the climatic tree line. Alpine ecosystems, governed by low-temperature regimes, cover c. 3% of land outside Antarctica and are distributed across all continents and latitudes (Testolin et al., 2020). Despite ongoing efforts to monitor changes in the biota of mountain summits in the face of climate change (Pauli et al., 2012; Steinbauer et al., 2018), we still lack a picture of the global patterns of plant diversity in alpine habitats, let alone an understanding of its major drivers.
According to the general latitudinal diversity gradient, biodiversity is expected to peak at the equator (Hillebrand, 2004).
Among the possible explanations for this pattern (Lomolino et al., 2017), latitude is normally interpreted as a proxy for climatic conditions and available metabolic energy, which might have an effect on speciation rates (Wang et al., 2009). Whether this general rule also applies to alpine ecosystems, however, is still a matter of debate. Regardless of their latitude, alpine ecosystems are determined by low-temperature conditions, hence low energy input. Therefore, lowland and alpine thermal conditions from polar to equatorial latitudes are increasingly decoupled from one another (Testolin et al., 2020). Besides having a lower energy input compared with the lowlands, alpine ecosystems are also highly heterogeneous in their topoclimates (Quinn, 2008), which might weaken the correlation between latitude and primary productivity (Testolin et al., 2020). For these reasons, plant diversity in alpine areas might decouple from major climatic gradients.
Alpine areas are also isolated from each other, forming fragmented systems of "sky islands" surrounded by lowland environments that limit species dispersal (McCormack et al., 2009).
Following the ecological principle of the species-area relationship (Lomolino, 2000b) and its application to the theory of island biogeography (MacArthur & Wilson, 1967), the extent of alpine habitats and their isolation could have affected rates of colonization, speciation and extinction of plants (Heaney, 2000;Steinbauer et al., 2016).
These processes might have resulted in biodiversity patterns linked to the historical and current abundance of alpine habitats at the global scale. Although it has been reported widely that the biogeographical history of mountains has shaped diversity patterns of cold-adapted plant species in alpine regions (Harris, 2007; McGlone et al., 2001; Sklenář et al., 2014), a major unresolved question is the extent to which the interplay of ecological drivers and historical contingencies dictates the patterns of alpine plant diversity at the global level (Nagy & Grabherr, 2009). The significance of these drivers might shift from global to local spatial scales and can reveal new patterns and relationships that are not evident at regional scales, at which alpine plant diversity patterns have been studied so far (Jiménez-Alfaro et al., 2014; Lenoir et al., 2010; Moser et al., 2005; Vonlanthen et al., 2006).
Here, we compiled a dataset of 8,928 vegetation plots with 5,325 vascular plant species sampled by botanical experts in alpine ecosystems over the past 100 years and representative of global alpine vegetation. By analysing the data at both regional and community levels, we investigate: (a) the global latitudinal patterns of alpine plant species richness; and (b) the relative influence of environmental, geographical and historical factors in driving such patterns. We also evaluate how those patterns and drivers change between regional and community levels and how they relate to hotspots of alpine plant diversity recognized at the global scale.
| Study system and data collection
We considered as zonal alpine vegetation any plant community dominated by graminoids, forbs and dwarf shrubs above the climatic tree line (Körner, 2003). In addition to strictly zonal habitats, snow-patch vascular plant communities and vegetation on rocks and screes are also found ubiquitously in the alpine belt and were included in our study. We did not consider vegetation from polar climates owing to the absence of elevational tree lines and their distinct environments (Quinn, 2008; Walter & Box, 1976). Therefore, the alpine vegetation included in the present study corresponds to the "mid-latitude alpine tundra" and the "tropical alpine biome" groups as defined by Quinn (Supporting Information Table S1). Datasets from different sources were standardized by identifying a minimum common set of plot attributes, including size, elevation and geographical coordinates. When the geographical coordinates were missing for small, clearly delimited areas, we estimated plot locations from maps (i.e., Mount Jaya; Hope et al., 1976) or by randomly assigning the coordinates of raster cells with the same elevation (±10 m) as the plots in that area (i.e., Mount Wilhelm and Drakensberg; Brand et al., 2015; Wade & McVean, 1969), using the SRTM-3 digital elevation model at 30 m resolution (Farr et al., 2007; NASA & JPL, 2013). Species cover values with discrete scales were transformed to the mean value of the corresponding percentage interval. Species names were harmonized using the Taxonomic Name Resolution Service (Boyle et al., 2013) online tool (https://tnrs.biendata.org/) with default settings, updating the names to the most recent nomenclature and merging subspecies and varieties to the species level by summing their respective cover values.
The initial dataset, consisting of 10,408 plots, was filtered further by removing plots with tree species or incomplete taxonomic identification. When taxa identified to the genus level or higher taxonomic rank represented ≥ 10% of the plot vegetation cover, the corresponding plot was discarded; otherwise, we removed those taxa from the plot record (3,086 plots from which at least one taxon was removed; median number of taxa removed = 1). Each plot was then assigned to a region based on its location. Regions were defined based on the approximate extent of ecoregions (Olson et al., 2001), which represent an ecologically meaningful framework for identifying distinct geographical units at the global scale. Given that the names of some ecoregions did not reflect the presence of an alpine vegetation belt, we renamed these regions after the main mountain ranges where the plots were located, following Körner et al. (2017) (Supporting Information Table S2). For the analyses, we retained only regions with ≥ 60 plots and removed extremely small or large plots (<.25 or >400 m2). To filter out compositional outliers, we performed a detrended correspondence analysis (DCA) on each regional dataset, excluding those plots whose score on the first axis (DCA1) was larger or smaller than 10 times the width of the interquartile range from the median. After removing the outliers, the gradient length of DCA1 ranged from 3.6 to 9.9 standard deviation units of species turnover within different regions (Supporting Information Table S3), indicating different, yet high, degrees of regional beta diversity. Finally, to assess the representativeness of our dataset, we compared the climatic space of the plots against the climatic envelope of global alpine areas (Testolin et al., 2020; Supporting Information Figure S1).
| Diversity measures
Given that the number of samples differed considerably among regions, we estimated regional species richness using sample-based rarefaction and extrapolation (Chao et al., 2014) with the software R (R Core Team, 2020) and the package iNEXT (Hsieh et al., 2016). This technique allows a statistically sound comparison of diversity across groups with different sample sizes through the construction of sampling curves for species richness, which can be rarefied (interpolated) to smaller sample sizes or extrapolated (predicted) to larger sample sizes (Chao et al., 2014; Hsieh et al., 2016). Here, we estimated the regional richness for a unique sample size of 180 plots, corresponding to approximately three times the smallest regional sample (Figure 1b,c). We chose 180 plots as a trade-off between the loss of data in intensively sampled regions versus the inclusion of all regions in the analyses. As such, these estimates should not be interpreted as representing the total regional species pools, but rather as comparable estimates of regional richness. Given that our global dataset comprised plots of different sizes, we evaluated the effect of plot size on the species richness estimates by comparing the same estimates using three subsets of different plot sizes (small, <10 m2; medium, ≥ 10 and <100 m2; and large, ≥ 100 m2). For those regions where ≥ 60 plots of each of the three different sizes were available (Alborz Mountains, Central and Eastern Alps, Colombian and Ecuadorian Andes, Eastern African Mountains, South Central Rocky Mountains and Western Carpathians), we compared richness estimates obtained from the different subsets. We found that, regardless of the subset used, the relative differences among regions were largely preserved, especially for those datasets comprising large numbers of plots (e.g., Central and Eastern Alps and Western Carpathians). An exception was the region of the Colombian and Ecuadorian Andes, where regional richness estimates were highly dependent on plot size (Supporting Information Figure S2). However, large and small plots both resulted in lower species richness estimates compared with medium-sized plots, suggesting that the differences are driven by different vegetation types being sampled with differently sized plots.
FIGURE 1 Overview of the alpine vegetation dataset and regional species richness. (a) Spatial distribution of alpine vegetation plots. (b) Number of plots collected in this study (N) and estimated species richness (S est) for a comparable number of 180 plots in 26 alpine regions and six biogeographical realms. (c) Rarefaction curves of species richness for each region; dashed lines indicate extrapolated values beyond the available number of plots, continuous lines indicate that regional estimates were interpolated from larger sample sizes, and shaded areas represent the 95% bootstrap confidence intervals.
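As a hedged sketch of this estimation step, the following R code rarefies/extrapolates the richness of one hypothetical region to a common size of 180 plots with the iNEXT package; the incidence-frequency vector is invented for illustration and is not one of the regional datasets.

library(iNEXT)

# Hypothetical incidence-frequency input for one region: the first element is the
# number of plots (sampling units), followed by each species' incidence frequency.
region_x <- c(240, 120, 98, 75, 60, 44, 30, 22, 15, 9, 5, 3, 2, 1, 1)

# Diversity rarefied/extrapolated to a common size of 180 plots; the rows with
# diversity order 0 correspond to species richness.
est <- estimateD(list(regionA = region_x), datatype = "incidence_freq",
                 base = "size", level = 180)
est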
For each plot, we calculated community richness as the total number of species. We evaluated latitudinal patterns of regional and community richness using generalized additive models (GAMs), with a smoothing term for latitude. Our alpine regions were characterized by very different extents, and plot size varied widely. To account for different regional extents and plot sizes in the evaluation of species richness patterns, we also fitted GAMs on the residuals from ordinary least squares regressions of ln(regional richness) as a function of ln(current local alpine area) and of ln(community richness) as a function of ln(plot size). The procedure for calculating local alpine area is described in the "Geographical and historical predictors" section below.
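A minimal R illustration of these two GAM fits, using the mgcv package and an invented regional table, is given below; the values, the reduced basis dimension (k = 5) and the column names are assumptions made only for the sake of a runnable example.

library(mgcv)

# Hypothetical regional table (values are illustrative, not the study estimates)
regions <- data.frame(
  S_est    = c(543, 489, 402, 350, 310, 250, 220, 180, 150, 120),
  latitude = c(0, 5, 35, 40, 46, -30, 60, 68, -42, 20),
  area_km2 = c(12000, 9000, 25000, 18000, 7000, 1500, 3000, 4000, 800, 2000)
)

# Latitudinal trend of regional richness; k is kept small for this tiny example
gam_lat <- gam(S_est ~ s(latitude, k = 5), data = regions)

# Trend after accounting for regional extent: GAM fitted to the residuals of
# ln(regional richness) ~ ln(current local alpine area)
regions$resid_S <- resid(lm(log(S_est) ~ log(area_km2), data = regions))
gam_res <- gam(resid_S ~ s(latitude, k = 5), data = regions)
summary(gam_lat)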
| Environmental predictors
To analyse the drivers of species richness, we retrieved a set of environmental variables linked to plant diversity in the alpine belt from online sources. We calculated several climatic predictors at the plot level using digital sources at c. 1 km resolution. We used data from CHELSA (Karger et al., 2017) within the time frame of the growing season, defined as days with mean temperature > .9°C (Paulsen & Körner, 2014). Given that daily temperature data were not available, we estimated the growing season using monthly averages, including the months with a mean temperature > .9°C. Although this might have resulted in a sharper delimitation of season lengths, it probably had little effect on our global analyses. We included the mean temperature, precipitation, growing degree days and mean potential evapotranspiration of the growing season, all of which have been reported to have positive effects on photosynthetic activity and species richness in alpine areas (Körner, 2003;Moser et al., 2005;Nagy & Grabherr, 2009). Growing degree days (i.e., the sum of monthly temperatures > .9°C multiplied by the total number of days in such months) were calculated using the "growingDegDays" function of the R package envirem (Title & Bemmels, 2018). The mean potential evapotranspiration of the growing season was estimated with the "hargreaves" function of the R package SPEI (Beguería & Vicente-Serrano, 2017), using maximum and minimum monthly values of temperature and monthly precipitation. The monthly values of potential evapotranspiration obtained were then averaged across months with a mean temperature > .9°C.
Together with climate, soil pH is known to be a significant driver of species richness in the alpine belt (Vonlanthen et al., 2006) and is a good surrogate for the dominant bedrock, effectively differentiating calcareous and siliceous substrates (Lenoir et al., 2010). We derived estimates of soil pH from the SoilGrids database, averaging the values estimated at 5 and 15 cm depths (Hengl et al., 2017). When the pH value was missing for a plot location (45 plots), we assigned the value of the closest pixel to the plot. Despite the limitations posed by the use of global datasets to estimate fine-scale soil properties (Hengl et al., 2017), the obtained values covered a wide span of soil pH variation in alpine environments (4.40-8.35) and were, therefore, useful to distinguish dominant bedrock types.
Regional values of the predictors computed at the plot level were then estimated as the average of all vegetation plots within a region. For climatic predictors and soil pH, we also calculated the standard deviation of the predictor to test for the effect of environmental heterogeneity.
| Geographical and historical predictors
In addition to environmental variables, large-scale geographical factors, such as area and isolation, are known to influence the current diversity of island-like ecosystems (MacArthur & Wilson, 1967; Whittaker et al., 2008, 2017), as do their historical changes caused by climatic fluctuations (Weigelt et al., 2016). We delimited alpine area as the portion of land with a mean temperature of the growing season between 3.5 and 6.4°C or with a length of the growing season between 1 and 3 months (Paulsen & Körner, 2014). We did this both for current climatic conditions and considering climate during the Last Glacial Maximum (LGM) (Supporting Information Figure S3).
Alpine areas were calculated at two scales, reflecting the spatial extents of each regional sample (local area) and the total continuous alpine area extending beyond the samples (total area). We defined the local area as the extent of the alpine area contained within the convex hull formed by all plots of each region. In some cases, the coarse resolution of the climatic datasets used to estimate the alpine areas failed to detect any alpine patch within the hulls. Therefore, we applied a 5 km buffer around each hull to include at least some alpine area patches for all regions. The total alpine area for each region was estimated as the continuous extent of all alpine patches intersected by the hulls, reflecting the total extent of alpine habitats available to species dispersal (Supporting Information Figure S4). Calculating alpine areas at two scales also allowed us to estimate the completeness of the regional samples by calculating the percentage of the local alpine area encompassed by the samples over the total alpine area (Supporting Information Table S3). Given that the local and total ln-transformed areas were highly correlated (Pearson's r > .8), only the former was retained in the subsequent analyses.
In addition, we estimated the current and LGM isolation as the minimum distance from the centroid of each alpine region to the boundary of the nearest alpine area ≥ 1,000 km2. We set a threshold of 1,000 km2 to exclude smaller alpine patches that could still be part of the same alpine region, that is, islands of the same archipelago. If an alpine region had a total area ≥ 1,000 km2, isolation was set to zero. Current and LGM alpine areas and isolation were ln-transformed.
Given that past climatic changes could have affected current diversity patterns (Graham et al., 2014), we also calculated the velocity of climate change since the LGM as a measure of regional climatic instability (Loarie et al., 2009; Sandel et al., 2011), using the "gVoCC" function of the VoCC package (Molinos et al., 2019) with current and LGM mean annual temperatures. The latter was calculated as the average of the two PMIP3 climatic datasets derived using the CCSM4 and MIROC-ESM climate models (Sandel et al., 2011; Weigelt et al., 2013, 2016). Finally, we included biogeographical realms (Keil & Chase, 2019) as a proxy for differences in evolutionary history. Owing to the lack of regional data, we did not account for differences in the geological history of mountains. However, we acknowledge that this could influence speciation and partly explain species richness (Whittaker et al., 2008).
| Statistical analyses
We tested the influence of environment, geography and history on estimated regional richness by fitting individual Poisson generalized linear mixed-effects models (GLMMs) to each predictor with the "glmer" function of the R package lme4 (Bates et al., 2015). Initially, we tested univariate relationships to select a set of significant variables to be used in subsequent multivariate modelling. To account for uncertainties in regional richness estimates, we weighted the observations by the inverse of their 95% confidence interval width. We also scaled the predictors by subtracting their mean and dividing by their standard deviation across the regions to ensure model convergence.
To control for overdispersion and reduce the risk of type I errors, we added an observation-level random effect (OLRE) to the models, that is, a unique level of a random effect for each data point that models the extra-Poisson variation present in the data (Harrison, 2014). The ratios between the sum of squared Pearson residuals and the residual degrees of freedom of the fitted models with OLRE indicated no additional overdispersion. Next, we analysed the correlations among significant predictors with the Pearson correlation coefficient. We found that some of our regional variables were strongly correlated with one another (Supporting Information Figure S5), limiting our ability to distinguish partial contributions. However, we built alternative multivariate models by retaining only the significant, uncorrelated predictors. Finally, we checked for the presence of spatial autocorrelation in model residuals with the Moran's I test implemented in the "testSpatialAutocorrelation" function of the DHARMa package (Hartig, 2020) and found none (Supporting Information Table S4). We also fitted a null (intercept-only) model to compare the goodness of the fits. Models were compared using the Akaike Information Criterion corrected for small sample sizes (AICc) in addition to marginal and conditional R2 (mR2 and cR2), calculated with the "r.squaredGLMM" function of the MuMIn package (Barton, 2019). Given that the only random effect in the models was an OLRE, mR2 = cR2.
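The single-predictor GLMMs with an observation-level random effect and confidence-interval weights can be sketched in R as follows; the regional values, the predictor and the CI widths are hypothetical stand-ins for the real estimates.

library(lme4)
library(MuMIn)

# Hypothetical regional data: estimated richness, one already-scaled predictor, and
# the width of the 95% confidence interval of each richness estimate (inverse weights)
d <- data.frame(
  S_est    = c(543, 489, 402, 350, 310, 250, 220, 180, 150, 120),
  lgm_area = c(1.4, 1.1, 1.8, 1.2, 0.3, -0.9, -0.4, 0.1, -1.6, -1.0),
  ci_width = c(60, 55, 48, 40, 35, 30, 28, 25, 22, 20)
)
d$obs <- factor(seq_len(nrow(d)))   # observation-level random effect (OLRE)

m1 <- glmer(S_est ~ lgm_area + (1 | obs), data = d, family = poisson,
            weights = 1 / ci_width)
m0 <- glmer(S_est ~ 1 + (1 | obs), data = d, family = poisson,
            weights = 1 / ci_width)  # null (intercept-only) model
AICc(m1, m0)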
We modelled community richness by fitting Poisson GLMMs including the environmental, geographical and historical predictors. Growing degree days and precipitation of the growing season were highly correlated with temperature and evapotranspiration of the growing season, respectively (Pearson's r > .6). Likewise, area and isolation-related variables were highly correlated with one another. Thus, to avoid multicollinearity issues, we retained temperature and evapotranspiration of the growing season, in addition to current and LGM local areas, and excluded the other variables. We also accounted for different plot sizes by adding their ln-transformed values to the model and controlled for overdispersion by adding an OLRE. Given that the plot-level predictors were derived from digital datasets at 1 km resolution, we randomly selected one plot for each .01° cell (c. 1 km). We repeated the process 999 times and obtained as many random subsets of 2,534 plots, that is, one plot for each .01° cell. Before modelling, all predictor variables were scaled to ensure model convergence. We then fitted the GLMMs to each of the 999 subsets. Given that the Moran's I tests highlighted strong spatial autocorrelation of the models' residuals, we re-fitted the models including random intercepts for .05 (≈5 km) and .1 (≈10 km) degree cells, which largely resolved the issue (Moran's I ≈ 0; p > .05 for 75% of model fits). We also tested for regional effects by fitting another model to the 999 subsets, with an additional random intercept for regions. Finally, we averaged the fixed effect coefficients of the resulting models using weights based on their AICc with the "model.avg" function of the MuMIn package. The two resulting averaged models (with and without the random intercept for region) were compared using mean AICc, mR2 and cR2, obtained by calculating the weighted average of the respective indices for the 999 fits.
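A simplified, hedged sketch of this community-level workflow (spatial thinning to one plot per .01° cell, followed by a Poisson GLMM with random intercepts for ~5 and ~10 km cells and an OLRE) is given below; the data are simulated and the model is fitted once rather than to 999 subsets.

library(lme4)

set.seed(1)
# Simulated plot-level data standing in for the real dataset; predictors are
# already scaled, and plot size is ln-transformed
n <- 500
plots <- data.frame(
  lon = runif(n, 5, 15), lat = runif(n, 44, 48),
  richness = rpois(n, 20),
  pet_gs = rnorm(n), soil_ph = rnorm(n), ln_plot_size = rnorm(n)
)

# Spatial thinning: keep one plot per .01 degree cell
plots$cell_001 <- paste(floor(plots$lon / 0.01), floor(plots$lat / 0.01))
thinned <- plots[!duplicated(plots$cell_001), ]

# Random intercepts for ~5 km and ~10 km cells plus an OLRE for overdispersion
thinned$cell_005 <- paste(floor(thinned$lon / 0.05), floor(thinned$lat / 0.05))
thinned$cell_01  <- paste(floor(thinned$lon / 0.1),  floor(thinned$lat / 0.1))
thinned$obs <- factor(seq_len(nrow(thinned)))

m <- glmer(richness ~ pet_gs + soil_ph + ln_plot_size +
             (1 | cell_01) + (1 | cell_005) + (1 | obs),
           data = thinned, family = poisson)
summary(m)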
| RESULTS
According to sample-based rarefaction and extrapolation of regional richness (estimated for 180 plots), the richest alpine regions in this study were the Colombian and Ecuadorian Andes (Neotropics; 543 (Figure 3a). The same patterns emerged even when different regional extents and plot sizes were accounted for, with an additional peak of regional richness at (Supporting Information Figure S6a,b).
FIGURE 2 Latitudinal patterns and drivers of estimated regional species richness. (a) Regional plant species richness estimated for 180 plots (S est); the scatterplot on the right represents the latitudinal trend, the three horizontal grey lines on the map and the scatterplot represent the equator and the tropics, and the black line represents a generalized additive model (GAM) fit. (b) Single-predictor models of regional species richness; the dots represent the regional plant species richness estimated for 180 plots (S est), the error bars represent the 95% bootstrap confidence intervals of the richness estimates, black lines represent the individual Poisson generalized linear mixed-effects model (GLMM) fits, and the grey bands are the 95% bootstrap confidence intervals. Marginal R2 (mR2) and model significance are reported; significance codes: <.001 (***); <.01 (**); <.05 (*). The numbers of the main coldspot and hotspot regions are reported.
At the community scale, species richness was positively related to the evapotranspiration of the growing season (β = .13 ± .05; p < .001), the velocity of climate change (β = .10 ± .05; p < .001) and the LGM alpine area (β = .19 ± .05; p < .001), whereas it was negatively related to soil pH (β = −.12 ± .05; p < .001). Nearctic plots were generally poorer in species than plots in other realms (β = −.44 ± .22; p < .001; Figure 3b; Supporting Information Table S5). Overall, the fixed effects explained 22% of the variance, and the inclusion of the random effects controlling for the spatial aggregation of plots at 5 and 10 km increased the explained variance to 58%. The inclusion of regions as an additional random effect increased the total explained variance further to 65% and left as significant fixed effects the mean temperature of the growing season (β = .04 ± .03; p < .05) and soil pH (Supporting Information Table S5).
| Regional patterns and drivers
Our results, based on a representative sample of global alpine vegetation, showed a latitudinal pattern of plant species richness, with peaks at mid-latitudes and around the equator. The highest estimate of regional richness was detected in the Colombian and Ecuadorian Andes (Neotropics). This region is home to the páramo ecosystem, a centre of plant diversity within the tropical biodiversity hotspot known to host the richest alpine flora in the world (Madriñán et al., 2013; Myers et al., 2000). At higher latitudes, we also found that the Pamir and Altai Mountains (Eastern Palaearctic) exhibited regional richness comparable to the páramos, representing hotspots of alpine plant diversity outside the tropics. This is consistent with previous studies that highlighted the high plant diversity of the Altai and Pamir, in addition to other Central Asian mountain systems (Agakhanjanz & Breckle, 1995; Brummitt et al., 2020; Kier et al., 2005; Körner, 1995; Xing & Ree, 2017). Nevertheless, these putative mid-latitude alpine hotspots are generally excluded from global centres of biodiversity, despite their importance as refugia for cold-adapted plant species. When accounting for the extent of the local alpine area, the Drakensberg (Afrotropics) and the Australian Alps (Australasia) emerged as centres of alpine plant richness of the Southern Hemisphere. Indeed, the high-elevation plateau of the Drakensberg has been widely recognized as a continental hotspot of botanical diversity (Brand et al., 2019; Carbutt, 2019), and the Australian Alps have been listed among the main national areas of plant species richness (Bell et al., 2018; Crisp et al., 2001). Other Africa (Anthelme & Dangles, 2012) and is an active volcano, with eruptions that limit development of vegetation (Nagy & Grabherr, 2009).
Moreover, the Northern Scandes were completely glaciated during the Pleistocene glacial maxima and located far from the Southern European glacial refugia further south (Lenoir et al., 2010).
Our models showed that the regional richness of alpine ecosystems is mostly independent of macroclimatic gradients. An analogous decoupling pattern has also been reported for the global diversity of grasses as a response to biogeographical history and the adaptation of certain lineages to cold and arid environments (Visser et al., 2014). Indeed, global alpine areas are climatically constrained toward low-temperature conditions (Körner, 2003;Nagy & Grabherr, 2009;Paulsen & Körner, 2014;Testolin et al., 2020).
Thus, although alpine plants respond to changes in temperature and light because of topography, large-scale richness patterns of alpine vegetation seem to be largely independent of energy gradients that determine species diversity at lower elevations (Hillebrand, 2004).
Moreover, global alpine areas are subjected to different amounts of precipitation and are differentiated along a gradient of humidity (Körner, 2003;Nagy & Grabherr, 2009;Testolin et al., 2020).
Although our dataset encompasses a large portion of the variation in water availability of global alpine areas (Supporting Information Figure S1), the effect of precipitation on regional richness was not significant. This suggests that the association of water availability with plant species richness might be restricted to local scales and especially to arid regions, where precipitation is the main factor limiting plant growth (Palpurina et al., 2017).
In contrast to macroclimate, we found a positive effect of the extent of current alpine area and a negative effect of isolation. The importance of these factors is consistent with the predictions of the theory of island biogeography (MacArthur & Wilson, 1967), which posits that larger, less isolated islands are characterized by lower extinction rates and greater chances of being colonized by new species. Nevertheless, the historical legacy of the extent and isolation of alpine areas during the LGM also left a strong imprint on regional richness patterns that is independent of their current geographical characteristics. The extent of alpine areas during the LGM was the second strongest predictor of regional richness and, together with the current area, explained almost 80% of the variance. This is consistent with recent refinements of the theory of island biogeography that incorporate the effect of Late Quaternary climate oscillations on oceanic islands (Weigelt et al., 2016).
Pleistocene glacial-interglacial cycles acted like a "historical sieve" (Körner, 1995) on alpine plant diversity. During glacial periods, downslope shifts of the alpine belt resulted in increased surface area and connectivity of tropical alpine archipelagos, in addition to colonization and diversification processes in mid-latitude mountain ranges that favoured in situ speciation (Flantua et al., 2020). The high species richness found in the Andes is probably the result of multiple contingencies related to South American tropical diversity and the strong past connectivity of these mountains, a combination that does not co-occur in any other tropical region. Moreover, in Central Asian mountains, the emergence of habitat corridors during glacial periods resulted in extensive long-distance dispersal, with the consequent admixture of previously isolated floras (Agakhanjanz & Breckle, 1995; Agakhanyantz & Lopatin, 1978). Indeed, the Pamir Mountains are a continental hub for floristic migrations (the Pamir Knot) that connects south-central Asian ranges to the northern Siberian mountains (Agakhanjanz & Breckle, 1995), whereas the Altai Mountains connect diversity between the Euro-Siberian and Central Asian floristic regions (Chytrý et al., 2012). Our results also show a positive relationship between regional richness and soil pH variability (a surrogate for bedrock heterogeneity), largely driven by the Pamir and Altai Mountains.
This finding confirms the effect of habitat heterogeneity on species richness inherent to larger areas (Lomolino, 2000a) through the occurrence of more diverse bedrock types (Moser et al., 2005).
| Community patterns and drivers
At the community scale, the latitudinal pattern of species richness was less pronounced than at the regional scale, with a single peak at mid-latitudes of the Northern Hemisphere, but a wide range of values within all regions. While controlling for plot size, we found a positive effect of evapotranspiration of the growing season and a negative effect of soil pH on community richness. The former is consistent with the species-energy hypothesis, which states that more productive communities (i.e., where higher temperatures and solar radiation support greater photosynthetic rates) are also richer in species (Wright, 1983). The latter could be explained by the absence of strongly acidic soils (pH < 4) in our dataset. Furthermore, soils with high pH values can be linked to reduced nutrient availability in harsh conditions and the confounding effect of reduced precipitation (Chytrý et al., 2007; Palpurina et al., 2017), explaining the lower species richness in our dataset. Whatever the underlying causes of these effects, our results are in line with the role of energy-driven processes and bedrock mineralogy as determinants of vascular plant species richness in alpine communities (Lenoir et al., 2010; Moser et al., 2005; Vonlanthen et al., 2006). Nevertheless, evolutionary and historical factors might also affect current patterns of community richness (Ricklefs & He, 2016). The Nearctic realm exhibited lower community richness than any other biogeographical realm, possibly owing to limited evolutionary radiation of the North American temperate flora (Qian & Ricklefs, 2000). In addition, the velocity of climate change and the LGM extent of alpine area were both positively related to community richness, indicating that the greater availability of alpine habitats in the past influences plant species richness at the community scale (Pärtel & Zobel, 1999).
Large-scale environmental factors, however, explained only a limited proportion of the variation in community richness compared with regional and sub-regional effects, suggesting that dispersal-related processes and other spatially structured factors strongly influence local richness patterns (Dormann et al., 2007). In alpine landscapes, these effects are regulated by elevational and meso-topographical gradients that affect microclimatic conditions and local plant diversity (Bruun et al., 2006;Jiménez-Alfaro et al., 2014;Scherrer & Körner, 2011). A weak influence of global macroclimatic gradients on local communities has also been detected for functional diversity across plant formations (Bruelheide et al., 2018), but it had not been tested before on a single ecosystem. Our results, therefore, support the dominant role of within-region effects linked to postglacial spatial configuration and historical contingencies, rather than macroclimatic factors, when explaining the global variation of alpine local communities.
| Data constraints and assumptions
Despite including several alpine regions across all continents and latitudes, our dataset lacked information about some outstanding centres of alpine plant diversity, such as the Himalayas and Hengduan Mountains (Ding et al., 2020;Favre et al., 2015;Muellner-Riehl et al., 2019;Xing & Ree, 2017) or the Caucasus (Agakhanjanz & Breckle, 1995;Körner, 1995), owing to the limited availability of community data from these areas. Therefore, our results related to the latitudinal patterns of regional and community richness could be refined further by the future addition of data from currently missing regions. Nevertheless, our aim was not to present a complete census of global alpine regions, but to assess their richness patterns and the corresponding drivers using a representative sample. In this respect, the collection of georeferenced vegetation plots presented here encompasses all continents and a wide range of latitudes; it represents regions with markedly different biogeographical history and vegetation types growing on different substrates, and covers a large portion of the climatic envelope of global alpine areas (Supporting Information Figure S1). Despite the lack of some remarkable alpine regions, our dataset allowed us to highlight the presence of extratropical alpine diversity centres and the importance of historical factors in shaping the current alpine plant richness patterns.
We also note that the use of heterogeneous surveys from different collectors might create issues related to different sampling effort among regions. We controlled for sampling effort in two ways. First, we used rarefaction and extrapolation techniques that assumed that the spatial distribution of plots in each region was representative of the regional diversity. Although this assumption is difficult to prove without additional data, we note that our regional samples were selected to capture the local heterogeneity of vegetation types and covered a wide range of elevations in all regions (Supporting Information Table S3), thus increasing the probability that our regional richness estimates were correlated with regional species pools. Second, we explicitly quantified the proportion of the alpine area sampled in each region, thus allowing inter-regional comparisons even when the samples covered very different extents or only a small fraction of the total available alpine area (e.g., Ladakh Range, Pamir Mountains or Southern Cordillera Occidental Peru).
In addition, we note that taxon concepts might not be applied consistently across all datasets, that is, they are the result of "lumping" and "splitting" of taxa delimitations that change with time and from place to place (Rouhan & Gaudeul, 2014;Wiser, 2016).
Although a harmonized species nomenclature cannot account fully for this taxonomic bias (Wiser, 2016), it still represents the most effective tool to address taxonomic inflation in macroecological studies (Isaac et al., 2004). Indeed, by correcting misspelt names and merging synonyms, we assume that the main sources of error relevant to the estimation of species richness in different regions were removed, and remaining issues about potential pitfalls in the geographical distribution of species (Boyle et al., 2013) are not pertinent to the present study.
| Conclusions
Overall, we found that the latitudinal distribution of plant species richness in alpine ecosystems is decoupled from the general latitudinal diversity gradient and that it relates to regional idiosyncrasies, rather than macroclimatic gradients. Although our results are conclusive enough to support that current and historical effects of area, isolation and environmental heterogeneity exert an overarching influence on vascular plant richness in global alpine ecosystems, we are still far from understanding the processes behind such effects.
Future alpine research should, therefore, consider local information about soil biotic and abiotic composition, topographical features and microclimatic variation at the regional scale. Additionally, further efforts should be oriented toward the collection of plant community data from underrepresented regions. Indeed, this work is the starting point for defining global hotspots of alpine plant diversity, and further investigations including patterns of endemism, functional variation and phylogenetic diversity are still needed. This type of information, together with dynamic regional diversity models accounting for spatio-temporal connectivity, will provide a better understanding of the patterns we have found here and a tool for the effective conservation of alpine biodiversity in response to climate change.
Reverting Antibiotic Tolerance of Pseudomonas aeruginosa PAO1 Persister Cells by (Z)-4-bromo-5-(bromomethylene)-3-methylfuran-2(5H)-one
Background Bacteria are well known to form dormant persister cells that are tolerant to most antibiotics. Such intrinsic tolerance also facilitates the development of multidrug resistance through acquired mechanisms. Thus, persister cells are a promising target for developing more effective methods to control chronic infections and help prevent the development of multidrug-resistant bacteria. However, control of persister cells is still an unmet challenge. Methodology/Principal Findings We show in this report that (Z)-4-bromo-5-(bromomethylene)-3-methylfuran-2(5H)-one (BF8) can restore the antibiotic susceptibility of Pseudomonas aeruginosa PAO1 persister cells at growth non-inhibitory concentrations. Persister control by BF8 was found to be effective against both planktonic and biofilm cells of P. aeruginosa PAO1. Interestingly, although BF8 is an inhibitor of quorum sensing (QS) in Gram-negative bacteria, the data in this study suggest that the activity of BF8 in reverting antibiotic tolerance of P. aeruginosa PAO1 persister cells is not through QS inhibition and may involve other targets. Conclusion BF8 can sensitize P. aeruginosa persister cells to antibiotics.
Introduction
It is well documented that a small portion of a bacterial population can form metabolically inactive persister cells [1], which are not mutants with drug resistance genes, but rather phenotypic variants of the wild-type strain [2] arising from unbalanced production of toxins/antitoxins [3,4,5,6] and other mechanisms related to stress response and translation inhibition [1,7]. This subpopulation can survive attack by antibiotics at high concentrations, and when the treatment is stopped, the surviving cells can reestablish a population with a similar percentage of persisters, leading to high levels of antibiotic tolerance [2]. Such intrinsic tolerance can cause chronic infections with recurring symptoms after the course of antibiotic therapy and facilitates the development and spread of acquired multidrug resistance through genetic mutations and horizontal gene transfer [2]. For example, high-persistence mutants have been isolated from cystic fibrosis patients with lung infections [8,9] and from patients with candidiasis [10]. Persister phenotypes have also been found in Mycobacterium tuberculosis, the bacterium causing chronic tuberculosis [11]. Thus, targeting persister cells may help improve infection control and prevent the development of multidrug-resistant bacteria [12]. However, controlling persister cells is still an unmet challenge.
Conceivably, one approach to eliminating persister cells is to wake up this dormant population and cause the cells to return to a metabolically active state. These awakened cells are expected to become sensitive to antibiotics. In Gram-positive bacteria, a 17-kDa protein named resuscitation-promoting factor (Rpf) has been discovered as a potential agent to wake up dormant cells [13]. However, a full wakeup call may cause rapid growth of a bacterial pathogen, which can lead to adverse progression of infection if the antibiotics are not administered during the right window.
Recently, sugars such as mannitol, glucose, fructose and pyruvate have been shown to generate proton-motive force and promote the uptake of aminoglycosides by persister cells of Escherichia coli and Staphylococcus aureus, which led to enhanced susceptibility of persister cells to this class of antibiotics. The effects were observed within 1 h of incubation, less than what is required for resumption of full growth [14]. However, this approach requires relatively high concentrations of sugar (e.g., 10 mM) and is limited to aminoglycosides; it does not extend to the β-lactam antibiotic ampicillin or the fluoroquinolone ofloxacin. In addition, sugar molecules can only wake up persister cells, but cannot reduce persistence during growth (see below).
Compared to these approaches, non-metabolites that can potentiate multiple classes of antibiotics and also reduce persistence during bacterial growth may be advantageous. It is well documented that the absolute number of persister cells in a culture increases significantly when the culture enters stationary phase and when cells form surface-attached, highly hydrated structures known as biofilms [15,16,17]. Recent research has demonstrated that quorum sensing (QS), bacterial cell-cell signaling by sensing and responding to cell density, promotes persister formation in Pseudomonas aeruginosa PAO1; e.g., the acyl-homoserine lactone 3-OC12-HSL and the phenazine pyocyanin, QS signals of P. aeruginosa, can significantly increase persister numbers in logarithmic-phase cultures of P. aeruginosa PAO1 but not of E. coli or S. aureus [18]. Thus, we were motivated to test whether targeting such pathways may reduce persistence during bacterial growth and/or revert the antibiotic tolerance of persister cells. We found in this study that the QS inhibitor BF8 has potent activities in persister control, although our data suggest that these activities may not be through QS inhibition and BF8 may have other targets in P. aeruginosa (below).
BF8 is a QS inhibitor
A wide variety of molecules have been discovered as quorum sensing inhibitors [19]. We reported recently that several new synthetic brominated furanones (derivatives of natural brominated furanones) are inhibitors of biofilm formation [20] and quorum sensing [21] in Gram-negative bacteria. Among these compounds, (Z)-4-bromo-5-(bromomethylene)-3-methylfuran-2(5H)-one (BF8, Figure 1A) is the most effective biofilm inhibitor of E. coli and P. aeruginosa at growth non-inhibitory concentrations [20,21]. It is also a potent inhibitor of quorum sensing based on AI-2 [21]. In this study, the effects of BF8 on AI-1-mediated QS were studied using the reporter strain Vibrio harveyi BB886 (ATCC# BAA-1118) [22]. By monitoring the bioluminescence and colony forming units (CFU) of the reporter strain, BF8 was found to inhibit QS at concentrations not inhibitory to the viability of the reporter strain. For example, 10 µg/mL BF8 completely inhibited AI-1-mediated QS with no effects on the viability of V. harveyi BB886 (Figure 1B). To specifically test if BF8 is also an inhibitor of QS in P. aeruginosa, the expression of the QS-controlled toxin gene lasB in the presence of different concentrations of BF8 was characterized using the reporter P. aeruginosa PAO1 mini-Tn5-based PlasB-gfp(ASV) by following the procedure described previously [23]. As shown in Figure 1C, expression of lasB in stationary-phase cultures (all around an OD600 of 2.7) was significantly inhibited by BF8, confirming that BF8 is also an inhibitor of QS in P. aeruginosa.
BF8 reduced persistence of PAO1
To test if BF8 can control persister cells of P. aeruginosa PAO1 (henceforth PAO1), we studied the effects of BF8 (up to 100 µg/mL) on the viability and persistence of PAO1 during 5 h of growth in Luria Bertani (LB) medium [24]. As shown in Figure 2A, the total number of viable cells at the end of incubation was around 3.5 × 10⁹/mL for all the samples (one-way ANOVA, p = 0.122). Thus, BF8 did not affect the viability of PAO1 directly. Consistently, the MIC (minimum concentration that prevents growth overnight) of BF8 against PAO1 in LB medium was found to be higher than 200 µg/mL (Figure S1A). Interestingly, at these growth non-inhibitory concentrations, the persistence of PAO1 was significantly reduced by BF8 in a dose-dependent manner; e.g., BF8 at 100 µg/mL reduced the number of persister cells by 63 times (98.4% reduction) compared to the untreated control (one-way ANOVA, p = 0.0006). The reduction of persistence could lead to better efficacy of antibiotics [e.g., ciprofloxacin (Cip) as shown in Figure 2A] and help prevent the development of antibiotic resistance. To the best of our knowledge, this is the first compound known to reduce bacterial persistence during normal growth.
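For readers converting between the two ways this effect is reported (fold reduction versus percent reduction), a minimal arithmetic check, not part of the original analysis:

```python
# A 63-fold reduction in persister counts corresponds to a 98.4% reduction.
fold_reduction = 63
percent_reduction = (1 - 1 / fold_reduction) * 100
print(f"{percent_reduction:.1f}%")  # -> 98.4%
```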
Sugars have been reported to sensitize persisters to antibiotics [14], and Wang et al. [25] reported that relatively high concentrations of fructose and glucose reduced the expression of the QS-related gene pqsA and the production of extracellular proteases and pyocyanin in P. aeruginosa. To test if sugars can also reduce persistence of PAO1 under our experimental condition, we repeated the above experiment using 10 mM D-glucose and D-mannitol instead of BF8. It was found that, unlike BF8, incubation with neither of these sugars affected persistence (Figure 2B, one-way ANOVA, p = 0.43). These data suggest that persister control by BF8 is through a different mechanism than that by sugars.

Figure 1 legend. To study the effects on QS in the V. harveyi BB886 reporter, an overnight culture of V. harveyi BB886 was diluted 1:5000 in AB medium and supplemented with different concentrations of BF8 after 5.5 h of incubation. The QS activity of each sample was characterized by normalizing the bioluminescence of the reporter V. harveyi BB886 to its colony forming units (CFU) after another 1.5 h of incubation. Figure 1B shows that QS was inhibited by BF8 in a dose-dependent manner. To study the effects on QS in PAO1, the reporter strain PAO1 mini-Tn5-based PlasB-gfp(ASV) was cultured until an OD600 of 0.8 was reached and then BF8 was added at different concentrations. The green fluorescence was measured when the cultures reached stationary phase (OD600 around 2.7). The results show that QS in PAO1 was inhibited by BF8. doi:10.1371/journal.pone.0045778.g001
BF8 reverted the antibiotic tolerance of PAO1 persister cells
In addition to reducing persistence during PAO1 growth, BF8 was also found to revert the antibiotic tolerance of isolated persisters. As shown in Figure 3A, treatment with BF8 at all tested concentrations (0.1, 0.5, 1, and 2 µg/mL) increased the susceptibility of persister cells to Cip. For example, although BF8 at 0.5 µg/mL did not affect the viability of persister cells, the antibiotic tolerance of persister cells was reverted, since 74.1 ± 1.1% of persister cells became sensitive to Cip compared to the untreated control (one-way ANOVA, p = 0.0005). The effect on persistence reduction increased to 89.8 ± 1.4% when BF8 was added at 2 µg/mL (one-way ANOVA, p = 0.0013) (Figure 3A). At higher concentrations, however, BF8 was found to be cidal to PAO1 persister cells. For example, treatment with 10 µg/mL BF8 led to significant killing of PAO1 persister cells (data not shown), suggesting that a threshold concentration may exist between growth non-inhibitory reversion of persistence and cidal effects on persister cells. Consistently, BF8 at 2 and 5 µg/mL did not affect the viability of regular PAO1 cells in stationary phase (one-way ANOVA, p = 0.7975 and p = 0.8572, respectively; Figure S1B). It appeared to be cidal to regular cells at 10 µg/mL or higher concentrations (Figure S1B), while the MBC (the minimum concentration that reduces viability by 99.9% [26,27]) was found to be higher than 30 µg/mL (the highest concentration tested). Overall, the above findings show that BF8 can revert persistence at concentrations that do not affect the viability of either persister or regular cells of PAO1 (up to 5 µg/mL under our experimental condition).

Figure 3 legend (excerpt). (B) Persisters were isolated and treated with or without 5 µg/mL BF8 in 0.85% NaCl solution for 2 h. The treated cells were then incubated with different antibiotics for 3.5 h to test antibiotic susceptibility. PAO1 persisters were found to be sensitized to Tob and Cip. The starting number of persisters was 2.3 × 10⁶ ± 5.7 × 10⁴/mL. (C) The QS signal 3-oxo-C12-HSL also sensitized PAO1 persisters to Cip. The same procedure as shown in Figure 3A was followed except that 3-oxo-C12-HSL was tested instead of BF8. The starting number of persisters was 2.0 × 10⁶ ± 4.0 × 10⁵/mL. doi:10.1371/journal.pone.0045778.g003
We chose 0.85% NaCl solution rather than LB medium to test the effects on isolated persisters because the NaCl solution itself does not contain a carbon source, allowing the effects on viability to be tested specifically. The concentrations of BF8 that exhibited activities were significantly lower in 0.85% NaCl solution than those in LB medium (used to test persistence during growth as described above), presumably because LB medium contains proteins and other large molecules that may bind to BF8 and decrease its effective concentration. It is also worth noting that the persister numbers are higher in Figure 3 (starting CFU/mL of 2.1 × 10⁶ ± 3.1 × 10⁵ in 3A, 2.3 × 10⁶ ± 5.7 × 10⁴ in 3B, and 2.0 × 10⁶ ± 4.0 × 10⁵ in 3C) than in Figure 2 (5.0 × 10⁵ ± 1.7 × 10⁵/mL for the control) because the persister cells in Figure 3 were isolated from overnight cultures (known to have higher persistence [28,29]) and those in Figure 2 were isolated from growing cultures.
It is interesting that, unlike sugars, which can only potentiate aminoglycosides [14], BF8 was found to restore susceptibility of PAO1 persister cells to both ciprofloxacin and tobramycin (from two different classes of antibiotics). In total, five antibiotics were tested to evaluate the effects on antibiotics with different targets, including protein synthesis [tetracycline (Tet), gentamicin (Gen) and tobramycin (Tob)], cell wall synthesis [carbenicillin (Car)], and the function of DNA gyrase (Cip). In addition to Cip (t test, p = 0.0095), BF8 at 5 µg/mL was also found to potentiate Tob (t test, p = 0.0271), while the effects on Tet (t test, p = 0.4096), Gen (t test, p = 0.0771), and Car (t test, p = 0.1976) were not statistically significant (Figure 3B).
Since QS is known to stimulate persister formation in PAO1 [18] and BF8 is a QS inhibitor, we further tested whether persister control by BF8 can be relieved by the QS signal. It was interesting to find that addition of 3-oxo-C12-HSL (Sigma-Aldrich, St. Louis, MO, USA) was not able to reduce the inhibitory effects of BF8 (Figure 3C). Instead, 3-oxo-C12-HSL was also found to sensitize isolated persisters to Cip in a dose-dependent manner. For example, after treatment with 30 µg/mL 3-oxo-C12-HSL for 2 h, nearly all the isolated persisters were killed by 200 µg/mL Cip (Figure 3C). Interestingly, this AHL was found previously to promote PAO1 persister formation in exponential phase (a different experimental condition than described here) [18]. Thus, this QS signal may have different effects on PAO1 persisters under different conditions. These findings suggest that, although BF8 is a QS inhibitor, the activity of BF8 in sensitizing PAO1 persisters to antibiotics is not through QS inhibition, and there are other targets of BF8 in PAO1 persister cells.
Effects of BF8 on PAO1 biofilms and associated persister cells
Compared to planktonic cells, surface-attached bacterial biofilms are more challenging for microbial control since they are up to 1000 times more tolerant to antibiotics than planktonic cells and are known to harbor a high percentage of persister cells [1,30]. To understand if BF8 can also control persisters in biofilms, we treated 18-h PAO1 biofilms formed on 304L stainless steel coupons with different concentrations of BF8 for 24 h. Both the planktonic (detached cells) and the biofilm population that remained attached were analyzed to evaluate the viability and persistence of PAO1 with and without BF8 treatment. As shown in Figure 4A, BF8 dispersed established biofilms and reduced the number of persister cells in both the biofilm and the detached population. For example, the number of viable cells remaining attached after treatment with 5 µg/mL BF8 was reduced from 3.3 × 10⁸ ± 1.7 × 10⁸/cm² to 7.1 × 10⁷ ± 1.4 × 10⁷/cm² (one-way ANOVA, p = 0.0025). Among the cells that remained attached, the number of persisters was reduced from 9.6 × 10⁵ ± 9.1 × 10⁴/cm² to 7.0 × 10⁵ ± 1.1 × 10⁵/cm² (one-way ANOVA, p = 0.002). At concentrations up to 10 µg/mL, BF8 did not exhibit cidal effects but reduced the percentage of persister cells (0.14 ± 0.01% without BF8 vs. 0.013 ± 0.002% with 10 µg/mL BF8, one-way ANOVA, p = 0.0002) in the detached population (the total number of cells in suspension increased compared to the control due to detachment); at high concentrations, BF8 appeared to be cidal to both regular and persister cells. For example, treatment with 60 µg/mL BF8 for 24 h led to a 94.2 ± 5.1% reduction of viable persister cells remaining on the surface (one-way ANOVA, p = 0.0004), although the persister/regular cell ratio in biofilms was not reduced by BF8 (Figure 4A). In addition to the effects on established biofilms, BF8 at 60 µg/mL added at inoculation was also found to inhibit PAO1 biofilm formation (incubated for 18 h) by 99.1 ± 0.2% (t test, p = 0.0001) and to reduce the number of biofilm-associated persisters by 99.2 ± 1.3% (t test, p = 0.001) (Figure 4B).
DNA microarray analysis
It was an interesting finding that BF8 can render persisters sensitive to antibiotics targeting the 30S ribosomal RNA (Tob) and topoisomerase (Cip). The capability to sensitize persister cells to antibiotics that target both DNA replication and protein synthesis suggests that BF8 may have caused the cells to leave the persister state. To obtain a deeper insight at the genetic level, we investigated the effects of BF8 on sensitization of PAO1 persister cells using DNA microarrays. The gene expression profiles of PAO1 persister cells treated with and without BF8 at 1 µg/mL for 1 h were compared in triplicate. We chose this effective, but relatively low, concentration of BF8 (as shown in Figure 3A) so that the most important genes induced by BF8 could be identified. The persister cells were isolated by killing regular cells with 200 µg/mL Cip for 3.5 h. Because the average half-life of bacterial mRNA is only a few minutes [31], we expect that the mRNA in dead cells should have been degraded by the time the cells were harvested. Consistently, we found that 85.5% of the mRNA of the housekeeping gene proC was degraded in the persister sample compared to the sample before Cip treatment (Figure S3). Furthermore, since identical persister cell samples were used for both the control (no BF8) and the test (with BF8), only the differentially expressed genes in live cells are expected to be seen in the microarray data.
In total, 28 genes were consistently induced by BF8 by more than 2-fold compared to the control in all three biological replicates (see Table S1 for the full list). In comparison, although a relatively small set of repressed genes was seen in each set, no gene was significantly repressed in all three sets, mostly due to low expression ratios in some dataset(s) (test/control < 2.0). This is possibly because persister cells only have low-level expression of essential genes due to their dormant nature [32,33]. To validate the DNA microarray results, we conducted RNA slot blotting for five representative genes, including one unchanged gene (PA4943) and 4 induced genes (PA3523, PA2931, PA0182 and PA4167). The results of all blots were consistent with the microarray data (Table S2). The consistently induced genes encode oxidoreductases (PA4167, PA1334, PA0182, PA2932, PA2535, PA3223, PA1127), transcriptional factors (PA4878, PA1285, PA3133, PA2196), and hypothetical proteins (PA4173, PA0741, PA1210, PA3240, PA2575, PA0565, PA2580, PA2610, PA2839, PA0422, PA1374, PA2691). Since many reductases are involved in metabolism, our DNA microarray data indicate that some cellular activities or membrane functions of PAO1 persisters can be induced by low concentrations of BF8. In addition, the gene PA2931 was induced 11-fold. This gene encodes a repressor of Cif, a P. aeruginosa toxin that causes degradation of the cystic fibrosis transmembrane conductance regulator (CFTR) in mammalian cells [34,35]. The induction of PA2931 indicates that BF8 can potentially repress the pathogenicity of PAO1. No QS genes were found to be differentially expressed by BF8. This is not surprising since persister cells are relatively dormant and are not expected to have QS activities. This finding further supports that persister control by BF8 involves other pathways and confirms that the mRNAs of differentially expressed genes were indeed from persister cells.
Discussion
In this study, we show that BF8 can act synergistically with antibiotics to enhance killing of P. aeruginosa PAO1 persister cells. Although more work is needed to reveal the exact mechanism, the restoration of antibiotic susceptibility of PAO1 persister cells by BF8 at growth non-inhibitory concentrations is nevertheless interesting. The DNA microarray data suggest that some reductases and proteins for small-molecule transport were induced by BF8. We hypothesize that interaction between BF8 (at growth non-inhibitory concentrations) and cell membrane proteins can interrupt specific cellular functions, leading to increased activities of transport proteins and reductases. Such a response should require energy and thus may influence the physiological state of persister cells and restore their susceptibility to antibiotics. Such effects may be mechanistically different from natural wakeup, when the persister cells are supplied with fresh medium. Further study of bacterial membrane potential and metabolism with and without BF8 (at growth non-inhibitory concentrations) can help test this hypothesis. In an earlier work, Shah et al. [33] compared gene expression in regular cells and persisters of E. coli and found that around 5% of genes are differentially expressed between these two populations. A number of genes involved in toxin-antitoxin modules, rather than stationary-phase-specific functions, were induced in persisters compared to regular cells. In our PAO1 microarray data, however, only a short list of genes was induced by BF8, which differs from the profile of regular cells vs. persister cells [33]. These data confirm that treatment with BF8 did not lead to a full wakeup. Because the cells only activate certain functions, such treatment may act as a partial wakeup and can be advantageous compared to a full wakeup that leads to normal cell growth and potentially higher virulence. Molecules with such activities may have a good opportunity to be applied either before or together with antibiotics to clear infections, without a specific window required for antibiotics to be administered.
Before BF8 can be applied for disease control, it is important to evaluate its safety and efficacy in vivo. This is part of our ongoing work. Nevertheless, some other brominated furanones have been shown to be safe and effective in animal models such as shrimp [36] and mice [37]. For example, furanone C-30 has been shown to reduce the virulence of P. aeruginosa and help clear infection from the lungs of mice [37]. The activities in persister control found in the present study bring new opportunities to develop more effective therapies based on this class of compounds.
In summary, the results described above indicate that BF8 can reduce persistence during the growth of PAO1 and can also restore the susceptibility of isolated persister cells to antibiotics. This appears to be a promising advantage of BF8 for persister control. The exact targets of BF8 and the chemical nature of such interactions are unknown and are a goal of our ongoing work. It is important to understand whether there is a set of specific membrane proteins whose activation can lead to higher antibiotic susceptibility, and whether a subset of such proteins is sufficient for the observed activities. Better understanding of the underlying mechanism will help develop more effective methods to control bacterial persistence and associated chronic infections.
Furanone synthesis
(Z)-4-bromo-5-(bromomethylene)-3-methylfuran-2(5H)-one (BF8) was synthesized as described previously [20], dissolved in absolute ethanol at 60 mg/mL, and stored at 4 °C until use. Briefly, Br2 (6.22 g, 38.9 mmol) in dichloromethane (20 mL) was added dropwise into a flask containing 2.53 g (19.5 mmol) alpha-methyllevulinic acid in 20 mL dichloromethane. The mixture was stirred at 35-40 °C until all the alpha-methyllevulinic acid had reacted (based on TLC); the reaction was then quenched by adding ice (~200 mL). The mixture was extracted with dichloromethane three times (80 mL each), washed with Na2S2O3 (1 M, 100 mL) to remove residual Br2, dried with anhydrous sodium sulfate (30 min), filtered through cotton, and then concentrated by removing the solvent using a rotary evaporator. Concentrated H2SO4 (98%, 10 mL) was added to the crude bromo keto acid and the mixture was heated in an oil bath at 110 °C until all the crude keto acid had reacted (checked on TLC plates). The raw product was poured into a beaker with 200 mL of ice to stop the reaction. The mixture was extracted with dichloromethane three times (50 mL each), washed once with 80 mL H2O and dried using a rotary evaporator. BF8 was further purified from other impurities using column chromatography (dichloromethane:hexanes = 1:4). The structure of BF8 was confirmed using 1H-NMR by comparison with reported data [20].
Persister isolation
Treatment with Cip at up to 50 µg/mL for 3.5 h has been used previously to isolate PAO1 persister cells [18]. We confirmed recently that treatment with 50 µg/mL Cip for 3.5 h is also sufficient to kill regular cells of our PAO1 strain, since no additional killing was observed with Cip concentrations up to 200 µg/mL (the highest concentration tested) [40]. To further confirm that the treatment time is sufficient, we also tested the killing with 200 µg/mL Cip during 6.5 h of incubation. As shown in Figure S2, no additional killing was observed with incubation beyond 1.5 h. Given these results, we chose incubation for 3.5 h with 200 µg/mL Cip to ensure the complete elimination of regular cells. After Cip treatment (200 µg/mL, 3.5 h) of 18-h PAO1 overnight cultures, the surviving persister cells were washed twice with 0.85% NaCl solution to remove residual antibiotics, and then resuspended in 0.85% NaCl solution. The isolated persister cells were then used for different treatments as described below. The cells after each treatment were further treated by supplementing with 200 µg/mL Cip and incubating for 3.5 h. The samples were then washed three times with 0.85% NaCl solution to quantify the number of cells that remained as persisters. The drop plate method described by Chen et al. [41] was followed to count colony forming units (CFUs).
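The CFU quantification throughout this work relies on plate counts and dilution factors. A minimal sketch of that arithmetic is given below; the drop volume and dilution values are illustrative assumptions, not parameters taken from [41]:

```python
# Convert drop-plate colony counts into CFU/mL.
def cfu_per_ml(colonies: int, dilution_factor: float, drop_volume_ml: float = 0.01) -> float:
    """CFU/mL = colonies / (volume plated in mL * dilution factor)."""
    return colonies / (drop_volume_ml * dilution_factor)

# Example: 23 colonies in a 10 uL drop of a 10^-4 dilution.
print(f"{cfu_per_ml(23, 1e-4):.2e} CFU/mL")  # -> 2.30e+07 CFU/mL
```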
Effect of BF8 on AHL-mediated QS in the reporter strain V. harveyi BB886
A V. harveyi BB886 overnight culture was used to inoculate subcultures in AB medium [22]. BF8 was added at different concentrations (0, 0.1, 0.5, 1, 10, 30, 60 µg/mL) after 5.5 h of growth at 37 °C with 200 rpm shaking. The incubation continued for another 1.5 h. Then the bioluminescence was measured using a luminometer (20/20n, Turner Design, Sunnyvale, CA, USA). Meanwhile, the CFU of reporter cells was determined using the drop plate method with LM agar plates [22,38] after washing the cells with 2% NaCl solution. This experiment was performed with two biological replicates, and 6 replicates on drop plates were counted for each CFU data point.
Effect of BF8 on QS in PAO1
An overnight culture of the QS reporter strain PAO1 mini-Tn5-based PlasB-gfp(ASV) [23] was used to inoculate subcultures in modified LB medium [23]. When the subcultures reached an OD600 of 0.8, BF8 was added at different concentrations (0, 5, 10, 15, and 30 µg/mL). Green fluorescence and OD450 were measured when the OD600 reached around 2.7, following the previously described protocol [23], to evaluate the effects on QS in PAO1. This experiment was conducted in duplicate.
Effects of BF8 on persistence of PAO1
A PAO1 overnight culture was used to inoculate subcultures (each containing 5 mL LB medium) to an OD600 of 0.05, which were then supplemented with different concentrations of BF8 (0, 5, 10, 30, 50 and 100 µg/mL). The amount of ethanol (the solvent of the BF8 stock solutions) was adjusted to be the same for each sample to eliminate any solvent effect. Samples were taken after 5 h of incubation at 37 °C with shaking at 200 rpm to count CFU. Meanwhile, the remaining portion of each sample was supplemented with 200 µg/mL Cip and incubated for 3.5 h at 37 °C. The samples were then analyzed to quantify the number of persister cells by counting CFU. This experiment was performed with two biological replicates, and 6 replicates on drop plates were counted for each CFU data point.
Effects of D-glucose and D-mannitol
P. aeruginosa PAO1 subcultures were inoculated with an overnight culture to an initial OD600 of 0.05 in LB medium. The subcultures were supplemented with 10 mM D-glucose, 10 mM D-mannitol or no sugar (control). The total number of viable cells and the number of persister cells were quantified as described for the BF8 experiment above. This experiment was conducted with two biological replicates, and 5 replicates on drop plates were counted for each CFU data point.
Effects of BF8 on antibiotic susceptibility of isolated persister cells
Persisters were isolated from overnight cultures as described above. After dilution by 50 times with 0.85% NaCl solution, the persisters were challenged with different concentrations of BF8. Ethanol (the solvent used for making BF8 stock solutions) was adjusted to be the same in all samples to eliminate any solvent effect. After incubation for 2 h at 37 °C with shaking at 200 rpm, 1 mL of each sample was taken and washed three times with 0.85% NaCl to quantify the total number of viable cells by counting CFU. The remaining portion of each sample was further tested to quantify the number of cells that remained as persisters, as described above. This experiment was conducted with two biological replicates, and 5 replicates on drop plates were counted for each CFU data point.
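The outcome of this assay is reported in the Results as the percentage of persisters that lost antibiotic tolerance. A minimal sketch of that calculation, with hypothetical CFU values (not data from this study):

```python
# Percentage of persisters sensitized = fraction killed by Cip after the pre-treatment,
# relative to the untreated (no BF8) control.
def percent_sensitized(cfu_after_treatment: float, cfu_control: float) -> float:
    return (1 - cfu_after_treatment / cfu_control) * 100

control_survivors = 2.1e6   # hypothetical: persisters surviving Cip without BF8 pre-treatment
bf8_survivors = 5.4e5       # hypothetical: persisters surviving Cip after BF8 pre-treatment
print(f"{percent_sensitized(bf8_survivors, control_survivors):.1f}% sensitized")  # -> 74.3%
```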
Synergy with other antibiotics
Persisters were isolated from overnight cultures as described above, and then incubated in 0.85% NaCl for 2 h at 37 °C with shaking at 200 rpm in the absence or presence of 5 µg/mL BF8. The amount of ethanol was adjusted to be the same in all samples to eliminate any solvent effect. After incubation, 1 mL of the BF8-treated persister samples and of the BF8-free controls was supplemented with or without different antibiotics [25 µg/mL tetracycline (Tet), 25 µg/mL gentamicin (Gen), 25 µg/mL tobramycin (Tob), 500 µg/mL carbenicillin (Car), 25 µg/mL ciprofloxacin (Cip)] and incubated for another 3.5 h at 37 °C with shaking at 200 rpm. The antibiotic-treated persisters were then washed three times with 0.85% NaCl solution to remove antibiotics and plated on LB plates to evaluate the killing by antibiotics by counting CFU. This experiment was conducted with two biological replicates, and 5 replicates on drop plates were counted for each CFU data point.
Effects of N-(3-oxododecanoyl)-L-homoserine lactone (3-oxo-C12-HSL)
This experiment was conducted by following the same protocol as that for the effects of BF8 on isolated persister cells described above. The QS signal 3-oxo-C12-HSL was tested at 0, 1.5, 3, 6, 15, and 30 µg/mL. This experiment was conducted with three biological replicates, and 5 replicates on drop plates were counted for each CFU data point.
Effects of BF8 on persister cells in established biofilms
P. aeruginosa PAO1 overnight cultures in LB medium were used to inoculate subcultures in M63 medium to an OD600 of 0.05 in glass petri dishes containing 2 cm × 1 cm 304L stainless steel coupons. After 18 h of incubation, the coupons with established biofilms were transferred to a 12-well plate (Becton Dickinson, Franklin Lakes, NJ, USA). Each well contained 4 mL of 0.85% NaCl solution supplemented with different concentrations of BF8 (0, 5, 10, 30, 60 µg/mL). The biofilm samples in 12-well plates were incubated at 37 °C for 24 h without shaking. One mL of medium with detached cells was then sampled from each well, washed three times with 0.85% NaCl solution and plated on LB agar plates to determine the viability of PAO1 cells by counting CFU. Meanwhile, 1 mL of medium with detached cells was sampled, supplemented with 200 µg/mL Cip, and incubated for 3.5 h at 37 °C to isolate persister cells. The samples were then washed three times with 0.85% NaCl solution and plated on LB agar plates to determine the number of persister cells by counting CFU. To collect the biofilm cells, the coupons were transferred to 15 mL Falcon tubes, each containing 5 mL 0.85% NaCl solution. The biofilm cells were collected by vortexing the coupons for 1 min and sonicating (Ultrasonic cleaner Model No. B200, Sinosonic Industrial Co., Ltd, Taipei Hsien, Taiwan) for 1 min (repeated once) [42]. Collected biofilm cells were plated on LB plates to count CFU, and the rest of each sample was treated with 200 µg/mL Cip for 3.5 h at 37 °C for persister isolation. The isolated biofilm-associated persister cells were washed three times and plated on LB agar plates to count CFU. This experiment was conducted with three biological replicates, and 5 replicates on drop plates were counted for each CFU data point.
Effects of BF8 on PAO1 biofilm formation
Biofilms were formed on 2 cm × 1 cm 304L stainless steel coupons in M63 medium. The biofilm cultures with and without 60 µg/mL BF8 (but with the same amount of the solvent ethanol) were inoculated with an overnight culture to an initial OD600 of 0.05. After 18 h of incubation at 37 °C without shaking, the coupons were gently washed with 0.85% NaCl solution three times to remove unattached planktonic cells. The total number of biofilm cells and the number of persisters were quantified as described above. This experiment was conducted with three biological replicates, and 5 replicates were counted for each CFU sample using the drop plate method.
DNA microarray analysis
Persister cells were harvested from 18-h cultures of PAO1 (100 mL each) using the same methods as described above. The isolated persister cells were resuspended in 0.85% NaCl solution supplemented with 1 µg/mL (3.7 µM) BF8 or with the same amount of ethanol (4.17 µL, to eliminate solvent effects). After incubation at 37 °C for 1 h, treated persister cells were collected by centrifugation at 10,000 rpm for 5 min at 4 °C, transferred to 2 mL pre-cooled microcentrifuge tubes and frozen instantly in an ethanol-dry ice bath. The cell pellets were stored at -80 °C until RNA isolation.
To isolate the total RNA, the harvested PAO1 cells were lysed by beating at 4,800 oscillations/min using a mini-bead beater (Biospec Products Inc., Bartlesville, OK, USA) after adding 0.5 mm glass beads, 900 µL RLT buffer and 1% 2-mercaptoethanol. The total RNA was extracted using the RNeasy Mini Kit (Qiagen, Austin, TX, USA) with on-column DNase treatment (RNase-Free DNase Set, Qiagen). The RNA samples were sent to the DNA Microarray Facility at SUNY Upstate Medical University for microarray (P. aeruginosa Genome Array, Affymetrix, Santa Clara, CA, USA) hybridization. A total of three biological replicates were tested. Using the GeneChip Operating Software (MAS 5.0), genes with a p-value of less than 0.0025 or greater than 0.9975 were considered statistically significant based on the Wilcoxon signed rank test and Tukey's biweight. To ensure the significance of the microarray data, an additional criterion was applied to select from this group only genes with an expression ratio of 2 or higher as induced or repressed genes. Microarray data have been deposited in the Gene Expression Omnibus (GEO: GSE36753), compliant with Minimum Information About a Microarray Experiment (MIAME) guidelines.
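The two-step gene-selection criterion described above (a detection p-value cutoff plus a fold-change threshold in all three biological replicates) can be expressed compactly as a table filter. A minimal sketch with a hypothetical results table, shown here for induced genes only:

```python
# Illustrative sketch: keep genes with p < 0.0025 and expression ratio >= 2
# in every one of the three replicates (induced genes only).
import pandas as pd

df = pd.DataFrame({
    "gene":   ["PA2931", "PA4167", "PA4943"],
    "ratio1": [11.2, 3.4, 1.1], "p1": [0.0001, 0.0004, 0.4],
    "ratio2": [9.8,  2.6, 0.9], "p2": [0.0002, 0.0010, 0.5],
    "ratio3": [10.5, 2.1, 1.0], "p3": [0.0003, 0.0021, 0.6],
})

significant = (df[["p1", "p2", "p3"]] < 0.0025).all(axis=1)
induced = (df[["ratio1", "ratio2", "ratio3"]] >= 2.0).all(axis=1)
print(df.loc[significant & induced, "gene"].tolist())  # -> ['PA2931', 'PA4167']
```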
RNA slot blotting
A total of five genes were tested including PA3523, PA2931, PA0182, PA4167 and PA4943. Primers were designed to include only small inner regions, varying from 368 bp to 448 bp, of these genes. Hybridization probes were labeled with DIG-dUTP (PCR DIG Probe Synthesis Kit, Roche, Mannheim, Germany) in PCR reactions by following the manufacturer's protocol. Total RNA was isolated as described in the DNA microarray section above. The blotting and signal detection were conducted as we described previously [43].
Q-PCR analysis
To verify whether killing of PAO1 cells by Cip led to mRNA degradation in the dead cells, the expression levels of the housekeeping gene proC were quantified using Q-PCR. Total RNA was extracted from overnight PAO1 cells before and after 3.5 h of treatment with 200 µg/mL Cip. Then, 200 ng of total RNA was taken from each sample to perform cDNA synthesis using the iScript cDNA synthesis kit (Bio-Rad, Hercules, CA, USA). Two primers were used in Q-PCR: the forward primer CGTGGTCGAGTCCAACGCCG and the reverse primer GCGTCGGTCATGGCCTGCAT. Relative expression ratios were calculated from triplicate reactions.
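The exact formula used for the relative expression ratio is not given above; a common simplification, when equal amounts of total RNA are loaded (200 ng here), is a 2^-dCt comparison of the target gene between the two samples. A minimal sketch with hypothetical Ct values:

```python
# Simplified relative quantification of proC before vs. after Cip treatment,
# assuming equal RNA input (no reference-gene normalization).
from statistics import mean

ct_before = [18.1, 18.3, 18.2]   # hypothetical triplicate Ct, total cells before Cip
ct_after  = [20.9, 21.0, 21.1]   # hypothetical triplicate Ct, persisters after Cip

delta_ct = mean(ct_after) - mean(ct_before)   # higher Ct = less template
relative_expression = 2 ** (-delta_ct)        # fraction of proC mRNA remaining
print(f"{relative_expression:.3f} remaining, "
      f"{100 * (1 - relative_expression):.1f}% degraded")
```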
Minimal inhibitory concentration (MIC) of BF8
Subcultures of PAO1 were inoculated from an 18-h overnight culture to an OD600 of 0.05. BF8 was added at different concentrations (0-200 µg/mL) and the OD600 at this time point was measured. After 24 h of incubation at 37 °C, the presence or absence of growth was determined by comparing the OD600 before and after incubation. The experiment was performed with six biological replicates.
Minimal bactericidal concentration (MBC) of BF8
An 18-h overnight culture of PAO1 was washed and diluted with 0.85% NaCl solution to an OD600 of 0.05 and supplemented with different concentrations of BF8 (0-30 µg/mL). After 2 h of incubation at 37 °C in culture tubes, the treated cells were washed and diluted with 0.85% NaCl solution to count CFU using the drop plate method. The experiment was performed with 2 biological replicates.
Effects of Cip treatment time on PAO1 killing
An overnight culture of PAO1 was incubated with 200 µg/mL Cip at 37 °C with 200 rpm shaking. At different incubation time points (1.5 h-6.5 h), Cip-treated cells were sampled, washed three times, diluted and plated on LB agar plates to determine CFU. The experiment was performed with 2 biological replicates.

Supporting Information

Figure S1 Effects of BF8 on growth and viability of P. aeruginosa PAO1. (A) Effects on growth. LB medium was inoculated with overnight P. aeruginosa PAO1 cultures to an OD600 of 0.05. BF8 was added at different concentrations (0-200 µg/mL) and the presence or absence of growth was assessed after 24 h of incubation at 37 °C. The results indicate that none of the tested concentrations was sufficient to inhibit growth completely. Therefore the MIC was found to be higher than 200 µg/mL in LB medium. (B) Effects on viability. An 18-h overnight culture of PAO1 was washed and diluted with 0.85% NaCl solution to an OD600 of 0.05 and supplemented with different concentrations of BF8 (0-30 µg/mL). After 2 h of incubation, the number of viable cells was determined by counting CFU. The results indicate that none of the tested concentrations was sufficient to kill more than 99.9% of PAO1 (Figure S1B). Therefore the MBC (minimum concentration that reduces viability by 99.9% [26,27]) in 0.85% NaCl solution was found to be higher than 30 µg/mL. (TIF)

Figure S3 Transcription level of the housekeeping gene, proC, in total cells (before Cip treatment) and persister cells quantified by Q-PCR. The persister cells were isolated following the same procedure as described in the manuscript. The cells before and after Cip treatment were used to isolate total RNA and compare the transcription levels of proC. The persister cell sample was found to have 85.5% less proC than the total cells before Cip treatment. (TIF)

Table S1 List of BF8-induced genes in PAO1 persister cells. A total of three biological replicates were tested. The genes induced by more than 2-fold in all three data sets are listed. (DOCX)

Table S2 The primers used in RNA slot blotting and the blotting results. PA4943 was unchanged based on the DNA microarray data. All the other 4 genes were induced by BF8 based on the microarray results. (DOCX)
|
2016-05-12T22:15:10.714Z
|
2012-09-20T00:00:00.000
|
{
"year": 2012,
"sha1": "dfbdde8df4eadfb6828fa3e40fa77c9b64cbaf58",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0045778&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dfbdde8df4eadfb6828fa3e40fa77c9b64cbaf58",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
234086867
|
pes2o/s2orc
|
v3-fos-license
|
Silica Fertilizer (Si) Enhances Sugarcane Resistance to the Sugarcane Top Borer Scirpophaga excerptalis
INTRODUCTION

As an important industrial crop, sugarcane (Saccharum officinarum L.) is cultivated on 20 million ha worldwide for sugar production (Smiullah, Khan, Ijaz, & Abdullah, 2013). This crop accounts for 70% of world sugar production (Butterfield, D'Hont, & Berding, 2001). As a plantation commodity, sugarcane is widely cultivated throughout Indonesia. Sugarcane is also a renewable source of vegetable raw materials and of important industrial products such as plywood, paper, industrial enzymes, and animal feed (Diederichs, Ali Mandegari, Farzad, & Görgens, 2016). The main problems in sugarcane are pests and diseases; more than 100 pests and 80 diseases can attack sugarcane. In Indonesia, the sugarcane top borer Scirpophaga excerptalis (Lepidoptera: Crambidae) is the main pest of sugarcane (Goebel, Achadian, & McGuire, 2014). Scirpophaga excerptalis has been reported as one of the most destructive insects of sugarcane in most parts of the world (Srivastava & Rai, 2012). The larvae usually penetrate along the midrib of the leaf into the heart of the plant, and the top shoot becomes withered and stunted, whereas the internodes beneath may produce new leaves (Shobharani, Rachappa, Sidramappa, & Sunilkumar, 2018). Scirpophaga excerptalis attacks can reduce the productivity of sugarcane by up to 34% (Goebel, Achadian, Kristini, Sochib, & Adi, 2011; Sushil et al., 2020). Several control measures have been applied, such as the release of parasitoids, planting resistant varieties, the use of pheromone traps, and increasing soil nutrients, especially silicate (Si). In the soil, silicon (Si) is one of the most abundant nutrients for crops. The application of silicate fertilizer to sugarcane can increase land productivity (de Camargo, Rodrigues Gomes Júnior, Wyler, & Korndörfer, 2010; Meyer & Keeping, 2000; Nikpay, Nejadian, Goldasteh, & Farazmand, 2017). Silica is not only useful for increasing crop productivity but also increases resistance, and it contributes to biological control by attracting predators and parasitoids of insect pests (Alhousari & Greger, 2018; de Oliveira et al., 2020). The resistance of plants to various stresses can be stimulated by increasing the hardness of cell walls through the deposition of silicon in plant tissue. Silicate is known as a nutrient associated with the induction of resistance to biotic and abiotic stresses (Massey, Ennos, & Hartley, 2006; Meyer & Keeping, 2000; Savant, Korndörfer, Datnoff, & Snyder, 1999), and silicon is one of the nutrients that enhance resistance to various stresses, both abiotic and biotic (Han, Lei, Wen, & Hou, 2015). There is a positive relationship between the protective effect of silica against insect herbivores and its accumulation in plant tissues. This research aimed to determine the effect of the provision of silica fertilizer as part of an Integrated Pest Management strategy.
Compost Production and Analysis of Si Content
This research was conducted in the Plant Protection Department, Indonesian Sugar Research Institute, Pasuruan, from June 2019 to March 2020. The composts given as treatments in this research were rice straw, sugarcane leaf, and corn leaf composts. The decomposition process was carried out by adding the BioCom plus decomposer and water to material that had been chopped and dried. The Si content of each compost was analyzed using a 100 g sample.
The Effect of Si Fertilizer on Sugarcane Resistance to Sugarcane Top Borer Attack
The experiment was carried out by planting seed cane (bagal) of the variety PS 59 (sensitive to the sugarcane top borer) in 10-liter plastic pots. Each pot was filled with 2 seed pieces (bagal) of sugarcane. The experimental design used was a randomized block design with four fertilizer treatments and an untreated control: plants fertilized with rice straw compost, sugarcane leaf compost, corn leaf compost, or inorganic Si fertilizer, and a control (without fertilizer). Each treatment was replicated 3 times, with each replicate consisting of 3 plastic pots. The dosage of each compost used was 5 t/ha, while the dose of inorganic Si fertilizer was 250 kg/ha; fertilizer was applied at the same time as planting. Three months after planting, the sugarcane plants were infested with first-instar larvae of S. excerptalis, two larvae per plant.
The percentage of sugarcane plants attacked by the sugarcane top borer was recorded every two weeks (14 days) by observing the attack symptoms on each plant. In the 6th week, the plants were observed destructively to measure the length of the sugarcane top borer larvae. For the analysis of Si content in sugarcane stems and of the hardness level of sugarcane shoots, samples were extracted using 0.01 M CaCl2 (Berthelsen et al., 2001). The Si content in the extracts was measured with a spectrophotometer using the molybdosilicic acid method (Galhardo & Masini, 2000).
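Spectrophotometric Si determination of this kind typically relies on a linear calibration curve relating absorbance to Si concentration. The following is an illustrative sketch only; the standard concentrations and absorbance readings are hypothetical, not values from this study:

```python
# Convert molybdosilicic-acid absorbance readings into Si concentrations
# via a linear calibration curve.
import numpy as np

std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])        # mg Si/L standards (hypothetical)
std_abs  = np.array([0.01, 0.11, 0.21, 0.40, 0.79])   # measured absorbances (hypothetical)

slope, intercept = np.polyfit(std_conc, std_abs, 1)    # fit A = slope*C + intercept

sample_abs = 0.33
sample_conc = (sample_abs - intercept) / slope          # mg Si/L in the extract
print(f"{sample_conc:.2f} mg Si/L")
```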
Data Analysis
The data were analyzed using single-factor ANOVA. If treatment effects were significantly different, a further test was carried out, namely Duncan's Multiple Range Test (DMRT). All the data were analyzed using Microsoft Excel 2016.
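As an illustration of the single-factor ANOVA described above, the sketch below runs the same kind of test with SciPy rather than Microsoft Excel; the per-replicate attack percentages are hypothetical values, not the study's data:

```python
# Single-factor (one-way) ANOVA across the four fertilizer treatments.
from scipy import stats

rice_straw = [16.7, 22.2, 27.8]   # hypothetical attack percentages per replicate
cane_leaf  = [25.0, 33.3, 41.7]
corn_leaf  = [41.7, 50.0, 58.3]
inorganic  = [41.7, 50.0, 58.3]

f_stat, p_value = stats.f_oneway(rice_straw, cane_leaf, corn_leaf, inorganic)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A post-hoc test (e.g. Duncan's multiple range test) would only be applied if p < 0.05.
```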
Silica Content (Si) of Composts
The results of the laboratory analysis of the silica content (SiO2) of rice straw, sugarcane leaves, and corn leaves are listed in Table 1. The analysis showed that compost from sugarcane leaves had the highest silica content, followed by rice straw and corn leaves. Savant, Korndörfer, Datnoff, & Snyder (1999) stated that sugarcane absorbs more Si than any other nutrient, ca. 380 kg/ha in a 12-month-old crop. Responses of sugarcane to silicon fertilization, in terms of growth and development, have been documented in some areas of the world. High silica content in leaves can increase the firmness of the leaves and stems of plants (Laane, 2018; Meyer & Keeping, 2000). Physically, sugarcane and rice leaves are more rigid and sharp compared to corn leaves.
The C/N ratios of the three composts were different: sugarcane leaf compost had the highest C/N ratio, followed by corn leaf compost, while rice straw compost had the lowest. A high C/N ratio indicates that the compost material was not completely decomposed; conversely, a lower C/N ratio indicates that the material has decomposed and its nutrients can become available to plants.
Resistance of Sugarcane Plant against Sugarcane Top Borer Attack
The percentage of sugarcane plants showing symptoms of sugarcane top borer larval attack under the different treatments is presented in Table 2. The average percentage of attack symptoms ranged from 22.22-50%. There was no significant difference in the percentage of larval attack symptoms between the composts and the inorganic silica fertilizer (F = 1.83; df = 2.4; P = 0.18). This result showed that the application of Si fertilizer from organic material (compost) had the same effect as manufactured inorganic Si fertilizer. Thus, increasing the resistance of sugarcane to the sugarcane top borer S. excerptalis can be achieved by adding Si fertilizer from organic matter (compost), particularly rice straw, because rice straw compost showed the lowest percentage of larval attack symptoms compared to the other treatments. Han, Lei, Wen, & Hou (2015) stated that rice varieties resistant to the rice leaf folder are generally characterized by high silicon content; in their study, silicon amendment at 0.16 and 0.32 g Si/kg soil enhanced the resistance of a susceptible rice variety to the rice leaf folder.
The addition of organic material from fallen plant leaves or other agricultural wastes that are usually neglected by farmers can thus have a positive impact. In addition to adding nutrients to the soil, the organic material is also able to increase plant resistance to pests. Thus, the use of synthetic fertilizers and pesticides can be reduced and replaced with compost in order to minimize costs as well as long-term degradation of the surrounding environment. Altieri, Ponti, & Nicholls (2012) showed that soil organic fertility can influence the ability of a crop plant to deal with pest attacks in many ways.
The application of compost fertilizer can therefore increase the resistance of sugarcane to top borer larval attack. However, the supply of silica from composts or other organic matter requires a longer time and process before it can be absorbed by plants. This can be overcome by increasing the compost dose and applying it before planting, with repeated applications. This is expected to meet the plants' requirement for silica and thereby induce resistance against the sugarcane top borer (S. excerptalis).
The application of compost also did not influence the bore length of the sugarcane top borer larvae in the sugarcane stem (Table 3). The bore length of the larvae decreased by the third observation, except in the corn leaf compost treatment. In sugarcane plants treated with rice straw compost, the infested larvae had died by the third observation (Table 4). This result showed that rice straw compost can induce sugarcane resistance to the sugarcane top borer even better than inorganic silica fertilizer. Silica can increase plant resistance to pests and diseases by thickening the cuticle layer and hardening the plant tissue, so that it becomes difficult for pests or pathogens to penetrate plant tissues (Meyer & Keeping, 2000; Tubana, Babu, & Datnoff, 2016). Calatayud, Njuguna, & Juma (2016) stated that the deposition of silica in plant epidermal cells provides a physical barrier against insect probing and feeding or insect penetration into plant tissues.

Table excerpt (percentage of attack symptoms, mean ± SD): compost of sugarcane leaves, 33.33 ± 8.33; compost of corn leaves, 50.00 ± 8.34; inorganic silica fertilizer (Si), 50.00 ± 8.34. Remarks: ns indicates no significant differences within the column.
CONCLUSION
Organic silica fertilizer (compost) provides the same effect as inorganic silica fertilizer in inducing sugarcane resistance to the sugarcane top borer (S. excerptalis). Rice straw compost was the compost fertilizer that provided the best sugarcane resistance to the sugarcane top borer, compared with plants treated with sugarcane leaf or corn leaf compost. (Table note: Means ± SD followed by the same letters indicate no significant differences within the column (DMRT test, P < 0.05); ns indicates no significant differences within the column.)
|
2021-02-04T16:58:54.613Z
|
2021-02-01T00:00:00.000
|
{
"year": 2021,
"sha1": "4cde5334ea94f20e27fd0781c42ee71cf57ed579",
"oa_license": "CCBYNC",
"oa_url": "https://agrivita.ub.ac.id/index.php/agrivita/article/download/2654/1343",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4cde5334ea94f20e27fd0781c42ee71cf57ed579",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
251542312
|
pes2o/s2orc
|
v3-fos-license
|
Psychomotor Skills Activities in the Classroom from an Early Childhood Education Teachers’ Perspective
Psychomotricity is a tool that allows the development of different capacities, skills and corporal abilities of people. Currently, it is included in early childhood education programmes due to its importance in children's development, but, even so, it is not always given the role it deserves. Thus, this study aimed to evaluate the perceptions of early childhood education teachers of the needs and current state of psychomotor skills in the educational context of Extremadura schools, and to compare the information provided by teachers who work in rural and urban areas. A questionnaire was administered using a tablet and a Google Forms application. The sample consisted of 216 teachers, selected using a non-probability convenience sampling method. The Mann-Whitney U test was applied to determine the relationships between the different items and dimensions according to the location of the school, and Spearman's Rho test was used to find out whether there is any relationship between the age of the teachers and their responses in the different dimensions. The results showed that psychomotor skills do not receive the place they deserve, with insufficient material and facilities, inadequate training, few sessions and inappropriate programming, together with the rest of the components of the cycle. Therefore, it can be concluded that it is necessary to include psychomotor skills in the training of teachers and that the centres should be concerned with providing teachers with the necessary material and spaces for their work.
Introduction
Nowadays, it is increasingly common to find children with motor skill deficits, not only at an early age, but also throughout their primary school years, especially among those who have never participated in out-of-school physical activity [1]. Good or poor motor skills may result from factors such as the children's lack of interest or the school not giving motor development enough importance [2].
These problems not only affect motor skills but also have an impact on cognition, impairing the ability of children to process information from everything they perceive. Going hand in hand with the terms cognition and motor skills, psychomotricity arises, which is defined as a discipline that considers the person as a whole and synthesizes, therefore, motor skills and the psyche [3]. Conversely, psychomotor skills are a technique that influences the intentional act to stimulate or modify it by using bodily activity, or are an approach to educational intervention that aims to develop motor, expressive and creative possibilities through the body [4,5]. Psychomotor skills provide benefits, such as facilitating the acquisition of the body schema, addressing different motor patterns, promoting body control, affirming laterality, developing balance and creating learning habits or social integration [3,6]. Hence, psychomotor education guarantees the development of intelligence through motor action, constituting a preventive educational action. Consequently, it is necessary that the teacher has training or knowledge in psychomotor skills and puts them into practice from early childhood education because psychomotor aspects will contribute positively to the student's learning [7]. Alves, Lussac, Fonseca and others [8][9][10] agree that psychomotor skills promote learning and the overall development of the child in a simplified and evolutionary way. Thus, they emphasize the importance of the teacher having training or specialization in psychomotor skills and acting as a facilitator, transmitting his knowledge to the students and putting psychomotor activities into practice from preschool, whilst also considering that the family is of enormous importance.
During early childhood education, educators observe different perceptual and motor possibilities: the identification of sensations, the global and partial knowledge of the body or different expressive possibilities of the body through psychomotricity [11]. Yáñez et al. [12] state that the confusion between this term and Physical Education is caused by the lack of knowledge of the current model of Physical Education and the ambiguity of the meaning of the term "psychomotricity". Another study, focused on the perception of teachers in this educational stage [13], reflects the reality of content with little educational weight, either because of the little importance given to early childhood education or psychomotor skills in society. Many of these teachers understand the importance of this content, despite the fact that in most schools these sessions are relegated to the background, with only one hour per week dedicated to them [14]. To this is added the belief that they do not have objective tools to acquire reliable information on the psychomotor development of their students [15], which is a fundamental aspect of the planning and achievement of objectives in this area [16], since the primary condition for promoting these psychomotor programmes is to allow teachers to determine their own needs and possibilities [17].
Similarly, teachers' training is a major factor in any of the areas that encompass education at any level [18]. Teacher education helps the enrichment of educational processes by immersing students in activities that provide meaningful and constructivist learning, resulting in a holistic and global stimulation for the children [19]. However, previous literature has already pointed out the deficit of content in the area of psychomotor skills during university teaching in early childhood education, as compared to those developed during a primary education degree [4,20]. In addition, postgraduate teacher training is perceived by students as deficient and of low quality [21], so these contents are normally implemented by specialists in physical education at primary school level [22].
Given all these benefits of psychomotor skills in populations of different ages and conditions, this study aims to evaluate the perceptions of early childhood education teachers regarding the needs and current state of psychomotor skills in schools in Extremadura. We also aim to compare the information provided by teachers working in rural centres with that provided by teachers working in urban settings. We chose this first educational stage because psychomotor skills are given importance in the educational curriculum and because we consider it a crucial period in the child's development. Based on previous research [23], we hypothesize that early childhood education teachers are not sufficiently prepared and do not receive adequate resources to teach the psychomotricity content established in the educational curriculum.
Participants
Two hundred and sixteen second-cycle pre-school education (3 to 6 years old) teachers (82.9% females and 17.1% males) from public schools in Extremadura were selected using a non-probabilistic convenience sampling method [24]. Regarding the location of the educational centres, 35.6% and 64.4% were rural and urban schools, respectively. The mean age of the sample was 43.94 (9.80) years, and their mean teaching experience was 18.08 (9.50) years. Table 1 shows the distribution of the participants according to sex, centre location, course taught, age and teaching experience.
Instruments and Measures
Sociodemographic data were obtained through a questionnaire that included five questions: gender, grade in which they teach, school environment, age and teaching experience.
The Questionnaire on Psychomotricity in the Educational Context (CPCE) [23] was used. This instrument is composed of a total of 19 items grouped into six dimensions: (1) Training (items 1 and 2), which measures the training received by the teacher, both previous and current, in child psychomotor skills; (2) Programming (items 3 to 6), which refers to the organization and approach to psychomotor activities, considering key aspects such as the characteristics and daily development of the students, the different motor contents or the perceptions of other teachers; (3) Material (items 7 to 10), which refers to the availability of material for psychomotor classes and its adequacy to the students' characteristics; (4) Personnel (items 11 and 12), regarding the competence of the teachers present in the centre to develop the psychomotor content in an adequate manner; (5) Contents (items 13 to 15), assessing whether the psychomotor teaching programme covers all relevant aspects of psychomotor skills; and (6) Sessions (items 16 to 19), referring to the characteristics of the psychomotor classes taught in the educational context that allow the psychomotor development of students. The instrument uses a 5-level Likert scale from 1, "totally disagree", to 5, "totally agree". This questionnaire was previously validated with early childhood education teachers in another region of Spain (Murcia) [23]. The reliability outcomes for each CPCE dimension, based on our data, were as follows: "training" = 0.75, "programming" = 0.83, "material" = 0.77, "personnel" = 0.85, "contents" = 0.79 and "sessions" = 0.83; these values are satisfactory according to the recommendations of Nunnally and Bernstein (1994), as all exceed 0.70 [25].
Procedures
The sociodemographic and CPCE questionnaires were prepared using the Google Forms application, which reduced costs and facilitated both the delivery of the questionnaires to participants and the storage of responses in a single database [26]. Data collection was carried out between September and December 2021.
To access the sample, we used the database of public schools in the Autonomous Community of Extremadura belonging to the Department of Education and Employment of the Regional Government of Extremadura and selected the contact details of the schools in which preschool education is taught.
After that, all the selected centres were contacted through an e-mail addressed to the Early Childhood Education teachers informing them about the study and indicating the URL for accessing the form and the informed consent.
Because the response rate was insufficient during the first month, the e-mail was re-sent and the centres were telephoned to inform them of the study and the procedures for collaborating in it. In this way, the sample was increased until the necessary data were obtained.
Statistical Analysis
Data analysis was performed with the Statistical Package for Social Sciences (SPSS) version 23.0 for Mac. The Kolmogorov-Smirnov test was used to analyse the normality and homogeneity of the data. The results indicated that the data were not normally distributed, so non-parametric tests were applied. Thus, the Mann-Whitney U test was used to analyse the relationships between the different CPCE items and dimensions according to the location of the centre, and Spearman's Rho correlation test was used to check the relationship between each dimension and teaching experience. Correlation thresholds were interpreted as follows [27]: 0.01-0.09, negligible; 0.20-0.29, weak; 0.30-0.39, moderate; 0.40-0.69, strong; and ≥0.70, very strong. Finally, Cronbach's alpha was used to calculate the reliability of each instrument dimension. The alpha level was set at p ≤ 0.05.
Results
Table 2 shows the associations between the different items of the CPCE based on the centre location. Overall, significant differences (p < 0.01) were observed between rural and urban centres, except in items 10, 11, 14 and 15, which referred to the importance of materials and the safety and space adaptation to children's characteristics, the presence of a psychomotor specialist at school and the general objective of psychomotor skills. Table 3 presents the scores of each CPCE dimension according to the centre location. Results showed significant differences between rural and urban centres in all dimensions (p < 0.01), except for "contents" (p = 0.48). Specifically, urban centre teachers reported higher scores in the "training", "programming", "material" and "sessions" dimensions than their counterparts from rural centres. Slightly higher outcomes in the "personnel" dimension were also observed in urban centre teachers compared to rural centre teachers. Table 4 shows the correlations between the dimensions and age. Overall, no significant associations were observed between age and the dimensions, except for the "programming" (rho = 0.27; p < 0.01) and "material" (rho = 0.28; p < 0.01) dimensions, which were weakly and positively associated with age.
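For readers who wish to reproduce this type of analysis, the sketch below illustrates the procedure described in the Statistical Analysis subsection (Mann-Whitney U comparisons by centre location, Spearman correlations with age, and Cronbach's alpha per dimension). It is a minimal illustration in Python, not the authors' code (the original analysis used SPSS); the DataFrame layout, column names and item-to-dimension mapping are assumptions based on the questionnaire description.

```python
import pandas as pd
from scipy import stats

# Assumed layout: one row per teacher, CPCE items in columns "item1".."item19",
# plus "location" ("rural"/"urban") and "age".
DIMENSIONS = {
    "training": ["item1", "item2"],
    "programming": ["item3", "item4", "item5", "item6"],
    "material": ["item7", "item8", "item9", "item10"],
    "personnel": ["item11", "item12"],
    "contents": ["item13", "item14", "item15"],
    "sessions": ["item16", "item17", "item18", "item19"],
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of Likert items (columns = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def analyse(df: pd.DataFrame) -> None:
    rural = df[df["location"] == "rural"]
    urban = df[df["location"] == "urban"]
    for name, items in DIMENSIONS.items():
        score = df[items].mean(axis=1)  # dimension score per teacher
        # Kolmogorov-Smirnov check against a fitted normal distribution
        ks = stats.kstest(score, "norm", args=(score.mean(), score.std(ddof=1)))
        # Rural vs urban comparison with the Mann-Whitney U test
        mw = stats.mannwhitneyu(rural[items].mean(axis=1),
                                urban[items].mean(axis=1),
                                alternative="two-sided")
        # Association with age via Spearman's rho
        rho, p_rho = stats.spearmanr(df["age"], score)
        alpha = cronbach_alpha(df[items])
        print(f"{name}: KS p={ks.pvalue:.3f}, Mann-Whitney p={mw.pvalue:.3f}, "
              f"rho={rho:.2f} (p={p_rho:.3f}), Cronbach alpha={alpha:.2f}")
```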
Discussion
This research arose from the need to find out whether psychomotor skills are really being given importance in early childhood education. For this purpose, the CPCE questionnaire was used to clarify whether teachers have received adequate training, carry out a correct programme, have the necessary personnel and material and use appropriate content in the planned number of sessions.
The main findings regarding the training received by teachers revealed that they had never, or almost never, received adequate training in psychomotor skills. These findings coincide with those of Díaz and Sosa [21], who reported that teachers lack the necessary training in the psychomotor field because of insufficient coverage of the subject during their degrees. Similarly, Solís-Picatto et al. [28] concluded that teachers do not have specific training in psychomotor skills, although these authors indicated that teachers are nevertheless able to carry out the sessions. Even so, this can lead to a routine of sessions without specific objectives that may harm the child's development. Other authors [29,30] found that teachers demanded training in psychomotor skills, as well as refresher and updating courses. When these findings are analysed by centre location, rural environments present lower scores than urban environments.
Concerning the programming dimension, our findings highlight that teachers in rural centres reported scores close to never or almost never, in contrast to urban centres, where scores were close to always or almost always. These results are similar to those of previous studies [31,32], in which teachers programmed their activities daily according to individual characteristics but did not follow a programme shared with other teachers. This highlights the need for psychomotor education to be integrated into the curriculum and programmed systematically like the other areas. Moreover, it is worth noting that previous literature on rural education revealed inadequacies in programming due to a lack of funding, including fewer specialists, unskilled employees, restricted resources and fewer programme options [33,34].
Regarding the personnel and material dimensions, scores were generally very low in both urban and rural centres. The need for a safe space and the lack of specialists were the only items for which we found no significant differences, whereas responses about the availability of materials and space and the need for adequate training differed by type of centre, reflecting the greater resources available in urban centres. Our findings coincide with those of other authors [35][36][37], some of whom report that teachers perceive a specialist qualification as necessary and stress the importance of adequately conditioned materials and spaces; they consider their absence the main reason for not carrying out psychomotor sessions, since these resources play a central role in students' motivation. The lack of infrastructure adapted for psychomotor work was also reported previously by Pons Rodríguez and Arufe-Giráldez [14].
In terms of content, most of the teachers surveyed agree that psychomotricity amounts to physical education in early childhood education and that its general objective is to develop motor and psychological skills; responses were similar regardless of the environment, except for the item defining psychomotricity as physical education. This conceptual-epistemological confusion between the terms physical education and psychomotricity has been documented previously in the literature [12].
Finally, regarding the sessions dimension, the adequacy of the number of sessions, their duration and the working methodology in psychomotor classes were rated similarly overall, although there were differences between rural centres, where the number and duration of sessions were almost never considered adequate, and urban centres, where they tended to be more adequate. This could be explained by the finding of Alonso-Álvarez and Pazos-Couto [31], who highlighted that psychomotor skills are given little importance because little classroom work is devoted to them. It is therefore necessary to reconsider how the time dedicated to psychomotor practice is spent and distributed, all the more so given that previous research has shown that distributed practice is more effective for developing psychomotor skills than massed practice [38][39][40], i.e., short, frequent practice sessions are more effective than practice concentrated over a long period.
To sum up, urban and rural centres differ in all dimensions except contents. The differences always run in the same direction: rural centres face greater difficulties than urban centres in having suitable materials and facilities, trained personnel, adequate content and adequate sessions, as previously reported [41,42]. Looking at the correlations between the dimensions and age, significant associations appear only for the programming and material dimensions, so the scores on these dimensions vary with the age of the respondents, whereas the remaining dimensions do not. This may be because more experienced educators are better trained and feel more comfortable and motivated with psychomotricity content, and because older educators invest more in materials. However, future research is needed to clarify this point.
Considering all the above, from the teachers' point of view there are evident needs in terms of initial and ongoing training, as well as in the competence of the teachers themselves, to deliver adequate psychomotor sessions. Likewise, teachers working in rural centres report lower scores than those in urban environments, except in the last dimension of the questionnaire. Therefore, public and private organizations must determine and develop appropriate educational content to promote the training of teachers at any professional stage, providing them with useful tools so that society can enjoy the many benefits of this content. In the same sense, new strategies should be developed to improve the situation of rural centres, since they evidently lack resources for psychomotor content, which could be harmful given the lower number of stimuli and psychomotor opportunities in these less populated areas.
The main limitations of the present study are the scarcity of previous research on this topic, the subjective nature of the data and the limited generalizability, because the study focuses on a single region of Spain. Future studies should assess the impact of additional teacher training on children's development, considering that more psychomotor experience in preschool classes may enhance language development and other aspects of cognition, as reported in previous literature. The feasibility of modifying curriculum development should also be evaluated, highlighting the importance of play in learning and practising new skills beyond the health reasons for making exercise available to children, given the importance of play and skills at the preschool stage. Moreover, the inclusion of teachers from other regions and countries should be considered in order to compare different points of view, depending on the importance that each educational curriculum gives to psychomotricity.
Conclusions
The early childhood education teachers surveyed in the community of Extremadura agree that they do not have the necessary materials and facilities for teaching psychomotor skills, nor adequate staff or training. They also agree that they programme the classes and adapt them to the motor content and individual characteristics, but this occurs mainly in urban centres and is almost non-existent in rural environments. A drawback is that programming is carried out individually rather than in an organized manner among the teachers who share a cycle. The number of sessions is considered adequate by teachers in urban centres but deficient in rural centres.
The only dimension that receives high scores and with which teachers agree is the objectives of psychomotor skills, although many confuse psychomotricity with a type of physical education in early childhood education. The age of the teachers was not a determining factor in most of the responses, but it does influence the programming and the material and facilities dimensions.
Institutional Review Board Statement: Not applicable. The use of these data did not require approval from an accredited ethics committee, as they are not covered by data protection principles; they are non-identifiable, anonymous data collected through an anonymous survey of teachers. In addition, under Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of individuals with regard to the processing of personal data and on the free movement of such data (which entered into force on 25 May 2016 and has been compulsory since 25 May 2018), data protection principles do not need to be applied to anonymous information (i.e., information that does not relate to an identified or identifiable natural person, or to personal data rendered anonymous in such a manner that the data subject is not, or is no longer, identifiable). Consequently, the Regulation does not affect the processing of our information; even for statistical or research purposes, its use does not require the approval of an accredited ethics committee.
Informed Consent Statement: Written informed consent has been obtained from the patient(s) to publish this paper.
|
2022-08-14T15:16:25.899Z
|
2022-08-01T00:00:00.000
|
{
"year": 2022,
"sha1": "a0d22fa2442e2e9589c03f4a46442b3d4bc9265d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9067/9/8/1214/pdf?version=1660271188",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bdb1ff57d55008f96b99df19ec7d5ed9c1ae7135",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": []
}
|
119111770
|
pes2o/s2orc
|
v3-fos-license
|
Form factors of twist fields in the lattice Dirac theory
We study U(1) twist fields in a two-dimensional lattice theory of massive Dirac fermions. Factorized formulas for finite-lattice form factors of these fields are derived using elliptic parametrization of the spectral curve of the model, elliptic determinant identities and theta functional interpolation. We also investigate the thermodynamic and the infinite-volume scaling limit, where the corresponding expressions reduce to form factors of the exponential fields of the sine-Gordon model at the free-fermion point.
Introduction
It is a general property of two-dimensional quantum field theories that their symmetries give rise to new local fields, whose correlation functions are nontrivial even if the underlying theory is free. Two paradigmatic examples are given by the disorder variables in the Ising field theory [16,30] and U(1) twist fields in the massive Dirac theory [23,29,31], directly related to the exponential fields in the sine-Gordon model at the free-fermion point.
Correlation functions of twist fields in the Dirac theory, as well as in its generalizations to curved space and non-zero background magnetic field [6,7,18,27], are also interesting from the mathematical point of view. They satisfy nonlinear differential equations [1,8,29], which in the simplest cases can be solved in terms of Painlevé functions. The knowledge of the long- and short-distance behaviour of the two-point correlators provides solutions to nontrivial asymptotic and connection problems of Painlevé theory. Recently, it has also been observed [19] that such correlators coincide with the gap probabilities for the classical kernels arising in the representation theory of big groups [2].
The aim of this paper is to construct lattice analogs of U(1) twist fields in the Dirac model, satisfying the following properties: (i) they should be defined via the branching of lattice fermion fields, (ii) one should be able to calculate their form factors explicitly and (iii) these form factors should reproduce the known expressions in the scaling limit. Besides full control of the theory, such an integrable finite-lattice regularization can be used for investigations at non-zero temperature and for a mathematically sound derivation of the relative normalization of conformal and infrared asymptotics of the two-point correlator [5,21]. It may also be instrumental in going beyond the free-fermion point.
While the Ising field theory possesses a natural lattice regularization, only a few results are available in the Dirac case. First attempt to introduce twist fields on the infinite lattice was made in [25]. The corresponding definition was supported by the computation of the vacuum expectation value (reproducing the expected scaling dimension), and was further strengthened by the analysis of correlations at the critical point [26]. Another, seemingly unrelated definition was used in [5,20] to derive a number of determinant representations for the two-point function of lattice twist fields. The present work is devoted to the computation of their form factors, i. e. matrix elements of the field operators in the basis of transfer matrix eigenstates.
The paper is planned as follows. In Subsections 2.1 and 2.2, we introduce a one-parameter generalization of the lattice Dirac operator considered in [5,20] and explain the definition of twist fields in the functional integral framework. Transition to the operator formalism is performed in the next subsection. Using the coherent states approach, the transfer matrix and the twist field operator are written as exponentials of fermion bilinears, see formulas (9) and (11) below. Subsection 2.4 is devoted to the construction of multiparticle Fock states simultaneously diagonalizing the transfer matrix and the operator of translations.
In Subsection 3.1, it is explained how form factors of twist fields can be found from the linear transformations relating fermionic creation-annihilation operators of different periodicity. In particular, the vacuum expectation value and two-particle form factors are expressed in terms of two square matrices C and D of dimension equal to the lattice size. Essentially, one needs to compute the quantities D^{−1}, D^{−1}C and det D, cf. (25)–(32). This task is solved in Subsection 3.2 by first noting that in the elliptic parametrization of the spectral curve of the model C and D are given (up to diagonal factors) by elliptic Cauchy matrices, and then using the Frobenius determinant identity and theta functional interpolation along the lines of [13]. The corresponding expressions are further simplified in Subsection 3.3. Finite-lattice two-particle form factors are given by (42), (44) and (45), and the multiparticle ones have the factorized form (46), (49). These formulas represent the main result of the paper. In Subsection 3.4 we analyze the thermodynamic (infinite-lattice) limit. The final answer has a remarkably simple expression in terms of the Jacobi theta functions, see (54)–(56). We remark that the vacuum expectation value (54) reproduces the earlier result of [25]. The field theory limit is considered in Subsection 3.5. It is shown that the scaled form factors coincide with those of the exponential fields of the sine-Gordon model at the free-fermion point [1,23,31]. We conclude with a brief discussion of results and open problems.
Fermions
Let ψ, ψ̄ denote two 2-component Grassmann fields on an M × N square lattice. Consider the standard free-fermion action S[ψ, ψ̄] = ψ̄Dψ, where the lattice Dirac operator is chosen as Here ∇_x, ∇_y denote the shifts by one lattice site in the horizontal and vertical directions, so that e.g. ∇_x ψ_{x,y} = ψ_{x+1,y}, ∇_y ψ_{x,y} = ψ_{x,y+1}. The boundary conditions with respect to x and y are antiperiodic and α-periodic, respectively. This means that i . It will be assumed in the following that K*_x < K_y. The Dirac operator considered in [5,20] is obtained from (1) by setting K_x = K_y.
Fermion propagator can be found using Fourier transform. One obtains where cosh γ θ = c * x c y − s * x s y cos θ.
Twist fields
It is instructive to start with an example. Choose a closed path P on the dual lattice (Fig. 1) and make the transformation ψ → e^{2πiν}ψ, ψ̄ → e^{−2πiν}ψ̄ with ν ∈ R at all lattice sites inside this contour. Because of the global U(1)-symmetry, the action S[ψ, ψ̄] will change only at the edges intersected by P.
If the corresponding changes are made along an open path P_AB joining two points A and B on the dual lattice, the resulting functional integral will depend on the positions of these points and on the homotopy class of the path, but not on its shape. One way to choose P_AB is shown in Fig. 2. In this case, the action is modified by where δS_A, δS_B correspond to the vertical segments and δS_b.c. to the horizontal one. Explicitly, with α′ = α + ν. Without any loss of generality, we assume that 0 ≤ α, α′ < 1. Twist fields live on the dual lattice. Their two-point correlation function is defined as the normalized partition function of the Dirac theory with the defect contribution (4): Note that the effect of δS_b.c. amounts to the change of the vertical boundary conditions for fermions on the horizontal interval [x, x′ − 1] from α- to α′-periodic ones. More generally, the appearance of the twist field O_{α,α′}(A) in an arbitrary correlation function means that
• the vertical boundary conditions for fermions to the left and right of A are α- and α′-periodic, respectively;
• the term δS_A should be added to the action.
In the heuristic continuum limit, this corresponds to integrating over field configurations having counterclockwise monodromy e 2πiν of ψ (resp. e −2πiν forψ) around A.
Coherent states and the operator formalism
Locality of twist fields becomes manifest in the operator formalism, which also provides a convenient framework for the computation of correlation functions. Let us introduce two sets of fermionic creation-annihilation operators satisfying canonical anticommutation relations with all other anticommutators vanishing. Define in the usual way the vacuum vectors vac| and |vac , normalized as vac|vac = 1, and the corresponding 2 2N -dimensional Fock space F . Further, introduce the coherent states where {ψ i x,y }, {ψ i x,y } denote Grassmann variables anticommuting with all creationannihilation operators. These states satisfy the following standard properties: • For y = 0, . . . , N − 1 one has x,y . • The scalar product of two coherent states is given by • The identity operator can be represented as a 4N-fold Grassmann integral • The trace of any operator O can be written as an integral of its matrix element in the basis of coherent states with a Gaussian kernel: x can be obtained by writing O in normally ordered form, making therein the replacements x ′ ,y , and multiplying the result by (6).
Now consider the operator
where it is understood that Rewrite the quantity Z = Tr V_α^M using at the first stage (8) with x = N − 1, x′ = 0 to calculate the trace, and inserting M − 1 resolutions of unity (7) (with x = k − 1 and x′ = k between the k-th and (k + 1)-th factor of V_α). Then, computing matrix elements of V_α in the basis of coherent states, the reader may easily check that Z coincides with the partition function of the Dirac theory described by (1).
In fact V α is the transfer matrix characterizing discrete time evolution of twist fields. Vertical defects of the action associated to them divide the horizontal axis into intervals.
The evolution in different intervals is governed by the matrices V_α with appropriate values of α. Twist fields are represented by the operators where y* = y − 1/2. This can be seen by noticing that and repeating the procedure used above for the computation of Z. For example, the two-point correlator (5) can be expressed as The problem of effective calculation of correlation functions of twist fields therefore reduces to the computation of form factors of the operator (11) between the eigenstates of V_α and V_{α′}. Observe that e.g. O_{α,α′}(0*) is given by the identity operator on F. However, its form factors are nontrivial since the corresponding bra and ket states diagonalize different transfer matrices.
Transfer matrix diagonalization
Define Fourier transforms of the creation-annihilation operators: where θ belongs to the set θ_α = 2π The only nonvanishing anticommutators are given by where Λ(θ) is a Hermitian matrix with unit determinant, explicitly given by It can be brought to the diagonal form Λ(θ) = U(θ) diag(e^{−γ_θ}, e^{γ_θ}) U^†(θ) by a unitary transformation. Here γ_θ is defined as the positive solution of (3), and the are the eigenvectors of Λ(θ) normalized so that The freedom in the choice of the phase can be used to set where and Note that under the above conventions one has β < α < 1. The root function in (13) is taken on the principal branch.
A new set of the creation-annihilation operators the other anticommutators being equal to zero) and diagonalizes the transfer matrix V α : Introduce the vacua α vac| and |vac α annihilated by all {c † θ }, {d † θ } and {c θ }, {d θ }, respectively, and normalized as α vac|vac α = 1. Left and right eigenvectors of V α are then given by the multiparticle Fock states and the corresponding eigenvalue is equal to cy The states (14)-(15) simultaneously diagonalize the U(1)-charge and the translation operator, The latter satisfies, e. g., under boundary conditions (10).
It is useful to note that the twist field (11) is U(1)-neutral and coincides with the identity operator twisted by translations of different periodicity: where θ, φ ⊂ θ α and θ ′ , φ ′ ⊂ θ α ′ . Such scalar products normalized by the product of the vacua will be denoted by
General setting
Let us first recall a few results from [11,13,28]. Consider two sets of 2L fermionic creation-annihilation operators generating equivalent Fock representations in the same space. Denote the corresponding 2 L × 2 L matrices by {ψ i }, {ψ † i } and {ϕ i }, {ϕ † i } with i = 1, . . . , L and combine them into L-columns ψ, ψ † , ϕ, ϕ † . Suppose there exists a unitary operator σ such that where A, B, C, D are some L × L matrices. The unitarity of σ and canonical anticommutation relations imply that B =C, A =D and Suppose that D is invertible. Then, up to inessential phase factor related to the choice of the vacua, one has General matrix elements of σ between Fock states of different types can be expressed as where the entries of the blocks of (m + n) × (m + n) skew-symmetric matrix R are given by the normalized two-particle form factors In the case of interest here, σ is the identity operator and L = 2N. The creation-annihilation operators in each set are labeled by their U(1)-charges and the corresponding momenta. Thus e. g. ψ and ϕ are given by the 2N-columns c d built from the operators c θ , d θ with θ ∈ θ α and θ ∈ θ α ′ , respectively. A, B, C, D can therefore be seen as block 2 × 2 matrices with block entries indexed by θ ∈ θ α ′ , θ ′ ∈ θ α . To find their explicit form, note that for θ ∈ θ α ′ one has where U(θ) is defined by (12). Therefore, introducing the notation Λ θ,θ ′ = e i(πν+θ)/2 δ θ,θ ′ , θ, θ ′ ∈ θ α , we find that This in turn implies that the vacuum expectation value of twist field and its nonzero two-particle form factors are given by One also has although this is not immediately obvious from (28)-(29). This reduces our task to the computation of determinant and inverse of D and of the product D −1 C. The relations (23)- (24) imply that the elements of C and D remain invariant if one multiplies U(θ) from the left by a unitary matrix independent of θ. Together with (12), this gives The matrices C and D are therefore very simply related to the ones appearing in the Ising model theory, see Lemma 3.3 in [13]; the main and almost only difference between the two cases is the change in the spectrum of quasimomenta. We will now follow [13] to establish elliptic representations for C and D, which then will be used to calculate det D, D −1 and D −1 C.
Elliptic parametrization
with η ∈ (−K′/2, 0) determined by sinh 2K_x = i sn 2iη, satisfy the relation (3) written in the form The formulas (33)–(34) bijectively map the real interval C_u = {u | Re u ∈ [−K, K), Im u = 0} to C_θ = {(z, λ) = (e^{iθ}, e^{γ_θ}) | θ ∈ [0, 2π)}. The inverse image of the point (e^{iθ}, e^{γ_θ}) ∈ C_θ in C_u will be denoted by u_θ. Note that for θ ∈ (0, π] one has u_θ = −u_{2π−θ}. It is also useful to define the function x_θ = πu_θ/(2K), which continuously increases from −π/2 to π/2 when θ varies from 0 to 2π. Lemma 4.1 in [13] shows that under the above parametrization matrix elements (31)–(32) can be written as We will also need the Jacobi theta functions ϑ_{1...4}(z) of nome q = e^{iπτ}, which are related to the elliptic modulus and half-periods by where ϑ_i = ϑ_i(0) for i = 2, 3, 4. The quantities det D and D^{−1} can be computed in terms of these functions using the Frobenius determinant identity, while D^{−1}C can be found from a theta functional analog of the Lagrange interpolation formula (see Section 5 of [13] for the details of a similar cumbersome calculation). The result is as follows: with X_α = Σ_{θ∈θ_α} x_θ. In the next subsection, the answer (37)–(39) is rewritten in a somewhat different form, which turns out to be more suitable for the analysis of the thermodynamic limit and for the computation of multiparticle form factors.
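As a concrete reference point for these quantities, the standard relations between the Jacobi theta constants ϑ_i = ϑ_i(0), the elliptic modulus k and the quarter-period K (namely k = ϑ_2²/ϑ_3², k′ = ϑ_4²/ϑ_3², K = (π/2)ϑ_3² and q = e^{−πK′/K}) can be checked numerically. The sketch below, written with mpmath and an arbitrarily chosen nome, is purely illustrative and is not taken from the paper.

```python
from mpmath import mp

mp.dps = 25
q = mp.exp(-mp.pi * mp.mpf("0.8"))       # nome, chosen so that K'/K = 0.8

th2, th3, th4 = (mp.jtheta(n, 0, q) for n in (2, 3, 4))
k = (th2 / th3) ** 2                     # elliptic modulus k
kp = (th4 / th3) ** 2                    # complementary modulus k'
K = mp.pi / 2 * th3 ** 2                 # quarter-period K

print(abs(k**2 + kp**2 - 1))             # ~1e-25 (Jacobi identity theta2^4 + theta4^4 = theta3^4)
print(abs(mp.ellipk(k**2) - K))          # ~1e-25 (complete elliptic integral with parameter m = k^2)
print(abs(mp.ellipk(kp**2) / K - mp.mpf("0.8")))  # recovers K'/K = 0.8
```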
VEV and form factors
We illustrate the procedure by transforming the expression for (D −1 C) θ,θ ′ . First rewrite the second line of (39) as where the functions G ± θ are defined by .
The remaining products of sinh's can be combined into the function
There exists a simple combination of G ± θ and H θ independent of θ, namely with G π = G ± π . The identity (41) can be proven using the standard addition formulas for theta functions, their relation to the elliptic functions and the formula (4.8) in [13], relating elliptic and trigonometric parametrization.
The determinant of R in (46) can be evaluated in closed form using the Frobenius identity. Given 2L indeterminates z_1, . . . , z_L and z′_1, . . . , z′_L, it expresses the determinant of the elliptic Cauchy matrix Ω with elements in the product form x_{θ′_{j−n}} + πτ/2 for j = n + 1, . . . , n + m′, one can check that Ω coincides with R up to diagonal matrix factors. It then follows from (48) that where p = 2 for m − m′ ∈ 2Z + 1 and p = 3 for m − m′ ∈ 2Z. Together with (46), this gives our main result: a completely explicit factorized formula for any multiparticle matrix element of the twist field.
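To make the role of the Frobenius identity concrete, the sketch below numerically verifies one standard form of the elliptic Cauchy determinant identity, det[ϑ_1(λ + x_i − y_j)/(ϑ_1(λ)ϑ_1(x_i − y_j))] = [ϑ_1(λ + Σ_i(x_i − y_i))/ϑ_1(λ)] · Π_{i<j}ϑ_1(x_i − x_j)ϑ_1(y_j − y_i) / Π_{i,j}ϑ_1(x_i − y_j), for a generic elliptic Cauchy matrix. The parameter values, and the identification with the specific matrix R of the text, are illustrative assumptions rather than a reproduction of the paper's formulas.

```python
from mpmath import mp

mp.dps = 30                                   # working precision
q = mp.mpf("0.3")                             # elliptic nome, arbitrary choice with |q| < 1
th1 = lambda z: mp.jtheta(1, z, q)            # Jacobi theta_1(z, q)

L = 4
x = [mp.mpf("0.31") * (k + 1) for k in range(L)]                   # sample points x_i
y = [mp.mpf("0.17") * (k + 1) + mp.mpf("0.05") for k in range(L)]  # sample points y_j
lam = mp.mpf("0.23")                                               # shift parameter

# Elliptic Cauchy matrix: Omega_ij = theta1(lam + x_i - y_j) / (theta1(lam) * theta1(x_i - y_j))
Omega = mp.matrix(L, L)
for i in range(L):
    for j in range(L):
        Omega[i, j] = th1(lam + x[i] - y[j]) / (th1(lam) * th1(x[i] - y[j]))
lhs = mp.det(Omega)

# Right-hand side of the Frobenius identity
S = sum(x) - sum(y)
rhs = th1(lam + S) / th1(lam)
for i in range(L):
    for j in range(L):
        if i < j:
            rhs *= th1(x[i] - x[j]) * th1(y[j] - y[i])
        rhs /= th1(x[i] - y[j])

print(abs(lhs - rhs) / abs(rhs))              # should be ~1e-28: both sides agree
```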
Thermodynamic limit
Form factors found above simplify in the thermodynamic limit N → ∞. To explain these simplifications, consider e.g. the expression (44) for the vacuum expectation value (25). We need to evaluate the asymptotics of (i) X_{α′} − X_α and (ii) η_θ. Both quantities can be written in the same form where the function f_{θ′} is defined for θ′ ∈ [0, 2π]. In case (i) one has f_{θ′} = x_{θ′}, while in case (ii) f_{θ′} is given by The most important distinction between the two situations is that in the first case one has f_{2π} − f_0 = π, whereas (51) extends to a continuous 2π-periodic function on the real line.
As N → ∞, one has the estimate In the same way one shows that Rigorous mathematical proofs of (52)–(53) and explicit error estimates can be obtained using a kind of Sommerfeld-Watson transform of the sums (50). It then follows from (25), (44) that the thermodynamic limit of the vacuum expectation value of the twist field is given by the simple expression where the arguments of the theta functions and the half-period ratio are now indicated explicitly for further convenience. As one could expect on general grounds, the r.h.s. of (54) depends on α, α′ only via the difference ν = α′ − α. Note that this relation has the same form as the first formula on p. 187 of [25]. Particle quasimomenta in the thermodynamic limit uniformly fill the interval [0, 2π]. Summation over each of them in the form factor expansions of correlation functions transforms into integration: (1/N) Σ_θ → (1/2π) ∫_0^{2π} dθ. Although this naive procedure is plagued by the appearance of annihilation poles in the crossed form factors, it suggests to consider The asymptotics (52) implies that the limiting two-particle form factors are given by The limit of the multiparticle form factors can be found in a similar way: it suffices to remove the functions η_θ from (46) and to replace X_{α′} − X_α in (49) by πν.
Scaling limit
The gap in the energy spectrum closes as γ 0 → 0. Since γ 0 = 2(K y − K * x ), this corresponds to k → 1 (or, equivalently, τ → i0 or q → 1) in terms of elliptic parameters. Also note that In order to obtain the asymptotics of the vacuum expectation value (54) in the vicinity of the critical point, recall the modular transformation and the product formula Rewrite the theta functions in (54) using (61). The nome of the transformed functions vanishes as k → 1. The representation (62) therefore implies that lim τ →i0 This in turn can be used together with (60) to show that, as k → 1, The second approximation is in fact exact, cf. (57). As the r.h.s. of (54) is periodic in ν with period 1, the critical asymptotics of the vacuum expectation value for any ν can be deduced from (63). Note that the scaling dimension of the twist field for |ν| ≤ 1 2 has the expected value ν 2 .
In the vicinity of the critical point, our initial lattice model becomes equivalent to a field theory of free massive Dirac fermions. Correlation functions of twist fields in this theory are determined by momenta at the scale of inverse correlation length (for convenience, we change the domain of definition of lattice momenta from [0, 2π] to [−π, π]).
More formally, denote ε = 1 − k 2 2c y , set θ = ε sinh ξ and let k → 1. The dispersion relation (3) then implies that γ θ → εs y cosh ξ. One also has Normalized scaled two-particle form factors of twist fields are determined by where the variables ξ, ξ ′ have the meaning of particle rapidities. To compute the corresponding limits, one can adopt the same approach as in the above asymptotic analysis of the VEV. Transform the theta functions in (55)-(56) using Jacobi's imaginary transformations, rewrite the result using product formulas, and then let τ → i0. For |ν| < 1 2 , one finds . This reproduces two-particle (and hence all) form factors of the continuum twist fields in the massive Dirac theory [1,23,29,31], which correspond to the exponential fields in the sine-Gordon model at the free-fermion point. Likewise, it can be deduced from (58)-(59) that for ν = ± 1 2 one has The fact that F 1 2 −0 = F − 1 2 +0 can be understood as follows. Monodromy conditions for fermion fields in the continuum give a system of integral equations for the two-particle form factors [1]. This system has a unique admissible solution for |ν| < 1 2 , and two solutions for ν = ± 1 2 . Each of the two solutions gives rise to a twist field operator. Any linear combination of these operators leads to the required fermion branching.
Concluding remarks
Finite-lattice form factors in the conventional free-fermion models (triangular Ising lattice, XY quantum spin chain, BBS 2 model) can be obtained [12,14] from those of the Ising spin on the square lattice [3,4,9,10,13]. The fields considered in this paper are more general; in particular, their scaling dimension continuously depends on a real parameter ν. We compute their form factors explicitly in terms of the Jacobi theta functions and show that in the scaling limit they reduce to form factors of the exponential fields of the sine-Gordon model at the free-fermion point.
A determinantal form of form factors is also encountered in some of the interacting integrable models, see e.g. Slavnov's formula for scalar products of Bethe states in the spin-1/2 XXZ chain [17,32]. An intriguing question is therefore whether it is possible to go beyond the free-fermion point and calculate form factors of twist fields in the integrable lattice regularization of the massive Thirring model [22] (related to the eight-vertex statistical model). We believe that the present work makes a step in this direction.
Another challenge is to complete form factor derivation for Z N -symmetric superintegrable chiral Potts quantum chain. It was recently shown that they have Ising form up to unknown scalar factors labeled by the pairs of the so-called Onsager sectors [15]. Putative free-fermion part of the integrable structure of this model is yet to be elucidated.
|
2011-08-16T17:28:14.000Z
|
2011-08-16T00:00:00.000
|
{
"year": 2011,
"sha1": "10fb523030454011555044331488b886c4c4f51c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1108.3290",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "10fb523030454011555044331488b886c4c4f51c",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
}
|
16936830
|
pes2o/s2orc
|
v3-fos-license
|
HIV coinfection influences the inflammatory response but not the outcome of cerebral malaria in Malawian children
Summary Objectives Study of the effect of HIV on disease progression in heterogeneous severe malaria syndromes with imprecise diagnostic criteria has led to varying results. Characteristic retinopathy refines cerebral malaria (CM) diagnosis, enabling more precise exploration of the hypothesis that HIV decreases the cytokine response in CM, leading to higher parasite density and a poor outcome. Methods We retrospectively reviewed data on clinical progression and laboratory parameters in 877 retinopathy-positive CM cases admitted 1996–2011 (14.4% HIV-infected) to a large hospital in Malawi. Admission plasma levels of TNF, interleukin-10, and soluble intercellular adhesion molecule (sICAM-1) were measured by ELISA in 135 retinopathy-positive CM cases. Results HIV-infected CM cases had lower median plasma levels of TNF (p = 0.008), interleukin-10 (p = 0.045) and sICAM-1 (p = 0.04) than HIV-uninfected cases. Although HIV-infected children were older and more likely to have co-morbidities, HIV-status did not significantly affect parasite density (p = 0.90) or outcome (24.8% infected, vs. 18.5% uninfected; p = 0.13). Conclusion In this well-characterised CM cohort, HIV-coinfection was associated with marked blunting of the inflammatory response but did not affect parasite density or outcome. These data highlight the complex influence of HIV on severe malaria and bring into question systemic inflammation as a primary driver of pathogenesis in human CM.
Introduction
In sub-Saharan Africa over 3 million children are infected with the Human Immunodeficiency Virus (HIV). 1 There are in excess of 100 million cases of Plasmodium falciparum infection per year, leading to approximately half a million deaths, mainly in African children. While the overlap between the two diseases is considerable, with many malaria infections occurring in HIV-positive children, 2 determining the effect of HIV on the severity and outcome of malaria has been problematic, leading to variable and apparently contradictory results. 3-6 Some studies have found increased parasite density, an association with more severe malaria and worse outcome, and others have not (see Table 1 for a summary of published literature). We propose that, at least in part, the use of insufficiently stringent diagnostic criteria for cerebral malaria (CM) could have led to misclassification of cases and therefore variability in the associations identified.
CM is a prominent severe malaria syndrome defined by the WHO as unrousable coma (Blantyre Coma Score 7 ≤ 2) in the presence of P. falciparum parasitaemia, with no other cause of coma found. 8,9 In the absence of additional criteria this clinical definition leads to over-diagnosis of CM, leaving uncertainty as to whether coma is truly caused by parasitaemia or whether a person has an uncomplicated malaria infection and coma due to another aetiology. This is particularly problematic in high transmission settings where a high proportion of apparently well children in the community are parasitaemic. This was highlighted by a study at our centre in Malawi where a quarter of children diagnosed as having WHO-defined CM were found to have a non-malaria cause of coma and death at autopsy in the context of a peripheral parasitaemia. 9 This mis-classification may be exacerbated by HIV co-infection, which may increase the risk of other non-malarial co-morbidities causing coma and thus confound the ability to detect true associations between HIV, CM and outcome (e.g. peripheral parasite density, the inflammatory response or mortality).
Characteristic retinal changes that are indicative of sequestration of P. falciparum-infected red blood cells (iRBC) in the neurovasculature 10 distinguish with high specificity and sensitivity those children with histological evidence of CM, from those with a non-malarial coma. 9 In order to re-examine the impact of HIV on CM, we have therefore used this refined diagnosis to classify a large cohort of Malawian children with CM, with and without HIV co-infection. Following the observation that peripheral blood mononuclear cells from HIV-infected individuals have impaired tumour necrosis factor-alpha (TNF) and interleukin 10 (IL-10) production in vitro in response to iRBC challenge, 11 we addressed the specific hypothesis that HIV-infection results in lower levels of systemic TNF and IL-10 in CM in vivo and that this is associated with a higher peripheral parasite density and a higher mortality.
Methods
Location
This study was conducted at Queen Elizabeth Central Hospital (QECH), Blantyre, Malawi. In 2010 HIV prevalence in pregnant women in this region was 18% and overall seroprevalence in Malawian children less than 14 years old was estimated to be 2.7%. 12 Malaria transmission in rural communities around Blantyre occurs year-round, peaking during the rainy season (November-June).
Children diagnosed with HIV were followed up in paediatric HIV clinics, received daily preventive cotrimoxazole and, from 2001 and when eligible, combination antiretroviral therapy ([ART] lamivudine, stavudine and nevirapine; Triomune, Cipla). Routine CD4 quantification and WHO staging were introduced in 2006.
Patients
As part of a longstanding clinico-pathological study of CM in Blantyre, 13 Malawian children aged 6 months to 12 years presenting to QECH with clinical CM were recruited and managed on a paediatric research facility during consecutive rainy seasons from February 1996 to June 2011.
Management
Patients with CM were treated with intravenous quinine for at least 24 h and then switched to oral drugs (sulphadoxine-pyrimethamine pre-2007 or lumefantrine-artemether). Ward rounds by experienced clinicians were conducted twice daily.
From 2001 all patients whose HIV status was unknown were tested for HIV after a parent or legal guardian gave consent. Prior to 2001 HIV tests were conducted retrospectively on stored samples. In fatal cases where HIV-status was unknown, it was done posthumously. Ethical approval was obtained for this retrospective and posthumous testing.
Blood collection and diagnostic tests
Venous blood was collected on admission. Plasma was stored at −80 °C for ELISA tests. A full blood count (Coulter Counter, Becton Dickinson, New Jersey), blood culture (BACTEC 9120, Becton Dickinson) and thick and thin blood smears (Field staining) were performed on all patients. Peripheral parasite density was calculated using the patients' individual full blood count. HIV testing was performed with two rapid tests, Determine (Abbott Laboratories, Green Oaks, IL) and Unigold (Trinity Biotech PLC, Bray, Ireland). A third test was used to resolve discrepancies (Capillus, Trinity Biotech). For patients <18 months HIV status was determined by PCR (Amplicore, Roche, Pleasanton, CA). Unless contraindicated, a lumbar puncture was performed. Patients with visibly cloudy CSF were excluded from the analysis.
ELISA tests
We determined HRP2 levels from stored plasma of a subset of patients, including all patients admitted in 2009, for a previous study. 14 For patients admitted in 2010 and 2011, TNF, IL-10 and sICAM-1 were determined from stored plasma using commercial ELISA kits (R&D, Minneapolis; DY210, DY217B and DY720).
Statistical analysis
Analysis was performed using Stata software (Version 10.0, StataCorp, Texas, USA). Non-normally distributed continuous variables were compared using the Mann-Whitney U test and summarized using medians and interquartile ranges. Associations between categorical variables were assessed using Fisher's Exact test. The Cox proportional hazards model was used to analyse time to death by HIV status, and hazard ratios with 95% confidence intervals (CI) are reported. The relationship between mortality and baseline variables was assessed using odds ratios (OR). The baseline variables of interest in this assessment were lactate levels, gender, HIV status and age. A logistic regression model was fitted to obtain unadjusted and adjusted OR for these four baseline variables. The OR, associated 95% confidence intervals and p-values are reported. Graphical summaries are also presented for the variables of interest, including Kaplan-Meier plots for the time-to-event data, and histograms and dot plots for summarising continuous data between groups. All tests were considered statistically significant at the 5% significance level.
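As a rough illustration of the analyses just described, the sketch below sets up an unadjusted Cox proportional hazards model for time to death by HIV status and a logistic regression of mortality on the four baseline variables. It is written in Python (lifelines and statsmodels) purely for illustration; the original analysis was performed in Stata, and the DataFrame column names here are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

# Assumed columns: "time_to_death_h" (follow-up time), "died" (0/1 event),
# "hiv_positive", "lactate", "male", "age_months".

def cox_by_hiv(df: pd.DataFrame) -> pd.DataFrame:
    """Unadjusted Cox proportional hazards model: hazard of death by HIV status."""
    cph = CoxPHFitter()
    cph.fit(df[["time_to_death_h", "died", "hiv_positive"]],
            duration_col="time_to_death_h", event_col="died")
    return cph.summary  # exp(coef) is the hazard ratio, with 95% CI and p-value

def logistic_mortality(df: pd.DataFrame) -> pd.DataFrame:
    """Logistic regression of mortality on lactate, sex, HIV status and age."""
    X = sm.add_constant(df[["lactate", "male", "hiv_positive", "age_months"]].astype(float))
    fit = sm.Logit(df["died"].astype(float), X).fit(disp=False)
    ci = fit.conf_int()
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": np.exp(ci[0]),
                         "CI_high": np.exp(ci[1]),
                         "p": fit.pvalues})
```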
Other key symptomatic features of illness prior to presentation to hospital, including vital observations (respiratory rate, pulse, blood pressure) and the duration of presenting features (fever, convulsions, coma), were similar between HIV-infected and uninfected children (Table 2). The number of children who had received either oral or parenteral antimalarial treatment (chloroquine, sulphadoxine-pyrimethamine, lumefantrine-artemether or quinine) before arrival at QECH was also not affected by HIV status (HIV-infected, 29.7%; HIV-uninfected, 29.0%; p = 0.91). Characteristics of retinopathy negative cases are discussed below.
Baseline laboratory findings on admission
Geometric mean peripheral parasite densities were similar between HIV-infected (45,059 parasites/ml, 95% CI 28,098-72,258) and HIV-uninfected children (40,195 parasites/ml, 95% CI 32,771-49,301; p = 0.68, Table 3), as were geometric mean HRP2 concentrations between the subset of 139 HIV-uninfected (1268 ng/ml, 95% CI 1002-1604) and 24 HIV-infected children (946 ng/ml, 95% CI 393-2279; p = 0.39; Table 3 and Fig. 3) in whom HRP2 levels were measured. The geometric mean and 95% CI for peripheral parasite density of this subset in whom HRP2 was measured was similar to the overall cohort, suggesting it was representative of the cohort as a whole (Fig. 4). Other laboratory parameters were similar between HIV-infected and HIV-uninfected children (Table 3). Laboratory characteristics of retinopathy negative cases are discussed below. to fever clearance (the time in hours from admission until the last recorded temperature >37.5 °C; IQR 20.0-74.0 and 20.0-54.0; p = 0.0689).
Disease progression and outcome
The unadjusted hazard ratio for fatal outcome was not significant (HIV-infected 24.8%, HIV-uninfected 18.5%; hazard ratio 1.104; 95% CI 0.645-1.887; p = 0.13). Among retinopathy positive children this remained non-significant when analysis was restricted to children <5 years old (27.2% and 18.1%; p = 0.07) or children <3 years old (25% and 12.8%; p = 0.08). There was also no difference in survival profile between the HIV-infected and uninfected children (p = 0.720, Supplemental Fig. 1).
In view of significant changes to the management of HIV and malaria (e.g. the roll out of ART for HIV in 2007 and lumefantrine-artemether for uncomplicated malaria in 2012) and progression of the HIV epidemic in Malawi, 12,15 we analysed the data in five-year periods (1996-2000; 2001-2005; 2006-2011), looking at patient characteristics, clinical course and mortality (Supplementary Table 2). Each period gave similar results to the overall cohort.
Within a logistic regression model adjusting for age (in months) and sex, among the retinopathy positive children, lactate was the only independent predictor of mortality (OR 1.10 per mmol increase, 95% CI 1.04-1.16, p < 0.001; Supplemental Tables 2 and 3).
Plasma cytokine levels in HIV-infected and uninfected retinopathy positive CM patients
Sufficient plasma was available to measure sICAM-1 for 107 HIV-uninfected and 12 HIV-infected retinopathy positive children. sICAM-1 levels were significantly lower in the HIV-infected (median 350 ng/mL; IQR 289-437 ng/mL) than in the HIV-uninfected children (median 563 ng/mL; IQR 330-841 ng/mL; p = 0.04; Fig. 4D).
Discussion
We have used a large cohort of well-characterized patients and a stringent definition of CM to explore the effect of HIV on CM. It is likely that the lack of a systemic inflammatory response in HIV-positive children is at least in part due to impaired CD4 T-cell function. Peripheral blood mononuclear cells from HIV-infected adults have decreased production of TNF (a T-helper 1 cytokine) in response to challenge with P. falciparum in vitro. 11 Therefore, by abrogating the cytokine response to malaria infection in HIV-infected individuals, HIV has provided a 'natural experiment', shedding light on the role of the systemic cytokine response in CM pathogenesis. With regard to the pro-inflammatory T-helper 1 response, it has long been postulated that pro-inflammatory cytokines, particularly TNF, may be a double-edged sword in malaria outcome. On the one hand, TNF may play a critical role in the immune control of overall parasite burden, which may be an important determinant of disease severity and outcome. 16 On the other hand, high levels of TNF and a cytokine storm have been postulated to be critical in the development and outcome of severe malaria and CM. 16,17 Here we show that HIV-infected children have retinopathy-positive CM with clinical features, peripheral parasite density, HRP2 levels and outcome similar to those of HIV-uninfected children with retinopathy-positive CM, despite a markedly blunted cytokine response. These findings imply that substantially raised systemic TNF levels and a cytokine storm are not necessary for the development of CM. Hochman et al. found a higher level of platelet and monocyte accumulation, in association with sites of iRBC sequestration, in histologic sections of cerebral vessels from HIV-infected cases compared to HIV-uninfected cases. 18 Given the localised nature of these pathologies and given our data indicating a lack of significant systemic inflammation in HIV-infected children, these histopathological findings suggest that specific interactions between iRBC and either the endothelium itself or other host cells in close proximity may be important in disease pathogenesis.
Examining the genes expressed by iRBC sequestered in the brain, Tembo et al. demonstrated that different var genes were expressed between HIV-infected and HIV-uninfected children. 19 Var genes control the surface proteins expressed on iRBC and thereby the host endothelial receptors with which they bind and interact. Taken together these findings indicate that the local histological differences observed by Hochman and colleagues between HIV-infected and uninfected children may reflect differences in the nature of the iRBC-endothelial interaction. What factors lead to different var gene expression in HIV and how this affects the iRBC-host cell interaction remains to be determined but elucidating this may shed further light on CM pathogenesis in both HIV-infected and uninfected children.
The lack of a significant difference in mortality rate between HIV-infected and uninfected CM cases here seems to contradict a recent publication that found a higher mortality in HIV-infected children admitted to our facility. 18 The principal difference between the analyses is that the earlier publication used a purely clinical definition of CM, whereas we used retinopathy status to improve specificity and hence only include cases for which coma is more likely to be caused by malaria. 18 By including all cases, whether true retinopathy-positive CM or not, the earlier study had a slightly larger sample size, and we cannot exclude that this may have increased the statistical power to detect a significant mortality difference. However, it is also possible that the mortality difference associated with HIV in the earlier analysis is unrelated to any effect of HIV on CM but is instead due to confounding factors, in particular that HIV is highly associated with death from other comorbidities, such as bacterial infection. The effect of such confounders is likely to be stronger in an analysis that, due to lack of diagnostic specificity, includes a significant proportion of children whose coma is not caused by malaria. Different specificities of the clinical case definitions and different rates of co-morbidities may also be important in explaining the different and apparently inconsistent effects of HIV on mortality reported in previous severe malaria studies.
Although our data are derived from a large cohort of CM patients, our study was limited by the relatively small number and small volume of plasma samples available. We were therefore only able to measure one Th1 and one Th2 cytokine on presentation to hospital. Serial blood samples may have provided a more complete picture. We also did not undertake long-term follow up to reliably determine whether HIV affects the risk of subtle or slowly developing neurodevelopmental sequelae following CM.
In conclusion, when CM is defined precisely in African children, HIV has a marked impact on the cytokine response but little effect on either parasite density or the clinical course of CM. Taken with other recent studies these data point towards local iRBC-associated effects rather than systemic inflammation as the primary driver of pathogenesis in human CM.
Two-Layered Scaffolds (Loofah/PLLA/Cellulose/Chitin) for Repair of Osteochondral Defect
Research in tissue engineering and regenerative medicine continues to develop advanced materials that better mimic the architecture and functional properties of native tissues. Treatment of osteochondral injuries with scaffolds faces the problem of fixing and integrating the engineered tissue with the surrounding tissue. Therefore, tissue-engineered osteochondral graft design must be directed not only at the injured cartilage but also at the subchondral bone, to achieve sufficient osteochondral repair and integration of the neo-cartilage into the osseous surroundings. In this study, we produced a bilayer scaffold and investigated the ability of co-cultures of chondrocytes and osteoblasts to repair articular cartilage in osteochondral defects. For this purpose, a fibrin-glued loofah+PLLA+cellulose scaffold with MG-63 cells and a loofah+PLLA+chitin scaffold with SW-1353 cells were used to promote bone and cartilage regeneration, respectively. Viability tests and morphology images indicated that this bilayer scaffold had good affinity for osteoblasts and chondrocytes, encouraging their growth, proliferation and attachment. Histological and immunohistochemical staining analyses confirmed that the loofah bilayer scaffolds provided good support for the cells. Based on these preliminary in vitro results, we suggest that the integrated bilayer scaffold consisting of loofah+PLLA+cellulose and loofah+PLLA+chitin has potential use in repairing osteochondral defects, either upon cellular implantation and/or in acellular form. Citation: Cecen B, Kozaci LD, Yuksel M, Kara A, Ersoy N, et al. (2017) Two Layered Scaffolds (Loofah/PLLA/Cellulose/Chitin) for Repair of Osteochondral Defect. J Tissue Sci Eng 8: 210. doi: 10.4172/2157-7552.1000210
Introduction
Osteochondral defects, which involve both cartilage and subchondral bone, have meagre regeneration capacity because of the different mechanical properties, composition and biological structure of each tissue. The mismatch in mechanical strength between newly formed articular cartilage and the natural tissue may create adverse effects, leading to further degeneration of both the repaired and the adjacent native tissue, with a decline toward osteoarthritic conditions. A recent development of tissue engineering in orthopedic research has been the design of combinations of autologous cells, adhesion-promoting proteins and osteoconductive materials in order to generate osteoinductive constructs [1].
Critical parameters for tissue engineering scaffolds are biocompatibility, biodegradability, optimum mechanical resistance and support of the appropriate cellular activities [2,3]. 3D porous membranes provide a microenvironment within the scaffold that contains pores large enough (10 µm) for living cells to move throughout the membrane; this porosity allows nutrient uptake and removal of cellular waste products [4].
Several materials, together with cells, have been suggested for bone and cartilage regeneration, including ceramics, hydroxyapatite-based materials or ECM derivatives, as well as natural and synthetic polymeric materials [5,6]. Cellulose fibers, linear homopolymers of glucose (C6H10O5)n with n ranging from 500 to 5000, are one of these potential reinforcing materials [7,8]. They display good mechanical properties and low weight, offer natural advantages such as biodegradability, low cost and renewability, and are abundant in nature.
PLLA is a non-toxic, biodegradable material commonly used in building tissue scaffolds. It has a dense and smooth surface morphology suitable for osteoblast cultures [9]. However, PLLA has a few obvious weaknesses as a scaffold material: rapid biodegradation, acidic decomposition by-products and hydrophobicity [10].
In this study, we aimed to examine the characteristics of these two natural materials in combination with loofah as bi-layered scaffolds, and to determine the morphology, adhesion and proliferation capacities of chondrocytes and osteoblasts seeded on these scaffolds.
Scaffold preparation and characterization
Initially, loofah was soaked to swell it, washed with water, treated with NaOH (2 M) and dried. The loofah sponges were wetted and swollen because loofah contains its own yellowish juice; this first step replaced the juice with water.
Loofah+PLLA+chitin: Loofah was coated with 4% PLLA solution in chloroform and dried prior to soaking in 4% chitin (instead of cellulose). The liquid was then separated and the scaffold dried at 50°C. All scaffolds were sterilized with ethylene oxide at 90°C prior to use in tissue cultures. Afterward, the scaffolds were glued together with fibrin glue (Figure 1).
Cell culture
SW-1353 human chondrosarcoma cells and MG-63 human osteosarcoma cells purchased from the ATCC (Manassas, VA 20108, USA) were cultured in DMEM containing 10% FCS, L-glutamine (200 mM, G7513, Sigma) and Pen-Strep-Ampho (03-033-1B, Biological Industries) at 37°C in a humidified 5% CO2 atmosphere. SW-1353 and MG-63 cells were seeded on the previously fibrin-glued loofah+PLLA+chitin and loofah+PLLA+cellulose scaffolds (size: 4 × 4 × 3 mm (L × W × H); n=3), respectively, to form double layers (Figure 1). First, chondrocytes were seeded on the loofah+PLLA+chitin layer and incubated for 30 min at 37°C in a humidified 5% CO2 atmosphere to allow the cells to adhere. Then, osteoblasts were seeded on the loofah+PLLA+cellulose layer after turning the scaffolds upside down. To avoid cell loss, the cells were added and cultured carefully on either side of the scaffold, allowing them to adhere; the culture medium was then increased to cover the scaffold for optimum culture conditions. Cells were seeded at 1 × 10⁶ cells/mL on each scaffold layer, and cells on scaffolds were analyzed on culture days 3, 5 and 8.
Scanning electron microscopy (SEM)
After the culture period, the scaffolds were washed three times with PBS before analysis. They were then fixed in sodium cacodylate buffer (0.1 M, pH 7.2) containing 5% glutaraldehyde, 7% sucrose and 2% osmium tetroxide. After dehydration in a graded ethanol series, samples were sputter-coated with gold under vacuum (EMITECH K550X) and examined using FE-SEM (FEI Quanta 250 FEG) at 10 kV.
Energy dispersive X-ray spectrometry (EDS)
At the end of the culture period, scaffolds were evaluated by elemental analysis using an energy-dispersive spectroscopy (EDS) attachment on the SEM (Oxford INCA 300). The elemental composition of the scaffold materials was determined along with cell formation, and Ca and P concentrations were calculated.
Cellular viability assay (XTT) of cells
Cellular viability of the cells attached on scaffolds was measured by a commercially available XTT assay kit using a spectrophotometer (Cary 50 UV-Vis).
Lactate dehydrogenase (LDH) activity of cells
Cytotoxicity of the scaffolds was evaluated by measuring LDH activity in the culture medium with a commercially available colorimetric assay, using a spectrophotometer (Cary 50 UV-Vis).
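The text does not give the kit-specific calculation steps; as an illustration only, a common way of converting raw absorbance readings from the XTT and LDH assays described above into relative viability and percent cytotoxicity is sketched below, assuming blank, untreated-control and maximum-release wells are available (all values are invented).

```python
# Illustrative only: one common way to normalise raw absorbance readings from
# XTT (viability) and LDH (cytotoxicity) assays. The kit-specific formulas used
# in this study are not stated in the text, so these functions are assumptions.

def relative_viability(a_sample: float, a_blank: float, a_control: float) -> float:
    """XTT: viability relative to an untreated control, in percent."""
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

def percent_cytotoxicity(a_sample: float, a_low: float, a_high: float) -> float:
    """LDH: cytotoxicity relative to spontaneous (low) and maximum (high) release controls."""
    return 100.0 * (a_sample - a_low) / (a_high - a_low)

if __name__ == "__main__":
    print(relative_viability(a_sample=0.82, a_blank=0.05, a_control=0.74))  # >100 suggests proliferation
    print(percent_cytotoxicity(a_sample=0.31, a_low=0.12, a_high=1.05))     # roughly 20% cytotoxicity
```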
Analysis of ALP activity of cells
Osteoblastic activity of the bone cells was analyzed with a commercially available alkaline phosphatase activity kit. After a 3 min incubation at 37°C, ALP activity in the conditioned medium was measured spectrophotometrically at 405 nm (Cary 50 UV-Vis) according to the manufacturer's recommendations.
Histologic staining
Histological examinations were performed on cell-seeded scaffolds to determine extracellular matrix (ECM) accumulation and morphological changes in the cells. Hematoxylin and eosin (H&E; for total cellularity), Masson's trichrome (for collagen organization), alizarin red (for calcium deposits), alcian blue and toluidine blue (for proteoglycans) stainings and immunohistochemistry (for type I and II collagens and for chondrocyte and osteoblast phenotype identification) were performed.
The samples were kept in 10% formalin for 48-72 h and then embedded in paraffin blocks, from which 5 µm thick sections were taken. Some slides were stained with H&E (01562E, Surgipath, Bretton, Peterborough, Cambridgeshire) and others with Masson's trichrome (2049 GBL, Istanbul, Turkey). De-paraffinized slides were stained with alizarin red solution (ECM815, Chemicon, Germany) for 2 min after rehydration. Excess dye was washed away with acetone-xylene (1:1) solution. Slides were cleared in xylene and mounted in a synthetic mounting medium. Calcium deposits stained orange-red.
Similarly, after de-paraffinization and rehydration, some slides were stained with alcian blue (8GX, Merck, Germany) for 30 min, washed with tap water for 2 min, dehydrated through 95% alcohol and two changes of absolute alcohol (3 min each), cleared in xylol and mounted. Other slides were stained with toluidine blue (C152040, Merck, Germany) for 2 min, washed with distilled water, kept in 96% alcohol, cleared in xylene and mounted.
Immunohistochemistry staining
Immunohistochemistry analyses were performed with collagen type I (Bioss bs0578-R) and collagen type II (Abcam, ab34712) antibodies. Sections were deparaffinized at 60°C in an incubator, cleared in xylol three times and digested with 0.25% (w/v) trypsin for 15 min at 37°C. The sections were washed with PBS and treated with blocking solution (TA-125-UB, Invitrogen, Fremont, CA) for 30 min. After overnight incubation with the primary antibody at 4°C, the sections were washed again with PBS and incubated for 20 min with anti-mouse biotin-streptavidin horseradish peroxidase secondary antibodies (Invitrogen Plus Broad Spectrum 85-9043). The signal was developed using DAB (Roche, Germany). Finally, the sections were stained with Mayer's hematoxylin and examined by light microscopy.
Statistical Analysis
The data from the XTT, ALP and LDH experiments were analyzed using parametric repeated-measures ANOVA and the non-parametric Kruskal-Wallis test; p<0.05 was considered statistically significant. Statistical analyses were carried out using the Statistical Package for the Social Sciences (SPSS), version 20.
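As a minimal sketch only (the study used SPSS), the non-parametric comparison of viability across culture days could be expressed in Python as follows, with invented absorbance readings standing in for the real data.

```python
# Minimal sketch (not the original SPSS analysis) of the non-parametric
# Kruskal-Wallis comparison of viability readings across culture days,
# using invented absorbance values for days 3, 5 and 8.
from scipy import stats

day3 = [0.41, 0.39, 0.44]
day5 = [0.55, 0.58, 0.52]
day8 = [0.74, 0.79, 0.71]

h, p = stats.kruskal(day3, day5, day8)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}, significant at 0.05: {p < 0.05}")
```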
Morphology of co-cultured cells seeded on scaffolds
The morphology of the loofah+PLLA+cellulose and loofah+PLLA+chitin scaffolds, cultivated with MG-63 and SW-1353 cells respectively, was characterized by SEM analysis (Figure 2). Cells were observed to adhere to the bilayer scaffolds and to maintain their globular and spindle-shaped forms. Furthermore, newly produced extracellular matrix was visible on the scaffolds.
EDS of loofah based scaffolds
Loofah-based scaffolds were characterized by EDS to show the presence of carbon (C), oxygen (O) and nitrogen (N). Owing to the main structure of the loofah sponge and its cellulose coating, mainly C and O were observed; these are also key elements of PLLA. N, a key element of chitin, was also detected (Figure 2).
Viability of cells in 3D cultures (XTT Assay)
The viability of the cells was not influenced by the scaffold structure, and cell numbers continued to increase in a time-dependent manner during the culture period (Figure 3A). The XTT assay showed that cell numbers on day 8 were significantly higher than on days 3 and 5 (p<0.05), and the differences in cell viability between culture days 3, 5 and 8 were statistically significant (Figure 3A; p=0.030, p=0.002, p=0.010).
Cytotoxicity in 3D cell cultures (LDH activity)
LDH levels released from the cells increased slightly during the culture period, and the differences in LDH activity between groups were statistically significant (p<0.05). The increased LDH activity on day 5 might result from release of the PLLA polymer, which can be toxic during its degradation (Figure 3B).
ALP activity in 3D cells cultures
ALP activity increased significantly during the 8-day assay period (p<0.05) and was highest in the conditioned media on day 8. A slight decrease in ALP levels on day 5 was followed by an increase on day 8 (Figure 3C). This finding might reflect a temporary decrease in the osteoblastic activity of MG-63 cells due to the toxicity of the degrading polymer.
TEM analysis
TEM was performed to examine the osteoblast and chondrocyte interactions. Ultrastructural morphology showed that the cells developed bud-like extensions and were in contact with each other (Figure 7).
Discussion
Cell therapy and tissue engineering are a promising alternative to artificial permanent implants for the repair of injured tissue. As a widely accepted discipline in the biological and medical sciences, bone and cartilage tissue engineering has achieved successful results in growing human bone and cartilage tissues. This study presents a loofah-based bilayer scaffold for chondrocyte and osteoblast co-cultures to mimic and repair osteochondral defects. Previously, Cecen et al. [6] reported the biocompatibility and biomechanical characteristics of loofah-based scaffolds and suggested the possible use of these constructs in bone and cartilage tissue engineering.
The bio-characteristics of implant materials depend largely on their chemical composition, composite surface characteristics and porosity. Our first reason for using loofah sponge was to obtain a biocompatible, naturally porous material. This study is a follow-up of a previous publication showing no toxic effects of the loofah matrix on chondrocytes, in which the cellulosic structure of loofah was intended especially for use with cartilage. The purpose of cartilage tissue engineering is to design biodegradable scaffolds with a porous texture that allows nutrients and waste products to diffuse.
On the other hand, bone graft materials are required to be not only biocompatible but also biodegradable, osteoprotective and osteoinductive. As a matter of principle, osteochondral graft design should be directed not only at the injured cartilage but also at the subchondral bone, so that osteochondral repair is sufficient and the neo-cartilage is integrated into the osseous surroundings [22].
Biocompatibility is one of the important factors in the adherence of cells to the surface of a biomaterial. Cell adherence is determined experimentally using morphological and biomechanical approaches. According to our EDS analyses of the scaffolds, the C, O and N elements that make up the main structures of PLLA, cellulose and chitin were present as expected (Figure 2). The morphology of the scaffold surface is also an important factor: while the porosity of the scaffold material mimics the microstructure of cancellous bone or cartilage, the surface properties of biomaterials play a major role in cellular interactions such as cell adhesion, infiltration and proliferation [23,24]. In the present study, morphological analyses demonstrated that osteoblasts and chondrocytes adhered to the surface and moved within the pores of the bilayer loofah+PLLA+cellulose and loofah+PLLA+chitin scaffolds. The loofah sponge does not have a classical porous structure; rather, it is fibrous in nature. In our system, after five days of culture, chondrocytes were observed to spread along the scaffolds. Cells were heterogeneously distributed on and in the bilayer scaffolds, penetrating from the seeded side to the opposite side so that they could pass through to the adjacent scaffold. In the bilayer scaffolds, cell growth within extracellular matrix was observed at all time points (Figure 2). Our results show that this novel loofah construct may provide an opportunity for cells to proliferate and penetrate within fiber-oriented structures.
The viability analyses on day 8 showed that the cells were viable and continued to proliferate (Figure 3A). Sung et al. reported that fast degradation of a scaffold might negatively affect cell viability and migration into the scaffold in vitro and in vivo [25,26]. Indeed, we observed that LDH activity increased slightly on day 5 (Figure 3B), which might be due to the release of PLLA degradation products, which can be toxic. This result is in agreement with a study by Marques et al., which also reported that starch-based polymers had a higher degree of cytotoxicity [20]. Nonetheless, our results suggest that loofah degrades more slowly because of its cellulose content, thereby minimizing the negative effects on cell viability of polymer release from the degrading scaffold. These results demonstrate that, from the standpoint of materials science, no single factor on its own determines cellular viability and adhesion [24]. After day 5 there was a decrease in LDH levels, suggesting that the cells might recover from the toxic effects of the degrading polymers if kept in culture longer.
ALP activity indicated the osteoblastic phenotype of the cultured cells. ALP levels demonstrated that cell proliferation on the scaffolds improved significantly over the eight-day experimental period, suggesting that the loofah+PLLA+cellulose scaffolds are suitable for osteoblast seeding and growth. The slight decrease in ALP activity on day 5 may again be explained by the early degradation of PLLA causing temporary toxicity.
Heterogeneous round-to-elongated osteoblast- or chondrocyte-like cells were observed in various sections of the bilayer scaffold constructs on days 3, 5 and 8 with H&E staining (Figures 4A-4C). The cells lay in close vicinity to the collagen structures. Masson's trichrome staining indicated increased collagen organization in the scaffolds (Figures 4D-4F). We believe that this staining reflects collagen newly produced by the cultivated cells; the production of these new collagen fibers might have been triggered by fibrin glue absorbed during the culture period. The edges of the loofah+PLLA+chitin scaffolds stained red with Masson's trichrome owing to the presence of keratin in the chitin of the scaffold structures. Dense alizarin red staining in the bilayer scaffolds also supported the presence of osteoblasts in the cultures. At the same time, the presence of chondrocytes in the bilayer scaffold constructs was confirmed by alcian blue, with accumulations of chondrocytes observed as pink staining. Additionally, deep blue and pink staining in the scaffolds indicated chondrogenesis: cells tended to aggregate and to be surrounded by cartilaginous ECM. These results indicated GAG production in the loofah-based bilayer scaffolds (Figure 4). The histological data indicated that both the cartilage and the bone sections of the scaffold exhibited homogeneous cell distribution and matrix formation.
Immunohistochemical staining distinguished the collagen types produced by the cells in the bilayer scaffolds. Chondrocytes on the loofah+PLLA+chitin side of the scaffolds produced more type II than type I collagen, whereas on the loofah+PLLA+cellulose side type I collagen staining was more prominent. This finding supports the co-existence of chondrocytes and osteoblasts in the bilayer scaffold cultures, each keeping its individual phenotypic characteristics (Figure 6). TEM analysis confirmed the presence of cells in the loofah-based scaffolds, and the histological results were consistent with the TEM investigations. Some cells were more rounded whereas others were more fusiform in shape (Figure 7); we consider the fusiform cells to be chondrocyte-like, while the round ones were more likely osteoblasts. TEM images showed that the cells kept their normal morphology, with clear nuclei and nucleoli. Moreover, the cells formed aggregates, as observed in normal tissues in vivo, and made contact with each other via bud-like formations. However, further surface marker analyses are required to determine the exact localization of the different cell types in the co-cultures.
Loofah can be considered for the design of double-layer osteochondral structures because of its non-toxic, cellulosic structure and its resistance to dissolution. This is the first study in the literature showing that the cultivation of chondrocytes and osteoblasts on loofah-based scaffolds has potential as a novel approach for repairing articular defects in cartilage tissue engineering.
Immunogenicity of reduced dose priming schedules of serogroup C meningococcal conjugate vaccine followed by booster at 12 months in infants: open label randomised controlled trial
Objective: To determine whether the immunogenicity of a single dose infant priming schedule of serogroup C meningococcal (MenC) conjugate vaccine is non-inferior to a two dose priming schedule when followed by a booster dose at age 12 months.
Design: Phase IV open label randomised controlled trial carried out from July 2010 until August 2013.
Setting: Four centres in the United Kingdom and one centre in Malta.
Participants: Healthy infants aged 6-12 weeks followed up until age 24 months.
Interventions: In the priming phase of the trial 509 infants were randomised in a 10:10:7:4 ratio into four groups to receive either a single MenC-cross reacting material 197 (CRM) dose at 3 months; two doses of MenC-CRM at 3 and 4 months; a single MenC-polysaccharide-tetanus toxoid (TT) dose at 3 months; or no MenC doses, respectively. Haemophilus influenzae type b (Hib)-MenC-TT vaccine was administered to all infants at 12 months of age. All infants also received the nationally routinely recommended vaccines. Blood samples were taken at age 5, 12, 13, and 24 months.
Main outcome measure: MenC serum bactericidal antibody assay with rabbit complement (rSBA) one month after the Hib-MenC-TT vaccine. Non-inferiority was met if the lower 95% confidence limit of the difference in the mean log10 MenC rSBA between the single dose MenC-CRM and the two dose MenC-CRM groups was >−0.35.
Results: The primary objective was met: after a Hib-MenC-TT booster dose at 12 months of age the MenC rSBA geometric mean titres induced in infants primed with a single MenC-CRM dose were not inferior to those induced in participants primed with two MenC-CRM doses in infancy (660 (95% confidence interval 498 to 876) v 295 (220 to 398)), with a corresponding difference in the mean log10 MenC rSBA of 0.35 (0.17 to 0.53) that showed superiority of the single over the two dose schedule. Exploration of differences between the priming schedules showed that one month after Hib-MenC-TT vaccination, MenC rSBA ≥1:8 was observed in >96% of participants previously primed with any of the MenC vaccine schedules in infancy and in 83% of those who were not vaccinated against MenC in infancy. The MenC rSBA geometric mean titres induced by the Hib-MenC-TT boost were significantly higher in children who were primed with one rather than two MenC-CRM doses in infancy. Only priming with MenC-TT, however, induced robust MenC bactericidal antibody after the Hib-MenC-TT booster that persisted until 24 months of age.
Conclusions: MenC vaccination programmes with two MenC infant priming doses could be reduced to a single priming dose without reducing post-boost antibody titres. When followed by a Hib-MenC-TT booster dose, infant priming with a single MenC-TT vaccine dose induces a more robust antibody response than one or two infant doses of MenC-CRM. Bactericidal antibody induced by a single Hib-MenC-TT conjugate vaccine dose at 12 months of age (that is, a toddler only schedule), without infant priming, is not well sustained at 24 months. Because of rapid waning of MenC antibody, programmes using toddler only schedules will still need to rely on herd protection to protect infants and young children.
Trial registration: Eudract No: 2009-016579-31; NCT01129518; study ID: 2008_06 (http://clinicaltrials.gov).
Introduction
Control of invasive meningococcal C (MenC) disease has been achieved in the United Kingdom, the Netherlands, Canada, and Australia, where MenC conjugate vaccines have been introduced in the routine national childhood immunisation programmes. 1 Three licensed meningococcal conjugate vaccines are in use: two MenC polysaccharide protein conjugate preparations utilising the cross reacting material 197 (MenC-CRM), a non-toxic mutant of diphtheria toxoid (Menjugate, Novartis Vaccines and Diagnostics, Siena, Italy; and Meningitec, currently marketed by Nuron Biotech, Schaffhausen, Switzerland), and one MenC polysaccharide-tetanus toxoid (MenC-TT) formulation (NeisVac-C; currently marketed by Pfizer, New York). The impact of MenC vaccination was evident within a few years, despite differences between countries in MenC vaccination schedules, which include whether or not an infant "priming" dose is used, the number of these priming doses, and whether to accompany introduction with a mass catch up MenC immunisation campaign for older age groups. 2 Starting in 1999 the UK was the first country to launch routine MenC conjugate vaccination for all infants concurrently with staggered catch-up vaccination of individuals aged 1-25 years up until 2002. 3 Since then the MenC schedule has been changed twice. In 2006, the original three dose MenC infant priming schedule administered at 2, 3, and 4 months of age was reduced to a 3 and 4 month schedule with the addition of a Hib-MenC-TT boost at 12 months of age. 3 This decision was based on results of clinical trials that showed that the serological threshold of MenC serum bactericidal antibody assay with rabbit complement (rSBA) of ≥1:8, accepted to be protective against invasive meningococcal disease, 4 was observed in >98% of vaccinated children after two infant MenC priming doses 5 6 and on estimates of rapid waning of MenC vaccine effectiveness after infant vaccination. 7 Subsequently in 2013, MenC infant priming was further reduced to a single dose at 3 months, with retention of the 12 month old booster and the introduction of another booster dose at age 13-14 years to sustain MenC immunity through adolescence. 8
The rationale for a single MenC conjugate vaccine dose in infancy is based on limited data from studies that looked at the immunogenicity after the first 9 10 or a single priming MenC vaccine dose at 2 months of age, 11 or the effect a single priming dose of different MenC glycoconjugate formulations had on the immunogenicity of a Hib-MenC-TT booster dose administered to children aged 12 months. 12 No randomised controlled studies have directly compared the effect on the immunogenicity of a Hib-MenC-TT booster after different reduced MenC infant immunisation schedules. We investigated such differences and assessed the corresponding persistence of MenC bactericidal antibody at 24 months of age.
What is already known on this topic
Different prime and boost schedules for MenC glycoconjugate vaccines are effective in controlling MenC disease. Infant protection induced by a single MenC glycoconjugate vaccine dose at 2 or 3 months of age is similar to that induced by two or three doses.
What this study adds
Increasing the number of MenC-CRM doses in infancy reduces the subsequent immune response to a MenC glycoconjugate booster. When boosting with Hib-MenC-TT vaccine, priming with a single MenC-TT dose in infancy, rather than MenC-CRM, induces more robust bactericidal antibodies that are still persistent at 24 months of age. Protection provided by just one MenC glycoconjugate dose at 12 months of age is not sustained and will rely on herd immunity.
Participants and recruitment
We enrolled 509 healthy infants, born at 37-42 weeks' gestation and aged between 6-12 weeks, in a phase IV open labelled randomised controlled trial carried out between 5 July 2010 and 1 August 2013 in four centres in the UK (Oxford, Bristol, London, and Southampton) and one centre in Malta. An invitation letter was sent to the parents of all children due for their routine immunisations, and parents who expressed an interest for their child to participate in the study were called to ensure eligibility. Eligible infants were then asked to visit Mater Dei Hospital in Malta or were seen in their homes in the UK. The study was divided into three phases: the primary vaccination phase (from 2-5 months of age); the booster phase (from 12-13 months of age), and the persistence phase (at 24 months of age).
Exclusion criteria included known immunosuppression, a family history of immunodeficiency, administration of blood products, previous vaccination (except with the BCG, hepatitis B, and rotavirus vaccines for the primary phase, and with the combined diphtheria, tetanus, acellular pertussis, inactivated polio, and Haemophilus influenzae type b (DTaP-IPV-Hib), and the 13 valent pneumococcal conjugate vaccines used in the primary phase as well as the hepatitis A, influenza, and varicella-zoster vaccines for the booster phase), previous infection with MenC, allergic reactions to any vaccine components, a history of seizures or any neurological disorder, and severe acute/chronic illness at the time of enrolment.
Visits and vaccines
In the primary vaccination phase, infants were randomised to receive a single MenC-CRM dose at 3 months, two MenC-CRM doses at 3 and 4 months, or a single MenC-TT dose (NeisVac-C) at 3 months of age, while infants in the control group did not receive any MenC vaccine priming doses. All infants were vaccinated with a combined DTaP-IPV-Hib vaccine (Pediacel, Sanofi Pasteur MSD, Lyon, France) at 2, 3, and 4 months of age and with the 13 valent pneumococcal conjugate vaccine (PCV13, Prevenar 13, Pfizer, New York) at 2 and 4 months of age. In the booster phase, infants in all groups were vaccinated with the Hib-MenC-TT vaccine (Menitorix, GlaxoSmithKline Biologicals, Rixensart, Belgium) as well as the routine PCV13 vaccine at 12 months of age. All infants received the combined measles, mumps, and rubella (MMR) vaccine at 13 months. Blood samples were obtained for serologic assays at 5, 12, 13, and 24 months of age (table 1). Participants in each study site were randomised according to a computer generated list produced by the Oxford Vaccine Group in Oxford. Stata version 10.0 was used to generate the randomisation codes with permuted block size of 30 and stratification by centre. Allocation concealment until the point of enrolment was achieved through the use of opaque sealed envelopes. Study staff and parents of participants were not masked to group allocation after enrolment.
Serologic assays
Meningococcal serogroup C antibody was measured as described by Maslanka and colleagues 13 using a MenC rSBA targeting the Neisseria meningitidis C11 (C:16:P1.7-1,1) strain and baby rabbit serum as the complement source (Pel-Freeze Incorporated, Rodgerson, AZ). MenC rSBA titres were expressed as the reciprocal of the final serum dilution giving ≥50% killing at 60 minutes. An rSBA threshold of ≥1:8 was taken as indicative of protection. 4 A threshold of ≥1:128 was also included as a more conservative protective threshold. 14 MenC rSBA assays were carried out at the Vaccine Evaluation Unit, Public Health England, Manchester Laboratory, Manchester Royal Infirmary, Manchester, UK.
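As a toy illustration of this read-out (not trial data), the titre can be derived from a dilution series of kill percentages as follows.

```python
# Toy illustration of the rSBA read-out: the titre is the reciprocal of the final
# (highest) serum dilution that still gives >=50% killing at 60 minutes.
# The kill percentages below are invented for demonstration.
def rsba_titre(kill_by_dilution: dict, threshold: float = 50.0) -> int:
    """kill_by_dilution maps reciprocal dilutions (8 means 1:8) to percent killing."""
    protective = [d for d, kill in kill_by_dilution.items() if kill >= threshold]
    return max(protective) if protective else 0  # 0 = below the assay's detection limit

series = {4: 98.0, 8: 92.5, 16: 80.1, 32: 61.3, 64: 47.8, 128: 30.2}
print(rsba_titre(series))  # -> 32, i.e. an rSBA titre of 1:32
```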
Safety evaluation
After each immunisation, vaccinated infants were observed for 15 minutes for any immediate reactions. Parents completed a diary card for five days from the day of immunisation and recorded all local and systemic adverse events. Any solicited or unsolicited adverse events as well as serious adverse events occurring from the day of vaccination to the subsequent visit were noted.
Statistical analysis
Our primary objective was to show non-inferiority of the MenC geometric mean titres one month after the 12 month dose of the Hib-MenC-TT vaccine between the single infant dose MenC-CRM and the two infant dose MenC-CRM groups. The geometric mean titre expresses the mean (calculated on log transformed data) back in the original units and gives a meaningful expression of central tendency of the antibody response in a study population. Geometric mean titres (and their 95% confidence intervals) were calculated by taking the antilog of the mean (and 95% confidence interval) log 10 transformed MenC rSBA titres. Titres <4 (the lower limit of detection of the assay) were given an arbitrary value of 2 to be able to log 10 transform these values to conduct the analysis. Non-inferiority was met if the lower 95% confidence limit of the difference in the mean log 10 MenC rSBA between the single infant dose MenC-CRM group minus the two infant dose MenC group was >−0.35 (equivalent to a non-inferiority margin of >−10%) at one month after Hib-MenC-TT vaccination. This margin was derived from published data measured one month after a MenAC polysaccharide vaccine challenge was administered to infants aged 12 months who had been primed with MenC-TT at 2 and 4 months of age. 11 Based on this margin of −0.35, with at least 160 participants enrolled in each of the single infant dose MenC-CRM and two infant dose MenC-CRM groups, the power to show the primary objective at 2.5% one sided level of significance was 90% (allowing for a 12.5% dropout rate). Sample size calculations for the additional two arms of the study resulted in an unusual 10:10:7:4 allocation ratio. Details of all calculations are included in appendix table A. The study design was not intended to compare reactogenicity rates between the different schedules and so no sample size calculation was carried out for these secondary outcomes.
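A minimal sketch of this primary calculation, using hypothetical titres rather than trial data, is shown below: titres are log10-transformed (substituting 2 for values below the detection limit), group means are compared, GMTs are recovered as antilogs, and the lower 95% confidence limit of the difference is checked against −0.35.

```python
# Sketch of the primary non-inferiority calculation on log10 rSBA titres
# (hypothetical titres; not trial data). Titres below the detection limit (<4)
# are replaced by 2 before log-transformation, as described in the methods.
import numpy as np
from scipy import stats

def log10_titres(titres):
    arr = np.array(titres, dtype=float)
    return np.log10(np.where(arr < 4, 2.0, arr))

one_dose = log10_titres([512, 1024, 256, 2048, 512, 128, 1024, 512])   # single MenC-CRM dose group
two_dose = log10_titres([256, 128, 512, 256, 64, 512, 128, 256])       # two MenC-CRM dose group

n1, n2 = len(one_dose), len(two_dose)
diff = one_dose.mean() - two_dose.mean()
pooled_var = ((n1 - 1) * one_dose.var(ddof=1) + (n2 - 1) * two_dose.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, n1 + n2 - 2)
lower, upper = diff - t_crit * se, diff + t_crit * se

print(f"GMT single dose = {10 ** one_dose.mean():.0f}, GMT two doses = {10 ** two_dose.mean():.0f}")
print(f"difference in mean log10 rSBA = {diff:.2f} (95% CI {lower:.2f} to {upper:.2f})")
print("non-inferior (lower limit > -0.35):", lower > -0.35)
```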
The analysis of the outcome variables was based on the intention to treat population. We also performed an analysis on the completers population to complement the intention to treat population analysis. Participants were included in the intention to treat population if they had at least one dose and at least one assessment after baseline and in the completers population if they received all vaccine doses and had all planned assessments. Models used in the analyses contained the terms dose group (four levels) and study centre (five levels).
We performed analysis of variance (ANOVA) of the log10-transformed rSBA titres at each blood sampling visit and present results as geometric mean titres with 95% confidence intervals. Binary variables were analysed with logistic regression, with results of a comparison between two levels of a factor reported as odds ratios (95% confidence intervals). P<0.05 was considered significant. We used Stata 13 and StatXact 9 for the immunogenicity analyses and SAS v9.3 for the safety analyses.
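The trial analyses were run in Stata, StatXact and SAS; as an illustrative sketch only, equivalent model specifications on a made-up data frame might look like the following in Python.

```python
# Illustrative model specifications only (the trial used Stata/StatXact/SAS):
# ANOVA of log10 rSBA titres and logistic regression for the >=1:8 threshold,
# each with dose group and study centre terms, on a made-up data frame.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["1xCRM", "2xCRM", "1xTT", "control"], size=200),
    "centre": rng.choice(["Oxford", "Bristol", "London", "Southampton", "Malta"], size=200),
    "log_titre": rng.normal(1.2, 0.8, size=200),
})
df["protected"] = (df["log_titre"] >= np.log10(8)).astype(int)

# ANOVA on log10-transformed titres with dose group and study centre as factors
ols_fit = smf.ols("log_titre ~ C(group) + C(centre)", data=df).fit()
print(sm.stats.anova_lm(ols_fit))

# Logistic regression; exponentiated coefficients are odds ratios with 95% CIs
logit_fit = smf.logit("protected ~ C(group) + C(centre)", data=df).fit(disp=0)
print(np.exp(logit_fit.params))
print(np.exp(logit_fit.conf_int()))
```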
Results
Out of the 509 infants enrolled in the study, 497 completed the primary immunisation phase, 478 the booster phase, and 453 the persistence phase (fig 1).
Demography
The mean age of the participants at enrolment (n=509) was 8.5 weeks (range 6.9-10.6), 51.7% (263) were boys, and 90.2% were white. At the booster and persistence phases the mean age of the infants was 12.5 months (11.9-13.6) and 24.1 months (22.1-27.3), respectively.
Immunogenicity
Although we planned to adjust for study centre, adjustment made no difference to the primary analysis or to any of the secondary analyses. All results presented are unadjusted.
Primary objective
Our primary objective was met, as the immunogenicity of one priming dose of MenC-CRM at 3 months was non-inferior (and in fact superior) to that of two MenC-CRM doses given at 3 and 4 months of age, when assessed at 13 months of age after the Hib-MenC-TT booster. One month after administration of the Hib-MenC-TT vaccine at 12 months of age, participants in the single infant dose and two infant dose MenC-CRM groups had MenC rSBA geometric mean titres of 660 (95% confidence interval 498 to 876) and 295 (220 to 398), respectively (fig 2 and appendix table B). These corresponded to a difference of 0.35 (0.17 to 0.53) in the mean log10 MenC rSBA between the single infant dose and two infant dose MenC-CRM groups. Therefore, as the lower 95% confidence limit was >−0.35 (that is, greater than the equivalent non-inferiority margin of −10%), the primary objective of the study was met. The immunogenicity of the Hib-MenC-TT vaccine after the single infant dose MenC-CRM schedule was actually superior to that after the two dose MenC-CRM infant priming regimen, as shown by the 95% confidence intervals of the difference, which did not cross 0; this was also reflected in significantly higher MenC rSBA geometric mean titres between these groups (fig 2 and appendix table B). Although after the Hib-MenC-TT boost we saw no significant differences in the percentage of infants with MenC rSBA ≥1:8 between a single infant priming dose of MenC-CRM and MenC-TT, the percentage of infants with MenC rSBA ≥1:128 (table 2) as well as the MenC rSBA geometric mean titres were significantly higher in those who had been primed with MenC-TT (fig 2).
Persistence phase
Twelve months after Hib-MenC-TT vaccination, only 19.7%, 30.9%, and 27.3% of those who had been primed with two MenC-CRM doses, one MenC-CRM dose, or who had not been primed at all in infancy, respectively, still had MenC rSBA ≥1:8. We found no significant differences in the percentage of infants with MenC rSBA ≥1:8 or in MenC rSBA geometric mean titres between the single or two infant dose MenC-CRM groups compared with the control group, though significantly more participants who were primed with one MenC-CRM dose in infancy had MenC rSBA ≥1:8 compared with those who received two MenC-CRM infant priming doses (table 2), a finding that was not reflected in significant differences between the MenC rSBA geometric mean titres persisting in the participants within the MenC-CRM primed groups (fig 2 and appendix table B). In contrast, 82.1% of children who were primed with one MenC-TT dose in infancy had MenC rSBA ≥1:8 and 69.5% had MenC rSBA ≥1:128; values that were significantly higher than those persisting after all other schedules (table 2). In addition, although the MenC rSBA geometric mean titres persisting after a single dose MenC-TT priming and Hib-MenC-TT boosting schedule had declined over 12 months, they were still significantly higher than after all the other schedules (fig 2 and appendix table B).
Safety
Primary vaccination phase
There were no significant differences in the frequency of local adverse events at 3 months of age between any of the MenC priming schedules (all 95% confidence intervals for the odds ratios for group comparisons included 1.0), with local pain, erythema, swelling, and induration being reported in each of the MenC groups, excluding the control group, in up to 22%, 45%, 16%, and 25% of infants, respectively (appendix table C). Similarly, we found no significant differences when we compared the frequency of systemic adverse events, with the most frequently reported symptoms being increased sleepiness in up to 50% and irritability in up to 66% of infants in each group (appendix table C). Fever ≥38°C was reported in ≤1% in each group. One participant in the two infant dose MenC-CRM group was admitted to hospital because of a vaccine related serious adverse event consisting of a haematoma at the site of the first MenC vaccine. This infant was subsequently diagnosed with factor VIII deficiency and made a complete recovery but was withdrawn from the study.
Booster phase
The priming regimen before Hib-MenC-TT vaccine administration at 12 months of age did not result in any observable significant differences in the frequency of local adverse events between any of the groups (all 95% confidence intervals for the odds ratios for group comparisons included 1.0), with up to 29% having pain, 76% erythema, 23% swelling, and 31% experiencing induration at the Hib-MenC-TT injection site in each group. Similarly, differences in the expected systemic adverse events between the groups were not significant, with drowsiness, irritability, and diminished appetite being the most commonly reported side effects in up to 39%, 61%, and 35%, respectively, in each group (appendix table C). Fever was the least frequent systemic adverse event, occurring in 10% of participants in each group. No vaccine related serious adverse events occurred in the booster phase.
Discussion
The number of MenC doses used in the current vaccination schedules in different countries within Europe, as well as internationally, varies from none to one or two infant priming doses followed by another dose in the second year of life, with or without a further boost in adolescence (table 3). [15][16][17][18][19] This randomised controlled study provides data directly supporting a reduction in the routine MenC immunisation schedule from two infant doses to a single infant dose prime plus booster: a change that has been implemented in the UK since June 2013. 8 MenC vaccine efficacy data have shown that MenC rSBA titres ≥1:8, and more conservatively ≥1:128, are correlated with protection against MenC disease on a population level. 14 Priming with two MenC-CRM doses at 3 and 4 months of age does not offer any advantage over priming with a single MenC-CRM or MenC-TT dose at 3 months of age because MenC antibodies wane below these thresholds for most infants in all three groups, and at least 97% of children had MenC rSBA ≥1:8 after a Hib-MenC-TT boost at 12 months of age, irrespective of the number of MenC doses used for infant priming. Our findings are similar to those reported in another study, which showed that 98% of infants had MenC rSBA ≥1:8 in response to a 12 month Hib-MenC-TT booster dose after one MenC infant priming dose. 12 That study, however, made no comparison with the response to a two dose MenC infant priming schedule or a control group.
Intriguingly, priming with a single MenC-CRM dose induced higher post-Hib-MenC-TT rSBA geometric mean titres than two priming doses, suggesting that the administration of a greater amount of MenC antigen during priming reduces the subsequent immune response to the 12 month MenC conjugate vaccine booster dose. The underlying mechanism, which is not reflected in the frequencies of MenC-specific memory B cells in peripheral blood detected at 5, 12, or 13 months, 20 could still be related to differences in numbers of memory B cells if the pool is considered to be resident in lymphoid tissues and therefore inaccessible with peripheral blood sampling. Furthermore, this phenomenon might be the result of dose dependent differences in carrier protein that manifest when different MenC glycoconjugate vaccine formulations are used for priming and boosting. A similar effect has also been observed in children challenged with a MenC pure polysaccharide formulation after infant priming with one dose of MenC-TT, which induced significantly higher post-boost MenC rSBA geometric mean titres than two dose MenC-TT infant priming. 11 The relatively reduced post-booster response seen with an increase in the number of priming doses of MenC conjugate vaccine is not the same as the hyporesponsiveness that occurs in children repeatedly vaccinated with a pure polysaccharide MenC vaccine compared with others who are being vaccinated with the same MenC polysaccharide formulation for the first time. 21 The latter is thought to result from the terminal differentiation of B cells into plasma cells without the formation of memory B cells, which is induced by repeated immunisation with a T cell independent antigen that, as a net result, depletes the MenC specific B cell pool. 22 Two months after infant vaccination, one MenC-TT dose was significantly more immunogenic than one dose of MenC-CRM (fig 2 and table 2); and after a Hib-MenC-TT boost at 12 months of age, the MenC rSBA geometric mean titres were significantly higher in those primed with MenC-TT than all other study groups (fig 2 and appendix table B). Such differences in immunogenicity are known to persist after a MenC boost in the second year of life, irrespective of whether MenC-TT or MenC-CRM are used for boosting. 23 Furthermore, at 2 years of age 82% of vaccinated children primed with MenC-TT, whose MenC rSBA geometric mean titres were significantly higher than the MenC rSBA geometric mean titres measured in those primed with a MenC-CRM schedule and in those who were not primed at all, still had MenC rSBA ≥1:8 in contrast with ≤30% of those primed with other MenC schedules. Despite evidence of immune memory after MenC disease and vaccination, 24 the antibody response after MenC exposure is not rapid enough to prevent disease in those with a MenC rSBA titre <1: 8,7 showing the importance of generating high rSBA geometric mean titres after booster, leading to a higher proportion of children maintaining rSBA titres above the 1:8 threshold through early childhood.
Such results are assumed to indicate differences in immunogenicity of the vaccines that relate to the T cell help induced by the different carrier proteins, though there are other manufacturing differences between MenC-TT and MenC-CRM that make it difficult to formally draw this conclusion. Differences in the persistence of post-boost MenC bactericidal antibody are consistent with observations from other studies of the persistence of MenC bactericidal antibody after priming with different MenC glycoconjugates in infancy. 25 26 Our findings show that it would be more rational to prime infants with MenC-TT, rather than MenC-CRM, and boost with Hib-MenC-TT.
The proportions of infants with a MenC rSBA titre ≥1:8 after immunisation with a single dose of Hib-MenC-TT at 12 months of age, without infant priming, reached up to 83%; a proportion that might be acceptable in countries where MenC disease is currently under control. The introduction of just a single dose of MenC vaccine at 12 months of age might not be appropriate in other countries where herd immunity has not been established through the initiation of the programme with a "catch up campaign" and subsequent adolescent boosting (used to maintain herd immunity). A routine 12 month only MenC immunisation programme, in the absence of such herd immunity, would leave unvaccinated infants as well as vaccinated children, whose immunity has waned over time, at risk. The low titres of bactericidal antibodies in infancy and from 2 years of age onwards observed with just a single MenC vaccination at 12 months of age suggest prevention of breakthrough cases in infants and preschool children depend on herd immunity induced by a catch up vaccination campaign that could then be sustained through adolescent boosting. A single MenC toddler dose was successful in controlling MenC disease in the Netherlands, 27 Australia, 28 and Canada, 29 where infants were protected through herd protection induced by an initial catch up campaign targeting older children and adolescents. An alternative, as in the US, is to provide the first MenC dose in adolescence. 30 A MenC priming dose at 12 months of age, however, might still be important for a robust anamnestic response after a MenC boost in adolescence. 31 Indeed if herd immunity in the UK is maintained through a robust adolescent MenC booster programme, the 3 month infant MenC vaccine might conceivably be dropped from the vaccination programme without any change in the current excellent population protection. Furthermore, the anticipated introduction of a routine MenB vaccination schedule in infancy, with a MenB vaccine that contains relatively well conserved meningococcal subcapsular proteins that might also be common among different meningococcal strains independent of the capsular polysaccharide type, 32 is predicted to protect against other serogroups, including some clones of MenC, in the first 12 months of life, potentially supporting the removal of the infant MenC doses.
Strengths and limitations
The inclusion of a control group in this study made it possible to compare the post-boost immunogenicity of the different MenC priming schedules with that induced by administration of the first MenC conjugate vaccine dose in MenC vaccine naive participants at 12 months of age. Follow-up of bactericidal antibody up to 24 months of age helped us to investigate if differences in immunogenicity between the different infant MenC priming schedules are long lasting: an effect that impacts planning of MenC immunisation programmes. We did not look at the differences in antibody that could have been induced by boosting the different MenC-CRM priming schedules with a MenC-CRM booster dose because only Hib-MenC-TT is currently being used as a booster dose in the second year of life in the UK. Future studies looking at the persistence of bactericidal antibodies in children older than 2 years would show if the differences seen between the MenC-TT and MenC-CRM priming schedules or the control group are sustained. The response to a MenC boost in adolescents previously primed with a single MenC dose at 12 months of age compared with age matched MenC naive controls also merits further investigation.
The lack of significant differences in the local adverse events at the MenC injection site at 3 months as well as at the Hib-MenC-TT site at 12 months of age, and in the corresponding systemic adverse events, between the different schedules could possibly have resulted from our study being underpowered to detect differences in reactogenicity between the groups. Such differences could be investigated in a larger study.
Conclusions
In countries where the incidence of invasive MenC disease in infancy has been controlled or practically eliminated after a routine MenC vaccination programme, two MenC infant priming doses could be reduced to a single priming dose without loss of immediate post-booster immunogenicity and without any effect on reactogenicity. Unlike MenC-TT, priming with MenC-CRM or administering the first MenC dose at the age of 12 months does not result in bactericidal antibody that is sustained at 24 months of age above the accepted protective threshold for most young children. Implementation of MenC vaccine prime and boost schedules with MenC tetanus toxoid conjugates seems more likely to induce sustained protection against MenC disease in early childhood. In the absence of any infant MenC vaccine priming doses, the protection provided by just one MenC vaccine dose administered at 12 months of age would strongly rely on the persistence of herd protection, induced by a previous catch up MenC immunisation campaign, which could then be maintained by a booster in adolescence.
Contributors: DP, AMK, AF, SNF, PTH, MDS, and AJP formulated the study design and performed the data collection, analysis, interpretation, and writing of the manuscript. JM, DC, SAM, CW, EM, and ALK collected data and revised the manuscript. RB carried out the laboratory assays and revised the manuscript. JB and MV analysed the data and revised the manuscript. All authors had full access to all of the data (including statistical reports and tables) in the study and can take responsibility for the integrity of the data and the accuracy of the data analysis. AJP is guarantor.
Funding: The study was funded by the NIHR Oxford Biomedical Research Centre, UK, the NIHR Medicines for Children Network South West and London (now NIHR Clinical Research Network: Paediatrics), the Southampton NIHR Wellcome Trust Clinical Research Facility and NIHR Respiratory Biomedical Research Unit, GlaxoSmithKline Biologicals, Belgium, and the European Society of Paediatric Infectious Diseases. This study was conducted independently from the funders. The funders were not involved in study design, data collection and analysis, interpretation of the data, or in the writing of the report or its submission for publication. The final manuscript was reviewed and approved by GlaxoSmithKline Biologicals.
Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: ALK, AMK, and DP have received travel grants from vaccine manufacturers to attend scientific meetings. AJP has previously conducted studies on behalf of Oxford University funded by vaccine manufacturers but does not receive any personal payments or travel support. AJP chairs the UK Department of Health's (DH) Joint Committee on Vaccination and Immunisation (JCVI); the views expressed in this manuscript do not necessarily reflect the views of JCVI or DH. MDS, AF, SNF, and PTH act as investigators for clinical trials conducted on behalf of their respective universities and NHS hospital trusts sponsored by vaccine manufacturers and have participated in advisory boards but receive no personal payments from these activities. MDS and SNF have had travel and accommodation expenses paid by vaccine manufacturers to attend international conferences related to paediatric infectious disease. RB performs contract research on behalf of Public Health England for Baxter Bioscience, GlaxoSmithKline, Pfizer, Sanofi Pasteur, Sanofi Pasteur MSD, and Novartis Vaccines. PTH has conducted studies on behalf of St George's University of London, funded by vaccine manufacturers but does not receive any personal payments or travel support. AMK and JM have received grants from the NIHR Oxford Biomedical Research Centre UK, GlaxoSmithKline Biologicals, and the European Society of Paediatric Infectious Diseases.
Ethical approval: The study was approved by the respective research ethics committees and medicinal regulatory agencies in each country (UK NRES REC No 10/H0604/7 and Malta HEC No 24/10). Informed written consent was obtained from the parent or legal guardian of all infants before recruitment.
Transparency: DP affirms that the manuscript is an honest, accurate, and transparent account of the study; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained.
Data sharing: No additional data available. This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Methicillin-Resistant Staphylococcus aureus Biofilms and Their Influence on Bacterial Adhesion and Cohesion
Twenty-five methicillin-resistant Staphylococcus aureus (MRSA) isolates were characterized by staphylococcal protein A gene typing and the ability to form biofilms. The presence of exopolysaccharides, proteins, and extracellular DNA and RNA in biofilms was assessed by a dispersal assay. In addition, cell adhesion to surfaces and cell cohesion were evaluated using the packed-bead method and mechanical disruption, respectively. The predominant genotype was spa type t127 (22 out of 25 isolates); the majority of isolates were categorized as moderate biofilm producers. Twelve isolates displayed PIA-independent biofilm formation, while the remaining 13 isolates were PIA-dependent. Both groups showed strong dispersal in response to RNase and DNase digestion followed by proteinase K treatment. PIA-dependent biofilms showed variable dispersal after sodium metaperiodate treatment, whereas PIA-independent biofilms showed enhanced biofilm formation. There was no correlation between the extent of biofilm formation or biofilm components and the adhesion or cohesion abilities of the bacteria, but the efficiency of adherence to glass beads increased after biofilm depletion. In conclusion, nucleic acids and proteins formed the main components of the MRSA clone t127 biofilm matrix, and there seems to be an association between adhesion and cohesion in the biofilms tested.
Introduction
Since it was first identified in 1961, methicillin-resistant Staphylococcus aureus (MRSA) has been implicated in nosocomial infections worldwide [1]. These infections can complicate treatments involving in-dwelling catheters and medical implants through biofilm formation [2].
Biofilms can be graded based on the activities of the bacteria within them. Distinct subpopulations of the bacteria are located within the biofilm based on their different metabolic states [3]. The cells on the surface of the biofilm are aerobic, whereas those located deeper, where the oxygen concentration is low, are fermentative and dormant [4,5]. Therefore, distinct matrix layers representing subpopulations of bacteria are found in biofilms, resulting in different selective pressures and the emergence of antibiotic-resistant strains [6][7][8]. In most cases, biofilm-associated infections are detected after the biofilms are already formed and can no longer be eliminated because of the tolerance of the biofilm to most antimicrobial treatments [4].
The synthesis of biofilms is influenced by a number of factors. Biofilm production, however, does not appear to be linked to the ica locus. O'Neill et al. [18] observed that although the ica locus is present and expressed in PIA-independent biofilms, the genes do not appear to be involved in their formation. Houston et al. [19] found that deletion of the major autolysin (atl) gene in MRSA strains impaired their ability to form FnBP-dependent biofilms. Some MRSA clinical isolates even produce biofilms of both phenotypes. MRSA strain 132 is able to switch from PIA-dependent to PIA-independent proteinaceous biofilm matrices depending on environmental conditions [15].
The biofilm dispersion is investigated in vitro using enzymatic detachment methods [20] and treatment with chemicals such as periodate (HIO 4 or NaIO 4 ). These conventional methods are used to identify the biofilm matrix components of both Gram-negative and Gram-positive bacteria [21,22]. Moreover, bacteria within biofilms are significantly affected by matrix components that influence adhesion of the cells to solid substrata and cohesion between bacterial cells [23]. Specific matrix components can increase the ability of bacteria to aggregate [24]. The structure of extracellular polymeric substances (EPS) is complex and variable, and its precise role in cell adhesion and cohesion is not completely understood [17].
The aims of this study were to examine the ability of a collection of MRSA isolates with spa type t127 to form biofilms, to determine the extracellular matrix components in the biofilms formed by these strains, and to elucidate the influence of biofilms on the ability of these bacteria to adhere and aggregate.
Identification and Genotyping of MRSA Strains.
A total of 25 MRSA clinical isolates were obtained from the Medical Microbiology Laboratory at the Universiti Putra Malaysia. These isolates were obtained from different systemic infection sites, and their identity was confirmed by Gram staining, growth on mannitol-salt agar (Oxoid, UK), and CHROMagar MRSA (Paris, France). Kirby-Bauer testing was performed for oxacillin (1 µg) (Oxoid, UK) and cefoxitin (30 µg) (Oxoid, UK) on Muller-Hinton agar (Oxoid, UK) [25]. The MRSA strain ATCC33591 and a clinical methicillin-sensitive Staphylococcus aureus (MSSA) strain were used as standards in every test, which were performed in triplicate. The isolates were confirmed to be S. aureus by detection of the Sa442 fragment and MRSA by detection of the mecA gene. A single polymerase chain reaction (PCR) was used to detect the Sa442 fragment with the Sa442 forward primer 5′-AATCTTTGTCGGTACACGATATTCTTCACG-3′ and Sa442 reverse primer 5′-CGTAATGAGATTTCAGTAAATACAACA-3′. PCR conditions were the following: an initial temperature of 96 °C (3 min), followed by denaturation at 95 °C (1 min), annealing at 55 °C (30 s), and elongation at 72 °C (3 min), and a final elongation step at 72 °C (4 min). Amplicons of the expected size (108 bp) were obtained [26]. The mecA gene was detected using the mecA forward primer 5′-ACCAGATTACAACTTCACCAGG-3′ and mecA reverse primer 5′-CCACTTCATATCTTGTAACG-3′, with an initial temperature of 95 °C (1 min), denaturation at 95 °C (15 s), annealing at 45 °C (15 s), and elongation at 72 °C (30 s), with a final extension at 72 °C (4 min). Amplicons of the expected size (162 bp) were obtained [27]. All isolates were subjected to spa typing, according to Christensen et al. [28]. The polymorphic X region of the protein A gene (spa) was amplified with primers designed from an S. aureus sequence in GenBank (accession number J01786): 1079F [1079-1099]: 5′-TCATCCAAAGCCTTAAAGACC-3′ and 1516R [1536-1516]: 5′-GTCAGCAGTAGTGCCGTTTG-3′. The PCR reaction was performed using a KOD FX Neo Kit from Toyobo Co., Ltd. (Osaka, Japan) as recommended by the manufacturer. PCR conditions were 94 °C for 2 min; 35 cycles each of 94 °C for 30 s, 50 °C for 30 s, and 72 °C for 60 s; and a final extension at 72 °C for 5 min. The expected product size was between 300 bp and 600 bp, with the size varying by the number of spa repeats. All PCR products were sequenced using 1st BASE (BioSyntech, Inc.) after purification with the GeneJET PCR Purification Kit (Thermo Fisher Scientific). Sequence assembly was performed in Clone Manager Basic 9 (SciEd), followed by analysis of the spa tandem repeats using spa typing online software (http://spatyper.fortinbras.us/) and the Ridom Spa Server database (http://www.spaserver.ridom.de/) [29].
Biofilm Semiquantification with Crystal Violet (CV) Staining.
Biofilm formation by MRSA strains was quantified using the microwell plate method described by Christensen et al. and Manago et al. [28,29]. All MRSA isolates were grown in tryptone soya broth with 1% glucose (TSBG), and then 250 µL of each bacterial strain culture was diluted to an OD600 of 0.05 and incubated in 96-well flat-bottomed polystyrene microwell plates (MWP) at 37 °C for 48 h without shaking. The well contents were removed by flipping the plates, and the wells were washed with phosphate buffered saline (PBS, pH 7.2), heat-fixed by exposing the plate to hot air at 60 °C in a hybridization oven (model HS-101, Amerex, USA) for 1 h, stained with 250 µL of 0.1% (w/v) CV solution for 15 min at room temperature to allow the dye to penetrate the biofilm, and then washed with tap water. The plates were emptied and left to dry overnight. Biofilms were quantified by eluting the CV stain with 250 µL of 33% glacial acetic acid and measuring the absorbance of the solution at 570 nm (OD570) using a BioTek Synergy 2 plate reader. The biofilm assay was performed for each strain in triplicate using a microwell plate, and the background was determined by using noninoculated media as a control. The amount of biofilm produced was quantified by comparing the experimental values between the inoculated and noninoculated media. The cut-off value of noninoculated media at an optical density at 570 nm (OD570) was recorded as 1.31. This value was considered the cut-off point to define biofilm quantities. The biofilm formation abilities of isolates were categorized based on this value. The isolates were considered strong biofilm producers and denoted as "+++" when the absorbance was more than 5.24 (OD570 > 5.24), moderate biofilm producers as "++" when the absorbance was between 2.62 and 5.24 (OD570 = 2.62-5.24), weak biofilm producers as "+" (1.31 < OD570 < 2.62), and biofilm nonproducers as "−" (OD570 < 1.31). These criteria were established by Stepanović et al. [30].
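These cut-offs translate directly into a small classification rule. The following sketch is our own illustrative helper, not part of the study's protocol; it assigns the Stepanović-style category from a single OD570 reading, using the non-inoculated-media cut-off of 1.31 quoted above.

```python
# Illustrative helper (assumed, not from the study): classify biofilm
# production from a crystal-violet OD570 reading using the cut-offs above.
ODC = 1.31  # cut-off value of non-inoculated media at OD570

def classify_biofilm(od570, odc=ODC):
    """Return the biofilm-production category for one OD570 measurement."""
    if od570 <= odc:
        return "-"    # non-producer (OD570 < 1.31)
    elif od570 <= 2 * odc:
        return "+"    # weak producer (1.31 < OD570 < 2.62)
    elif od570 <= 4 * odc:
        return "++"   # moderate producer (2.62-5.24)
    return "+++"      # strong producer (OD570 > 5.24)

if __name__ == "__main__":
    for reading in (0.9, 1.8, 3.4, 6.0):
        print(reading, classify_biofilm(reading))
```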
Phenotypic Evaluation of Colony Morphotypes.
Colony morphologies were assessed using a spot test on tryptone soya agar (Oxoid, UK) supplemented with 1% glucose (TSAG), whereas Congo red agar [brain heart infusion agar (Oxoid, UK) supplemented with 5% sucrose and 40 µg/mL Congo red dye (BDH Chemicals Ltd., UK)] was used to differentiate between PNAG-producing (black colony) and non-PNAG-producing (red colony) phenotypes as described previously [18]. In brief, strains were cultured on TSAG (1% glucose) plates at 37 °C for 16 h. Cells were resuspended in tryptone soya broth (TSB) medium, and the concentration was adjusted to an OD600 of 2. Five microliters of the suspension was spotted on TSAG and Congo red agar plates. The phenotype was observed after 48 h.
Biochemical Composition of Biofilms.
Biofilms were prepared in 96-well MWP as described above and then treated with 250 µL of 40 mM NaIO4 in 50 mM sodium acetate buffer (pH 5.5) for exopolysaccharide degradation; proteinase K (0.1 and 1 mg/mL) in 20 mM Tris-HCl (pH 7.5) with 100 mM NaCl and trypsin (0.1 and 1 mg/mL) for protein degradation; 140 U/mL DNase I in 5 mM MgCl2 for DNA degradation; and RNase at 100 µg/mL for RNA degradation. All plates were incubated for 16 h at 37 °C, except for plates with NaIO4 and its control, which were incubated at 37 °C in the dark for 16 h [22,31,32]. In addition, deoxyribonuclease at a final concentration of 140 U/mL in magnesium peptone water buffer (0.1% peptone and 5 mM MgCl2), incubated at 37 °C for 16 h, and proteinase K at a final concentration of 100 µg/mL in Tris-peptone buffer (0.1% peptone, 20 mM Tris-HCl [pH 7.5], and 100 mM NaCl), incubated at 37 °C for 16 h, were added successively to the established biofilm in MWP. Control wells were filled with the appropriate buffers without enzymes. The biofilms were rinsed twice with PBS (pH 7.2), dried for 1 h at 60 °C, and stained with 0.1% CV as described above. Biofilm dispersion was calculated as the absorbance of the CV-stained biofilm at 570 nm. For each sample, three replicates were used, and each experiment was repeated at least three times independently.
Role of Biofilms in MRSA Adhesiveness and Cohesiveness.
Two preparations of bacterial cells, "unwashed cells" and "washed cells," were prepared for each MRSA isolate. After an overnight incubation in TSB supplemented with 1% glucose, each bacterial culture was diluted to OD 660 = 0.8 in TSB without glucose. Then, 80 mL from each sample was centrifuged at 8000 ×g for 10 min. The pellet formed was dissolved in 80 mL PBS (pH 7.2). These cells were considered "unwashed cells"; a substantial amount of biofilm matrix was left on their cell walls. Mechanical disruption was used to prepare "washed cells" by repeatedly dissolving cell pellets in 80 mL PBS (pH 7.2), followed by sonication (Sonic Ruptor 400, OMNI International, GA, USA) for 5 min (1 min sonication, power output 5, pulses 5, with 30 s rest) and centrifugation. The supernatant was discarded, and the cell pellet was resuspended in PBS by vortexing. This process was repeated five times. Washed and unwashed cells of each of the 25 bacterial isolates were used to determine cell adhesiveness by the packed-bead method as shown in Figure 1 according to [24].
MRSA biofilm cohesiveness (aggregation) was assessed using the washed cells. Total culture turbidity was measured at 660 nm, with the initial turbidity designated OD_initial and the culture after the fifth round of sonication designated OD_final. The percentage of cells that were aggregated was estimated as follows: % aggregation = [(OD_initial − OD_final) × 100]/OD_initial, as described previously [33,34]. These experiments were performed three times independently in a sterilized laminar flow cabinet.
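As a quick illustration of the aggregation formula above, a minimal sketch is given below; the function name and example values are ours, not the authors' data.

```python
# Minimal sketch of the aggregation calculation; inputs are OD660 readings
# taken before sonication (OD_initial) and after the fifth round (OD_final).
def percent_aggregation(od_initial, od_final):
    return (od_initial - od_final) * 100.0 / od_initial

print(percent_aggregation(0.80, 0.32))  # -> 60.0, i.e. 60% of cells aggregated
```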
Statistical Analysis
All statistical analyses were performed using SPSS Statistics 21 for Windows (IBM). Data were expressed as mean values ± standard error of the mean (SEM). Comparison of OD570 and OD660 between groups was carried out using Student's t-test. All results were considered statistically significant at the p < 0.05 level.
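For readers reproducing the comparison outside SPSS, an equivalent two-sample t-test can be run in Python; the sketch below uses placeholder OD570 values, not the study's measurements.

```python
# Illustrative t-test between two treatment groups (placeholder data).
import numpy as np
from scipy import stats

control = np.array([3.1, 2.8, 3.0])   # e.g. OD570 of untreated biofilms
treated = np.array([1.2, 1.5, 1.1])   # e.g. OD570 after enzyme treatment
t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```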
Confirmation of S. aureus Identity.
All isolates studied produced golden-yellow, round, smooth, raised, and mucoid colonies surrounded by a large yellow zone on mannitol-salt agar and changed in colour from rose to mauve on CHROMagar MRSA. These isolates were confirmed to be S. aureus by the presence of the specific glutamate synthetase (Sa442) fragment and to be methicillin-resistant by the presence of the mecA gene. All isolates were completely resistant (100%) to oxacillin and cefoxitin. Isolates were classified into four clones, with the majority (22/25) belonging to clone t127, and the others belonging to t2246 (1/25), t790 (1/25), and t223 (1/25). Phylogenetic tree analysis for these clones is shown in Figure 2. Furthermore, the Ridom Spa Server Spa-MLST mapping shows that clone t127 is related to sequence type ST-1.
Morphology of MRSA.
MRSA biofilms on TSA with 1% glucose developed complex architectural features as shown in Figure 3(a), including a layer of highly autoaggregated cells at the centre of each colony, mounted on transparent layers of adherent cells with irregular margins along the edges. Some colonies had circular or vertical lines radiating from the centre, giving the colonies a bloom-shaped appearance. Some of these colonies were black because of the presence of exopolysaccharides or red because of the presence of proteins on Congo red agar (Figure 3(b)).
Biofilm Components.
The mature MRSA biofilms were examined for interactions with NaIO 4 , proteinase K, trypsin, DNase I, and RNase A. Figure 4 shows 48 h MRSA biofilms formed in microwell plates that were subsequently exposed to NaIO 4 for 16 h. Some isolates showed significant detachment of biofilms and displayed reductions in biofilms of 76% (t790/19), 67% (t127/17), and 42-52% in the rest of the isolates.
Because proteinase K (100 µg/mL) did not completely disperse the established biofilms, the experiments were repeated with a higher concentration of proteinase K (1 mg/mL). Interestingly, as shown in Figure 6, proteinase K at this concentration enhanced biofilm formation in the majority of the isolates tested, except for isolates t127/22 and t127/25, which showed reductions in biofilm biomass of 56% and 48%, respectively. Isolates t127/15, t127/18, and t127/23 seemed not to be affected by proteinase K at this concentration, in spite of showing sensitivity to proteinase K at the lower concentration of 100 µg/mL.
We found that DNase I was a more effective biofilm inhibitor than proteinase K, but that neither dispersed biofilms completely. The maximum percentage biofilm dispersal by DNase was 84%, whereas, with proteinase K, this was 75%. To investigate whether DNase and proteinase K could complement each other to eliminate biofilms, 48 h established biofilms were treated consecutively with DNase and then proteinase K. As shown in Figure 11, the majority of isolates showed a significantly greater (p = 0.001) reduction in biofilms compared to that with DNase or proteinase K alone. However, isolates t127/14, t790/19, t223/20, and t127/24 showed more effective biofilm dispersal when treated with DNase alone, compared with either treatment with proteinase K alone or treatment with DNase followed by proteinase K.
Discussion
MRSA biofilms play a significant role in numerous chronic infections [35,36]. To improve MRSA diagnostics, it is necessary to understand the biofilms that lead to chronic infections [37]. Although there have been many studies on the components of MRSA biofilms, very few of these studies have addressed the impact of biofilms on the adhesiveness and cohesiveness of bacterial cells [13,14,[38][39][40].
The spa type t127 is frequently found in community-acquired MRSA in the UK [41], as well as in the US [42]. Similarly, in this study, we found that the majority of MRSA isolates tested had spa type t127, with a small number having spa types t2246, t790, and t223. Based on a semiquantitative microwell plate assay, the majority of these isolates showed a moderate ability to produce biofilms. The production of slime on TSAG (Figure 3(a)), however, did not seem to be related to the adhesion strength of these biofilms on microwell plates.
Assessing biofilm dispersal is considered the main method to determine the components involved in biofilm formation. In our study, antibiofilm agents such as NaIO4 and extracellular enzymes were used to try to disperse mature biofilms of isolates t127, t2246, t790, and t223. These antibiofilm agents have been shown to eliminate biofilms from nonliving and living surfaces [43,44]. However, it is important to consider the structures of the biofilms that are being targeted [45], as many of these agents differ in their effects on the various forms of biofilms produced by different bacterial species [14,46,47]. PIA/PNAG polymeric chains appear to be major constituents of many biofilms in both Gram-positive and Gram-negative pathogens [48]. NaIO4 can modify these polymeric chains by splitting the C3-C4 bonds on exopolysaccharide residues and oxidizing the carbons to yield vicinal hydroxyl groups [45]. Our study showed that NaIO4 had varying effects, from high to low levels of biofilm reduction, for MRSA isolates related to clone t127. This could be a result of the effects of NaIO4 on exopolysaccharides that are chemically identical in structure, but that have some differences in both the amount of acetates O-linked with succinate and acetylation levels of amino groups [32,49]. In biofilms, the polysaccharides do not exist alone but appear either in association or segregated, interacting with a broad range of other molecular species, including DNA, proteins, and lipids [50]. As a consequence, depolymerisation of exopolysaccharides in response to NaIO4 varies depending on biofilm components. In our study, the colony morphologies of MRSA isolates, observed on Congo red agar, revealed different patterns of interaction between the exopolysaccharides (black colour) and proteins (red colour); some isolates produced smooth, black and red colonies and others produced mucoid red-black colonies with a red pellet that appeared to have melted inside (Figure 3(b)). Sager et al. [51] showed that NaIO4 had a stimulating influence on established biofilms of Pasteurella pneumotropica.
The exopolysaccharides present in bacterial capsules seemed to have a negative effect on biofilm production. For example, mutations in the capsule genes of S. haemolyticus, Vibrio vulnificus, and Porphyromonas gingivalis resulted in an increase in biofilm formation compared to the wild-type strains because of decreased capsular exopolysaccharide production [52][53][54]. NaIO 4 seemed to enhance the production of biofilms, as indicated in Figure 4, by increasing the ability of some MRSA isolates related to clone t127 to produce biofilms. This could be the result of exopolysaccharides present in the capsules of bacteria being eliminated.
Protease treatment is known to disperse mature MRSA biofilms. Kumar Shukla and Rao [55] showed that proteinase K treatment impaired biofilm formation because of the absence of biofilm-associated protein (encoded by Bap) on the surface of S. aureus strain V329, but that it did not have any effect on strain M556, which lacked Bap. In this study, proteinase K and trypsin were used to determine whether proteins were components of mature biofilms. Proteinase K (100 µg/mL) caused preformed biofilms to detach, but with dispersal percentages that were comparatively low for all 25 MRSA isolates tested. However, the majority of our isolates appeared to be sensitive to proteinase K (100 µg/mL), consistent with the findings of previous studies that showed the high sensitivity of S. aureus biofilms to proteinase K [13,14,40,45,47]. Our results showed that in 48 h established biofilms, treatment with a high concentration of proteinase K (1 mg/mL) promoted biofilm formation by all of the isolates except t127/22 and t127/25.
Additionally, trypsin (100 µg/mL) showed a variety of effects. In half of the isolates studied, including isolates related to clones t127 and t2246, trypsin treatment increased biofilm formation, whereas in the other half, including isolates related to clones t127, t790, and t223, it decreased biofilm biomass to varying degrees. Interestingly, trypsin (1 mg/mL) was able to partially remove biofilms of some isolates. However, the reason behind these inconsistent observations of the interactions between the two common proteases, trypsin and proteinase K, is not clear. The biofilms of some of the isolates were efficiently removed by both proteases. According to Boles and Horswill [44], proteinase K inhibited biofilm formation and promoted the dispersal of established biofilms. Our results agreed with findings by Gilan and Sivan [56], who showed that proteinase K (1 mg/mL) treatment doubled the size of a Rhodococcus ruber C208 biofilm. Moreover, the biofilm seemed to be multilayered, mucoid, and more robust than that before treatment. However, the established biofilm was decreased by trypsin, with a monolayered, sparser structure resulting. We propose that a high concentration of proteinase K enhances autolysis of bacterial cells, thereby releasing extracellular DNA [57,58].
eDNA is an important part of biofilm structure [59]. This was first discovered in Pseudomonas aeruginosa and then in other bacterial species [17,[60][61][62][63]. eDNA is released mainly through cell lysis [64][65][66][67][68] or is secreted from cells [63,69,70]. Biofilm formation has been reported to be blocked, or its morphology altered, by DNase I treatment of Gram-negative cells such as Pseudomonas aeruginosa and Escherichia coli, as well as Gram-positive cells such as S. aureus, S. pneumoniae, and L. monocytogenes [59,71,72]. Our data show that DNase I significantly affected the dispersal of biofilms in the majority of isolates tested. Consistent with this, Rice et al. [17] found that the structural stability of S. aureus biofilms depended on eDNA. Moreover, DNase I-induced degradation of eDNA resulted in a reduction in the biofilm.
Mulcahy et al. [73] suggested that eDNA not only increased biofilm stability but also its resistance to antibiotics. Our study showed that eRNA is also an important part of biofilms, as similar effects on established biofilms were observed in response to DNase I and RNase treatment (Figure 10). Nishimura et al. [74] showed the presence of eRNA in biofilms of the marine bacterium Rhodovulum sulfidophilum. Similarly, Gilan and Sivan [75] showed that applying RNase to cultures of Rhodococcus ruber strain C208 reduced biofilm formation. They also showed that the formation of biofilms was not increased by the addition of short fragments of DNA (ca. 300 and 500 bp) to the C208 culture. Izano et al. [62] suggested that the size of the eDNA in S. aureus is important to the formation of biofilms, as different forms of nucleic acids play different roles in this process. eDNA seems to be an important structural component of biofilms, whereas eRNA may be involved in regulating biofilm formation because of the significant size difference between these molecules.
To confirm the role of protein in biofilm formation, 48 h biofilms were first treated with DNase I and then by proteinase K in microwell plates. The results, shown in Figure 11, confirmed the significant roles played by both DNA and proteins in biofilm matrix formation. Our findings are consistent with an earlier report showing that MRSA biofilms decreased significantly in the presence of the two enzymes as compared to treatment with the individual enzymes alone [31]. This is further supported by the observation that autolysin (encoded by Atl) and fibronectin-binding proteins (encoded by FnBP) expression is a basic feature of the MRSA biofilm phenotype [13,19].
Many studies have shown that biofilms are sessile communities of bacteria that precipitate and adhere to all surfaces [76,77]. The architecture of a biofilm is dependent on cell-to-surface and cell-to-cell interactions [24,[78][79][80]. Figure 12 shows that biofilms of some MRSA isolates only weakly adhered to glass beads, whereas these same isolates strongly adhered to glass beads after extensive washing.
We speculate that slime layers on biofilms reduced the ability of the biofilms to adhere to glass beads. As shown in Figure 13, the washing process reduced the amount of slime present on the biofilms and increased the percentage of cells that aggregated. It is probable that after the washing process, some clusters of bacteria were still covered or surrounded by remnants of the polymer matrix, thereby increasing the adhesiveness of cells to glass beads. These findings are consistent with those of Gómez-Suárez et al. [81], who reported that the ability to adhere to solid surfaces was greater for nonbiofilmed Pseudomonas aeruginosa SG81R1 than for biofilmed P. aeruginosa SG81.
Our data showed a specific relationship between adhesiveness and cohesiveness of the MRSA biofilm isolates tested. When the percentage of cell-to-cell aggregates ( Table 2) was higher than that of cell-to-surface aggregates in biofilms, the cells seemed to have an increased ability to attach to glass beads after washing. However, when the percentage of cell-to-cell aggregates was lower than that of cell-to-surface aggregates, the ability of the cells to attach to glass beads was reduced after washing ( Figure 12). MRSA isolates in this study did not depend on static electricity and polymeric interactions to adhere to glass surfaces as proposed by Tsuneda et al. [24], as there was no correlation between the amount of EPS in the biofilms and cell adhesiveness. This could be because the majority of our isolates produced a moderate amount of biofilm. Moreover, there was no correlation between cell adhesiveness and PIA independence or dependence of the biofilms.
Conclusion
Based on the comparative analysis of biofilm extracellular matrices, it can be concluded that the tested biofilms consisted of nucleic acid-protein complexes, with or without exopolysaccharides. Different biofilm phenotypes were observed for the same MRSA clone. In addition, there seemed to be an association between cellular adhesiveness and cohesiveness of MRSA biofilms.
|
2018-04-03T03:26:57.297Z
|
2016-12-18T00:00:00.000
|
{
"year": 2016,
"sha1": "1280c474812ef8ce06ea4969bb404a92093e1499",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2016/4708425.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f19f239aa7a1c54b7512a54b728fb8a53af669e6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
240426209
|
pes2o/s2orc
|
v3-fos-license
|
Global Surface HCHO Distribution derived from Satellite Observations with Neural Networks Technique
Formaldehyde (HCHO) is one of the most important carcinogenic air contaminants. However, the lack of global surface HCHO concentration monitoring is currently hindering research on outdoor HCHO pollution. Traditional methods are either restricted to small areas or data-demanding for a global scale of research. To alleviate this issue, we adopted neural networks to estimate surface HCHO concentration with confidence intervals in 2019, where HCHO vertical column density data from TROPOMI, in-situ data from the HAPs (harmful air pollutants) monitoring network, and the ATom mission are utilized. Our result shows that the global surface HCHO average concentration is 2.30 μg/m3. Furthermore, in terms of regions, the concentrations in the Amazon Basin, Northern China, South-east Asia, the Bay of Bengal, and Central and Western Africa are among the highest. The results from our study provide a first dataset of the global surface HCHO concentration. In addition, the derived confidence interval of surface HCHO concentration adds an extra layer of confidence to our results. As pioneering work in adopting confidence interval estimation in AI-driven atmospheric pollutant research and the first global HCHO surface distribution dataset, our paper will pave the way for rigorous study of global ambient HCHO health risk and economic loss, thus providing a basis for pollutant controlling policies worldwide.
Introduction
Formaldehyde (HCHO) is a carcinogenic trace gas and toxic pollutant in the atmosphere [1]. It is considered by the U.S. Environmental Protection Agency (EPA) to be one of the most important carcinogens in outdoor air among 187 harmful air pollutants (HAPs) [2], and accounts for more than 50% of the total risk of HAP-related cancer in the United States [3]. Thirteen out of every million people develop nasopharyngeal carcinoma after being exposed to an average concentration of 1 microgram per cubic meter of HCHO for a lifetime [4]. As the most abundant aldehyde compound in the atmosphere, HCHO is one of the major volatile organic compounds (VOCs) and pollutants in the troposphere [5], which has a close relationship with the formation and extinction of O3 and NO2 in the atmosphere. HCHO pollution is a global scale issue. Ambient HCHO can be produced naturally and artificially, for example through photolysis of isoprene from vegetation [6,7], farmland emissions [8], energy production, and automobile exhaust emissions [9,10].
Surface concentration represents the amount of HCHO that people are exposed to, and is the direct data source for health risk estimation. Nevertheless, despite the crucial role of HCHO in human health and the atmosphere, it is difficult to monitor HCHO systematically and comprehensively by using traditional ground-based methods because of the large error and expensive cost [11]. As a result, there is still no regular or large-scale monitoring of HCHO over most regions of the world. Most countries and regions with serious pollution fail to measure the surface HCHO concentration. Only in the United States does the HAP sampling network collect HCHO information, but it is limited to cities and industrial sites [12].
In contrast, remote sensing technology can not only monitor long-term and large-scale dynamics, but also avoid many interference factors. Currently, there are many satellite missions reporting HCHO vertical column density (VCD) [13], which provides fundamental datasets for much related research. The main sensors used to measure the concentration of HCHO VCD in the atmosphere include GOME-1 [14], GOME-2 [15], SCIAMACHY [16], OMI [17], and TROPOMI [18]. In terms of precision, TROPOMI is the most advanced atmospheric monitoring spectrometer with the highest resolution, with a swath of 2600 km and daily global coverage [19]. However, most satellite-based retrievals can only provide the total column concentration due to their limitation on vertical resolution. Therefore, most studies on ambient HCHO only focus on the total amount in the vertical column in certain regions, such as North America [20], South America [21], Europe [22], Asia [23,24], and Africa [7], instead of focusing on its surface concentration.
With the increasing attention towards health risks and photochemical pollution, demand for the HCHO surface concentration distribution from a global perspective is growing more urgent. Many efforts have been made to derive surface concentration from total column concentration, such as using fixed forms of linear models to assess the relationship between VCD and in-situ concentration of NO2, SO2, CO, and PM [25], and using R 2 to assess the relationship between vertical column density and ground in-situ concentration [26]. However, these methods seem to be less accurate and may only be applicable to specific pollutants. In the other few existing studies, HCHO surface concentration was derived by applying the vertical distribution profile from the GEOS-Chem model to the satellite-derived total column concentration [27]. However, the atmospheric transport model itself requires numerous input parameters, which may impede its application on a global scale with a reasonable spatial and temporal resolution. Therefore, our main focus here is to derive the global surface HCHO concentration distribution based on satellite-derived total column HCHO concentration and quite limited in-situ HCHO concentration data.
Neural networks, a powerful machine learning approach, have gained their reputation for revealing hidden patterns inside data with great accuracy in various fields, such as image classification [28], object detection [29], image denoising [30], image synthesis [31], person re-identification [32], etc. However, some algorithms, such as the vanilla neural network, do not assign a confidence level or confidence interval to their point estimation results, which is necessary for scientific estimation and public policy decision-making. To quantify the uncertainty of results derived from neural networks, a diverse set of approaches has been adopted, including Bayesian neural networks [33], the delta method [34], bootstrap [35], mean variance estimation [35], and interpreting dropout as performing variational inference [36]. However, these methods are either computationally demanding or strongly based on assumptions. The quality-driven (QD) method, a method based on LUBE to derive confidence intervals for a neural network by combining the uncertainty-estimating loss and the neural network loss function as a whole [37], is not only compatible with gradient descent algorithms, but also shrinks the average confidence interval length by up to 10% compared with previous works [38]. Therefore, to enhance the credibility of our model, this method is leveraged to obtain the interval estimation of the surface concentration of HCHO. By combining point and interval estimation, it is believed to strike a balance between maintaining accuracy and controlling uncertainty in the form of a pre-set confidence level.
The potential health impact of HCHO and the lack of global surface monitoring data demand an efficient way to get a better understanding of the global HCHO surface distribution with limited data. In this paper, as a novel study, we derived the global surface concentration of HCHO in 2019 by feeding TROPOMI VCD data and limited surface HCHO concentration data into a neural network model. In addition, besides capturing the seasonal changes of key areas, confidence intervals for the derived surface HCHO are also estimated by using the QD method. As a novel work on adopting interval estimation in AI-driven atmospheric pollutant research and deriving the first dataset of global HCHO surface distribution, our paper will pave the way for rigorous study of global ambient HCHO health risk and economic loss, thus providing a basis for pollutant controlling policies worldwide.
Figure 1. Data processing workflow
To estimate the global distribution of HCHO surface concentration, we used two discrete in-situ data sources and Sentinel-5P TROPOMI VCD data on the corresponding location (as shown by red points in Figure 1) to train our neural network model. Then we apply our model to a global scale and estimate the surface HCHO distribution with confidence intervals.
Sentinel-5P VCD Data
The data of vertical column density (VCD) of HCHO in this study comes from TROPOMI (Tropospheric Monitoring Instrument), which is carried on Sentinel-5P [19]. Sentinel-5P is a global air pollution monitoring satellite launched by ESA on October 13, 2017, as part of the Copernicus project. TROPOMI can effectively observe trace gas components in the atmosphere around the world, including NO2, O3, SO2, HCHO, CH4, CO and other important indicators closely related to human activities, and can strengthen the observation of aerosols and clouds [39].
In terms of accuracy, TROPOMI is currently the most advanced atmospheric monitoring spectrometer with the highest spatial resolution. The satellite provides global coverage daily with a spatial resolution of 7 km × 7 km and an equator crossing time at about 13:30 local time, which effectively ensures the comparability of data in different regions [19]. Sentinel-5P data are currently available for public access.
We use the data of 2019 because a) 2018 is the first year that Sentinel-5P was in operation, and the algorithm of the product was not yet stable; b) 2020 falls within the global COVID-19 pandemic, which might have had a particular impact on anthropogenic sources, making the result less representative of long-term conditions. Offline HCHO data from January 1 to December 31, 2019 were collected. According to the technical documents, data points whose quality index (QA value) is less than 0.5 were removed to ensure the best quality. After mosaicking the datasets and applying Ordinary Kriging interpolation, we obtained the distribution of the global average total column concentration of HCHO at a resolution of 0.05° by 0.05°. The data beyond 60°S and 60°N are discarded due to the sparsity of satellite data and the scarceness of human activities, which has little impact on health risk estimation.
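The screening and gridding step described above can be sketched as follows; the simple bin-averaging shown here stands in for the authors' mosaic plus Ordinary Kriging interpolation, and all array names are placeholders.

```python
# Hedged sketch: QA filtering (QA >= 0.5) and 0.05-degree annual averaging of
# TROPOMI HCHO VCD values; bin-averaging replaces the paper's kriging step.
import numpy as np

def grid_annual_mean(lat, lon, hcho_vcd, qa, res=0.05):
    keep = qa >= 0.5                                 # drop low-quality pixels
    lat, lon, vcd = lat[keep], lon[keep], hcho_vcd[keep]
    lat_edges = np.arange(-60, 60 + res, res)        # 60S-60N, as in the text
    lon_edges = np.arange(-180, 180 + res, res)
    total, _, _ = np.histogram2d(lat, lon, [lat_edges, lon_edges], weights=vcd)
    count, _, _ = np.histogram2d(lat, lon, [lat_edges, lon_edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        return total / count                         # NaN where no observations
```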
In-situ Data
Since our study aims to estimate the surface concentration of HCHO on a global level, we need surface-level concentration data which will cover diverse types of underlying surfaces and also different altitudes to train our model. Therefore, the following two data sources are considered.
ATom flight data. NASA's atmospheric tomography mission (ATom) is a systematic, global sampling of the atmosphere in the United States from 2016 to 2018, and continuous profile analysis from 0.2km to 12km. The volume mixing ratio of HCHO in air was measured in ATom flight data. A large number of gas and aerosol payloads were deployed on NASA's DC-8 aircraft, and the HCHO on NASA's high-altitude aircraft was measured by ISAF instrument [40,41]. The instrument uses laser-induced fluorescence (LIF) to obtain the high sensitivity needed to detect HCHO in the upper troposphere and lower stratosphere, which has an abundance of 10 parts per trillion. LIF can also achieve quick response to measure the abundance of HCHO in the fine structure outflow of convective storms. These HCHO measurements will be used to elucidate the mechanism of convective transport and to quantify the effects of boundary layer pollutants on ozone photochemistry and cloud microphysics in the upper atmosphere [42].
HAPs ground monitoring data. We obtained ground HCHO observations from EPA SLTS network at https://www.epa.gov/outdoor-air-quality-data, which reports average 24-hour HCHO concentration all around the year. Here, we selected 5965 data points from 109 sites in 2019, covering the whole country, as shown in Figure 2 (a).
These two datasets cover a wide range of latitudes, from -8.1977° S to 82.9404° N, and a diverse variety of landscapes in the U.S. The selection of the HAPs dataset is to ensure that the concentration distribution feature at ground level is represented in our model, and the ATom data is to ensure that our model can be generalized and applied to a global extent.
Figure 2. (a) The geographical distribution of our data, where red represents ATom flight data points and green represents the HAPs ground monitoring network. (b) The meaning of "Height" and "Altitude" for ATom mission data.
Since ATom data are obtained far above the surface, and the vertical distribution of HCHO usually changes considerably from the ground to 1-2 km above [43], we take the "Height" of the aircraft measurements as another input variable in our model to control for the impact of the vertical distribution along the column. For the HAPs ground monitoring data, we assign 0 as their height.
Global DEM Data
Since descriptive statistics show a negative relationship between surface altitude and in-situ concentration, with a Pearson's correlation of r = -0.3907 in our in-situ dataset, we use global Digital Elevation Model (DEM) data as one of the input variables, "Altitude", in order to estimate the ground-level concentration. The relationship between the variable "Height" and the variable "Altitude" is shown in Figure 2. In our study, we use the Shuttle Radar Topography Mission (SRTM) DEM product and resample it to a resolution of 0.05°. This dataset has an initial resolution of 90 m at the equator and is provided in WGS84 projection with a 1 arc resolution [44].
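The altitude-concentration correlation quoted above (r ≈ -0.39) is a simple Pearson coefficient; a sketch of the check, with placeholder values rather than the actual in-situ data, is:

```python
# Illustrative Pearson-correlation check between station altitude and
# in-situ HCHO concentration (placeholder arrays).
import numpy as np

altitude = np.array([10.0, 250.0, 900.0, 1500.0, 30.0])   # m
hcho_insitu = np.array([3.1, 2.6, 1.8, 1.2, 2.9])          # ug/m3
r = np.corrcoef(altitude, hcho_insitu)[0, 1]
print(f"Pearson r = {r:.3f}")
```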
Data Processing
After collecting and organizing the data into a formattable structure, we first visualize and preprocess these data. Then, two neural networks are implemented for point and interval estimation by using PyTorch, a well-known deep-learning framework. Our code is available online.
The preprocessed data with the ground truth in-situ HCHO concentration are then divided into two groups, training and testing dataset, to train our models. After that, global VCD data are fed into the model to derive global surface level HCHO concentration.
Preprocessing
In theory, a neural network is able to handle input data from a different distribution; however, a significant defect was noticed in the training process without preprocessing, owing to the highly imbalanced, skewed distribution of the HCHO concentration (both column and in-situ). Therefore, we first applied a log-transformation to the raw data. The logarithm of the HCHO concentration data shows a bell-shaped distribution, and increments in estimation accuracy have also proven the effectiveness of the log-transformation.
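A minimal sketch of this preprocessing step is shown below; the small epsilon guarding against zero values is our addition and is not stated in the paper.

```python
# Log-transform the skewed HCHO values before training, and invert the
# transform when mapping model outputs back to concentration units.
import numpy as np

def log_transform(x, eps=1e-9):
    return np.log(np.asarray(x, dtype=float) + eps)

def inverse_log_transform(z, eps=1e-9):
    return np.exp(np.asarray(z, dtype=float)) - eps
```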
Neural Network Architecture
As a universal function approximator, the neural network plays a vital role in helping us derive the point and interval estimations of the HCHO concentration. But instead of training a single network to get these estimations jointly, two separate neural networks are constructed for point and interval estimation respectively, because several experiments which we carried out indicated that a joint model always has to compromise between point estimation and interval estimation, thus greatly reducing the accuracy of point estimation.
Like ordinary multi-layer perceptrons, each neural network in our model contains three input nodes and three BFR blocks (with the ReLU in the last block disabled). The network for point estimation has one output node, and the other network for interval estimation has two. The structure of our model is shown in Figure 3. For the sake of stabilizing the training and prediction procedure, instead of stacking full-connection and non-linear activation layers, we proposed to stack BFR blocks, which are made up of a batch normalization layer, a full connection layer, and a ReLU activation layer sequentially.
Batch normalization (BN) was first introduced to address Internal Covariate Shift, a phenomenon referring to the unfavorable change of data distributions in the hidden layers. Just like data standardization, BN forces the distribution of each hidden layer to have exactly the same means and variances dimension-wise, which not only regularizes the network, but also accelerates the training procedure by reducing the dependence of gradients on the scale of the parameters or of their initial values [45].
A full connection (FC) layer is connected immediately after the BN layer in order to provide a linear transformation, where we set the number of hidden neurons to 50. The output from the FC layer is non-linearly activated by the ReLU function [46,47].
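A minimal PyTorch sketch of this architecture is given below. The three input features (log VCD, height, altitude), the 50 hidden units, the three stacked BFR blocks with the final ReLU disabled, and the one- versus two-node outputs follow the text; everything else (class and function names, initialisation) is an assumption.

```python
# Hedged sketch of the BFR-block networks described above (PyTorch).
import torch.nn as nn

class BFRBlock(nn.Module):
    """Batch normalization -> Fully connected -> ReLU (optionally disabled)."""
    def __init__(self, in_dim, out_dim, activate=True):
        super().__init__()
        self.bn = nn.BatchNorm1d(in_dim)
        self.fc = nn.Linear(in_dim, out_dim)
        self.act = nn.ReLU() if activate else nn.Identity()

    def forward(self, x):
        return self.act(self.fc(self.bn(x)))

def build_network(n_outputs):
    """n_outputs = 1 for point estimation, 2 for (lower, upper) bounds."""
    return nn.Sequential(
        BFRBlock(3, 50),                          # inputs: log VCD, height, altitude
        BFRBlock(50, 50),
        BFRBlock(50, n_outputs, activate=False),  # last block: no ReLU
    )

point_net = build_network(1)
interval_net = build_network(2)
```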
Loss function
Objective functions with suitable forms are crucial for stochastic gradient descent algorithms to converge during training. Though point estimation only needs to take precision into consideration, two conflicting factors are involved in evaluating the quality of interval estimation: a higher confidence level usually yields an interval with greater length, and vice versa.
Point estimation loss. Instead of more elaborate forms, we found that an L1 loss is sufficient for rapid training. Interval estimation loss is relatively complex compared to the point estimation loss. The QD-loss takes the confidence level and interval length into consideration simultaneously [38]. On one hand, to control the confidence level of the interval estimator, a tolerance parameter α is set to indicate at most how many intervals (proportionally) failing to cover the true value can be tolerated. We set multiple values of α, including 0.05, 0.10, and 0.20, in our model to derive interval predictions of various confidence levels and average coverage lengths, and it was verified that a higher α yields shorter intervals.
On the other hand, the average length of the intervals, subject to the coverage probability exceeding 1 − α, should be minimized. However, intervals that fail to capture their corresponding data point should not be encouraged to shrink further, so only the lengths of capturing intervals are penalized. Capture is measured with a soft indicator k̃ = σ(s · (y − ŷ_L)) · σ(s · (ŷ_U − y)), which works as a continuous approximation of the "hard" indicator 1{ŷ_L < y < ŷ_U}; the sigmoid function σ provides a differentiable alternative to discrete stepwise functions, and s = 160 is a super-parameter controlling smoothness.
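A hedged PyTorch sketch of this loss is shown below. The soft/hard capture terms and the smoothness parameter follow the description above; the penalty weight lam and the exact normalisation of the coverage penalty are our simplifications of the published QD loss [38], not an exact transcription.

```python
# Simplified quality-driven (QD) interval loss sketch (PyTorch).
import torch

def qd_loss(y_lower, y_upper, y_true, alpha=0.10, soften=160.0, lam=15.0):
    # Soft capture indicator: close to 1 when y_true lies inside the interval.
    k_soft = torch.sigmoid(soften * (y_true - y_lower)) * \
             torch.sigmoid(soften * (y_upper - y_true))
    # Hard capture indicator: uncaptured points must not shrink the width term.
    k_hard = ((y_true > y_lower) & (y_true < y_upper)).float()
    # Mean width of the intervals that actually capture their data point.
    mpiw_capt = torch.sum((y_upper - y_lower) * k_hard) / (k_hard.sum() + 1e-6)
    # Penalise coverage (PICP) only when it falls below the target 1 - alpha.
    picp = k_soft.mean()
    coverage_penalty = torch.clamp((1.0 - alpha) - picp, min=0.0) ** 2
    return mpiw_capt + lam * coverage_penalty
```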
Point Estimation
The point estimation model in this study shows a relatively high accuracy and is generally consistent with previous studies on the vertical distribution of HCHO. Figure 5 shows the point estimation value of in-situ concentration with the change of vertical column density (VCD) and height, when altitude is fixed at sea level. It is seen that in-situ concentration is negatively correlated with height and positively correlated with VCD. To evaluate the performance of our model, statistics including MAE and RMSE were calculated based on the training and testing datasets, respectively. As shown by Table 1, both MAEs and RMSEs are relatively small, which indicates that the model performs well in the point estimation. By loading the global DEM, the logarithm of VCD, and the height (0 m at surface) into the model, the annual average of the global surface HCHO distribution map was derived. As shown in Figure 5, there are generally 6 regions where HCHO surface concentration is high, namely the Amazon area, the south-east U.S., Central and Western Africa, North Eastern India, South East Asia, and North China, with an average concentration of more than 4 μg/m3. The seasonal change of HCHO in these key areas is discussed in section 3.3. The uneven distribution of HCHO concentration on the sea and land surface is also noticed in Figure 6, which shows that the HCHO concentration is relatively lower and more homogeneous on the sea surface than on the land. Statistics given in Table 2 also confirm this. It is seen that the annual mean of surface HCHO concentration is about 2.21 μg/m3 over ocean and 2.77 μg/m3 over land. Cities, as the regions with the densest population, deserve specific attention towards their surface HCHO concentration due to its known and potential harm to the people living there. Table 3 shows the surface concentration of HCHO in some of the typical cities in these regions, where Jakarta and Singapore, two major cities (country) in South East Asia, rank the highest and the second highest, reaching 6.18 and 5.83 μg/m3, respectively.
Interval Estimation
Besides point estimation, the model in this study also provides the estimation of upper and lower bounds of the surface concentration of HCHO, so that the uncertainty, or variability, of the surface concentration can be evaluated. In Figure 6, the relationship between the estimated upper bound, lower bound, and the point estimation is displayed in a 3D space. It is worth emphasizing that the captured uncertainty, or the interval length, delineates the variability of the data itself, not a lower trustworthiness of our model or its estimations. Confidence level, together with the covering length, lays the foundation for the trustworthiness and precision of our interval prediction. As shown in Table 4, the interval estimation model obtains covering rates (the ratio of true values covered by the predicted interval) of 94.41% and 88.74%, exceeding the pre-set confidence levels of 0.9 and 0.8, respectively.
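The covering rate and the average interval length quoted above can be computed directly from the two bound estimates; a small sketch with placeholder array names is:

```python
# Illustrative computation of covering rate (PICP) and mean interval width.
import numpy as np

def coverage_and_width(y_true, lower, upper):
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    covered = (y_true >= lower) & (y_true <= upper)
    picp = covered.mean()            # fraction of true values captured
    mpiw = (upper - lower).mean()    # mean prediction-interval width
    return picp, mpiw
```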
In addition, as expected in section 2.2.1.2, a higher confidence level yields a longer average interval length, which is 4.530 μg/m3 for the 0.9 confidence level, 17% more than 3.864 μg/m3 for the 0.8 confidence level. Such a phenomenon can also be seen in the statistics, shown in Table 4, for the minimum, maximum, and mean values of the upper and lower bounds, respectively, for the two confidence levels. However, the standard deviation of the upper bounds seems to be larger than that of the lower bounds under both scenarios in Table 4. From the density scatter plot between these two, shown in Figure 7, it is seen that the upper bound estimation is not deterministic, though the interval estimation successfully covers the true values (and point estimations, as shall be discussed below) of the surface concentration. Nevertheless, further exploration of the seasonal changes of HCHO in some key areas in section 3.3 could basically explain that seasonal variations of surface HCHO may contribute to the majority of the uncertainty of the interval estimation. The global distribution of the estimated upper and lower bounds is given in Figure 8(a). It shows that the upper and lower bounds generally share the same global pattern, though with different magnitudes, with a range of between 3.77 and 8.83 μg/m3 for the upper bounds and from 0.52 to 1.03 μg/m3 for the lower bounds. The interval length of the 90% confidence interval is 4.77 μg/m3.
Seasonal Changes of HCHO in Some Key regions
To better understand the seasonal variation of surface HCHO, the distribution pattern of four typical months of some key areas where surface concentration is relatively high are analyzed.
America. Figure 9 shows the surface concentration in February, May, August, and November in South America and around the Caribbean Sea. The Amazon Basin, Paraguay, and eastern Central America have a high HCHO surface concentration in November and February, while the south-east coast of the U.S. has the highest concentration in November and is almost free from HCHO pollution in February and May. The Andes Mountains have a significantly low concentration, with a value of less than 0.5 μg/m3. Africa. As shown in Figure 10, there are two regions in Africa whose HCHO surface concentration is relatively high. One is in the south of R. D. Congo around the city of Kolwezi, a mining center with a humid subtropical climate. The surface concentration of HCHO here reaches its maximum in February. The other pollution belt stretches along the Gulf of Guinea, which is famous for its rainforest climate.
Consistency and innovativeness
It is clear that the global surface distribution of HCHO, with point and interval estimation, can be obtained successfully by using the neural network models described above. As shown in Figure 13, the results obtained through the machine learning technique are generally consistent with results from previous works, which were obtained by combining OMI total column HCHO concentration with the GEOS-Chem model from 2005 to 2016, but with less noise across the satellite track. It is seen from the blow-up box shown on the right of the figure, corresponding to each result respectively, that results from previous studies bear a stripe across the satellite track, whereas the new results from this study do not. In addition, the estimation results in this study show a reversed trend in the Cordillera mountain area. Future validation may be needed for this case. However, since this difference occurs in places where the population is sparse, it is not likely to have a perceivable influence on the estimation of cancer risks. The result of the global surface concentration estimation for 2019 gives a closer look at the global distribution pattern of HCHO. Obviously, HCHO tends to prevail on continental plains, instead of on the ocean or in high-altitude areas. According to previous studies, this can be attributed to the scarceness of VOC sources, such as the chemical industry, combustion, and rainforest, which are common precursors of the free radical reactions of HCHO production [46][47][48]. By mapping the distribution of HCHO, two kinds of sources around the world can be distinguished preliminarily. One is plant-related, including the Amazon, South East Asia, and the Gulf of Guinea; the other is human-related, including the North China Plain and the Pearl River Delta [49,50]. More work is needed to accurately identify the sources in these HCHO-polluted areas.
In addition, we introduce the interval estimation of a neural network model into the conversion from VCD to the global surface concentration of HCHO for the first time, increasing the credibility of the model by providing uncertainty information. This new idea can make up for the lack of interpretability of neural network models [51], and is thus useful for the application of neural network models to the field of estimating atmospheric pollutants or health risks in the future.
Limitations and potential improvements
Despite the consistency and innovativeness mentioned before, the shortage of in-situ data is also hindering further improvement of the model accuracy. On the one hand, the existing HCHO in-situ concentration data are seriously insufficient in both the spatial and temporal dimensions. Only the United States monitors HCHO in-situ concentration routinely. Even if ATom data are also adopted, in-situ concentration data in low-latitude regions are still sparse, which may lead to estimation bias in low-latitude areas such as Asia and Africa. On the other hand, it is also difficult to reach a better result by adding more covariates to our model. Experiments with additional covariate inputs, such as latitude and months, have failed with degenerated or overfitting outputs. In addition, the large gap between the true values and the upper bounds from our interval estimation model may suggest a heterogeneous in-situ HCHO concentration distribution in different months or seasons, since the model is required to give interval estimations on the scale of a whole year rather than on a fine time scale. The seasonal changes of HCHO in some key areas, as discussed in section 3.3, have also shown this phenomenon directly.
Therefore, as more HCHO in-situ monitoring networks develop, a larger amount of data from more diverse sites could enable scientists to adopt a careful design of the temporal data input and could help give a better estimation of the in-situ concentration of HCHO. Meanwhile, with more Sentinel-5P data accumulating over time, the model in this study can take more factors, including latitude and seasons, into consideration, which could provide a more precise estimation of global-scale health risk and economic loss based on specific regions and seasons. Besides the significance for health risk, the results from this study can also help research on the generation of photochemical pollution and the concentrations of VOCs, NO2, and other pollutants related to photochemical reactions. HCHO, as one of the most important carcinogens in the outdoor environment [2], has drawn little attention due to the long-standing lack of ground measurements of HCHO in most countries and regions, leading to a shortage of knowledge about health and economic loss. Even if the vertical column density of HCHO is currently available and does settle part of the concern about these issues, it is the ground-level HCHO concentration that reflects the actual amount people are exposed to.
Health risk of HCHO in major cities
Taking 2019 as an example, it is assumed that the HCHO concentration remains the same as in this year. According to the inhalation unit risk estimate from the EPA and population data [4,54], health risks in the main high-risk cities were calculated and are given in Table 5. It is indicated that more than a thousand people have the potential to get cancer due to exposure to HCHO in Jakarta, Dhaka, Bangkok, Kolkata, Beijing, and Guangzhou. Jakarta has the highest number of potential patients due to exposure, with a number of up to 2593. Jakarta, Singapore, Kuala Lumpur, Dhaka, and Lagos have the highest per-capita risk, with 80.34, 75.79, 72.93, 71.63, and 71.37 potential patients per million, respectively. The main high-risk cities in Southeast Asia, a region previously neglected by academia, may become the next hotspots for research on HCHO pollution and health risk.
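The arithmetic behind Table 5 can be illustrated as follows, using the inhalation unit risk quoted in the introduction (about 13 excess cancer cases per million people per 1 μg/m3 of lifetime exposure); the population value in the example is a placeholder, not the figure used by the authors.

```python
# Worked sketch of the lifetime cancer-risk estimate for a city.
UNIT_RISK = 13e-6  # lifetime risk per (ug/m3) of HCHO, per person

def potential_cases(concentration_ug_m3, population):
    return concentration_ug_m3 * UNIT_RISK * population

# Jakarta at 6.18 ug/m3 -> roughly 80 potential cases per million residents.
print(potential_cases(6.18, 1_000_000))  # ~80.3
```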
Conclusions
With the benefit of a quality-driven interval estimation algorithm designed for neural networks, we are able to derive the confidence interval and a precise point estimation of the 2019 global surface HCHO at different confidence levels with a limited amount of data. By mapping the HCHO surface concentration distribution, we found that Southeast Asia, North China, Central and Western Africa, and the rainforest area of Latin America have relatively more serious HCHO pollution than other regions. Major cities in these regions, such as Bangkok, Beijing, Guangzhou, and Singapore, have an annual concentration over 5.00 μg/m3. The health effects of such high levels of HCHO pollution deserve more attention from academia and governments.
Our work paves the way for research on formaldehyde-related cancers, and provides guidance for policy making and insurance pricing. To the best of our knowledge, we are the first to map the global distribution of surface HCHO and provide insights into its potential health risks. With more HCHO VCD data from Sentinel-5P accumulated, a surface HCHO concentration dataset covering a longer period of time will be generated, which will allow a better assessment of the global risk distribution of formaldehyde-related cancers.
The data presented in this study are available at [https://drive.google.com/file/d/10A2VIEHm22DF_gyCufV-pbgUdYYhNJKf/view?usp=sharing].
Avermectin Trunk Injections: A Promising Approach for Managing the Walnut Husk Fly (Rhagoletis completa)
This study examined the larvicidal effect of trunk-injected abamectin and emamectin benzoate against the walnut husk fly (Rhagoletis completa Cresson, 1929). Walnut trees in two locations were injected with the pesticides at different concentrations in two years. For the toxicokinetic studies, the active ingredient content was measured in the leaves, flowers, husks, and kernels using a UHPLC-MS/MS analytical method. The walnut husk fly infestation rates were between 3 and 70% and between 10 and 34% for abamectin and emamectin benzoate, respectively, and were much lower than those measured for the control. The active ingredient content in the walnut husk showed a positive correlation with the larvicidal effect. The injections had a measurable but unsatisfactory insecticidal effect in the second year, when the economic threshold was exceeded. Trace amounts of the active ingredients were detected in the flowers. The residue analysis showed a declining concentration trend in the leaves over time. The largest quantities were detected in the leaves (≤439 ng/g of abamectin; ≤19,079 ng/g of emamectin benzoate), with concentrations in the husks orders of magnitude lower (≤5.86 ng/g; ≤50.19 ng/g). The measurements showed no active ingredient residue above the MRLs in either fresh or dried kernels. The results indicate that trunk injections of abamectin, as well as trunk injections of emamectin benzoate, have the potential to suppress walnut husk fly populations.
Introduction
The English walnut (Juglans regia L., 1753) is one of the most widely grown nuts in Europe, with a cultivated area of 154,160 ha and a total yield of 344,728 t/year [1]. Recent years have seen drops in the volumes and average yields of European walnut production despite the establishment of new orchards [1,2], which is in part due to the detrimental effects of climate change and, mainly, the appearance of the walnut husk fly (Rhagoletis completa Cresson, 1929). This invasive pest, which is native to central and eastern America and northeast Mexico, was first recorded in Europe in Switzerland [3][4][5] and was first identified in orchards in 1991 in Italy [6], from where it spread to France, Spain, Germany, Austria, Croatia, Slovenia, and Hungary [7]. As the species has not yet reached the ecologically delimited boundaries of its distribution, it is expected to arrive in all walnut-growing areas of Europe and Asia [7,8].
In Europe, its primary threat is to the English walnut (Juglans regia); however, there are significant differences among cultivars in susceptibility [9][10][11]. In the Carpathian Basin,
Injection Method
The experimental design was completely randomized. Three replicates were set up for each treatment. For the study, a total of 45 trees were used, and two types of controls were designated: (1) a water-based control (C aq.), with the injection of only water, and (2) no injection (C no inj.).
Two different tools based on similar principles were used for the injections. Both tools were connected directly to the hole previously drilled for the injection. The injection points were 20 cm above ground level. After the injections, the wounds were closed with a tree gel (FAGÉL, FÉNYLAKK Kft.).
In the case of the tool Treenject ( Figure S1), 4-8 (depending on the trunk diameter) 50 mm deep holes with a 3.5 mm diameter were drilled around the trunks at equal distances using an electric drill and a clean, sharp bit. This tool can inject a small (10-20 mL/tree) amount of liquid with a maximum pressure of 12.6 bar. The other tool ( Figure S2) was a combination of a pressurized rubber bag and an applicator pipe (Ynject GO, Fertinyect S.L., Córdoba, Spain). The application with the second tool involved four 6.5 mm diameter drilled holes with a depth of 50 mm around the trunk. Using this tool, we could inject a high (60-200 mL/tree) amount of liquid.
The pesticide products containing 18 g/L of ABA (Vertimec 1.8 EC, Syngenta, Basel, Switzerland) and 95 g/L of EMA (Revive II, Syngenta) were used separately, as they are readily available in the Hungarian market. The formulation of the first product was developed for foliar application, while the second one was specially designed for trunk injection. The treatments took place during the trees' intensive growth phase, approximately 4-6 weeks before the expected emergence of adults (Table 1). The injections were performed between 10 a.m. and 3 p.m. on sunny days.
Sampling
The plant samples were randomly collected from all parts of the tree canopy to examine insecticidal effect and pesticide residues. When taking the samples, 60 compound leaves were collected from each tree replicate (with the tree being divided into four parts according to the four cardinal points) at the times indicated in Table 1, as was 500 g of flowers (for the chemical analysis, we divided it into three parallel samples) from each tree in full bloom. During the fruit sampling, 100-150 husked walnuts were collected in Raschel bags from each tree. After evaluating the insecticidal effect (Section 2.4), three parallel samples of 500 g of husk were made from the collected fruits to determine the active ingredient content for each tree. To avoid the unwanted wetting of the samples, collection was performed on sunny days. Abbreviations: ABA = abamectin (18 g/L); EMA = emamectin benzoate (95 g/L); AI = active ingredient; C aq. = injected with distilled water; C no inj. = no injection was performed; T = Treenject, Y = Ynject GO.
The samples collected for the residual analyses were kept at −80 °C until analysis. Half of the walnut fruit samples were also stored unfrozen, simulating the traditional postharvest technology (storing at room temperature in a well-ventilated room, spread out), for 4 weeks following harvesting.
Insecticidal Effect and Rating of the Damage
The insecticidal effect was evaluated based on the presence of live larvae in the walnut husk. The investigation was performed in September, when the husks split but before they dropped from their shells. The husks were examined within 1-2 days of collection by cutting them into eight slices. The fruits were classified into two groups based on whether the husk contained live larvae or not, to determine the infestation rate. The infestation rate was determined as the percentage of total examined nuts (based on 100-150 walnuts per tree replicate) that were damaged, indicating the presence or absence of larvae. Although this variable is a good indicator of the direct insecticidal effect, it is not equivalent to the degree of economic damage, which is more important to walnut production. In light of this fact, a rating to express the extent of economic damage was added to the evaluation in 2021.
The extent of economic damage was estimated based on the frequency of fruits displaying black spots and the occurrence of such spotting on each nut. Of these two parameters, the effect of the treatment on economic or production quality was classified in the group corresponding to the higher rating. The trees were classified into six groups according to the degree of damage (Table 2).
Short- and Long-Term Efficacy of the Endotherapy
Residue determination was carried out over both short- and long-term periods to evaluate the pesticides' toxicokinetic behavior. For the short-term study, the residue content was measured in leaf samples collected three times during one vegetative period (35, 71, and 106 DAT, Trial II., Table 1). The residue content was also measured in husks and kernels (108 DAT, Trial I., and 111 DAT, Trial II., Table 1).
The long-term efficacy study was performed both in the first year and in the second year, following the winter dormancy period. The pesticide residues in the leaf samples were measured over a longer period of time (57, 108, 138, 348, 463, and 483 DAT, Trial I.). The residues were measured in husks (483 DAT, Trial I.), kernels (483 DAT, Trial I.), and flowers (348 DAT, Trial I.; 337 DAT, Trial II.) in the second year (Table 1). To evaluate the second-year insecticidal effect, the infestation rates were also examined in the same manner as described above (483 DAT, Trial I., Table 1).
Chemical Analysis
The pesticide residues in the samples were extracted in accordance with EN 15662:2018 [49] using a citrate-buffered QuEChERS sample preparation method. The chemicals used for the residue measurement are described in Kmellár et al. (2010) [50]. In the case of the kernel samples, a defatting step was also integrated into the procedure, which included freezing out and dSPE cleaning with a C18 sorbent. The method involved an extraction with acetonitrile, which facilitates the determination of pesticide residues using a UHPLC-MS/MS-linked technology. For determining the ABA content, an SPE cleaning procedure was also applied after the QuEChERS extraction.
Regarding the instrumental parameters, the Single Residue Method for the AIs published by the Community Reference Laboratories for Residues of Pesticides was applied [51,52]. The method was validated for the detection limit (DL), quantification limit (QL), extraction efficiency, linearity, and matrix effect, in accordance with the SANTE guidelines [53]. The DL values in the cases of husk, flower, leaf, and kernel were different (ABA: 1.2 ng/g; 2.4 ng/g; 2.4 ng/g; 2.4 ng/g; 2.4 ng/g; EMA: 0.1 ng/g; 0.2 ng/g; 0.2 ng/g; 0.2 ng/g; 0.2 ng/g, respectively). The calibration curves were linear up to a concentration of 1000 ng/mL of ABA and 1880 ng/mL of EMA, and the matrix effects calculated with the Matuszewski equation [54] were between 61% and 95% and between 70% and 120%, respectively.
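For readers unfamiliar with the matrix-effect calculation, a minimal sketch is given below, assuming the slope-ratio form of the Matuszewski equation, ME(%) = 100 × slope(matrix-matched) / slope(solvent); the calibration responses shown are invented for illustration and are not the study's data.

```python
# Minimal sketch of the matrix-effect calculation in the slope-ratio form of the
# Matuszewski equation. All calibration points below are hypothetical.
import numpy as np

def slope(conc, response):
    # Least-squares slope of the calibration line.
    return np.polyfit(conc, response, 1)[0]

conc = np.array([10, 50, 100, 500, 1000])                  # ng/mL standards
resp_solvent = np.array([210, 1040, 2110, 10400, 20800])   # peak areas in neat solvent
resp_matrix = np.array([180, 890, 1790, 8900, 17800])      # peak areas in matrix-matched standards

me_percent = 100 * slope(conc, resp_matrix) / slope(conc, resp_solvent)
print(f"Matrix effect: {me_percent:.0f}%")  # values below 100% indicate ion suppression
```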
Statistical Analysis
The statistical analyses were performed using IBM SPSS Statistics 27 and Excel 2016 software. As the conditions of the ANOVA were not met for the AI concentrations, a Kruskal-Wallis test was performed to compare the ABA and EMA concentrations. The Marascuilo [55] procedure was used to compare the treatments based on the infestation rates of the trees. The infestation rate (percentage of fruits with live larvae) was calculated for each treatment and presented as total infestation across all replicates. Spearman's rank correlation was used to examine the associations between the variables (injected quantity of AI, infestation rate, residual content). The results were considered significant if p < 0.05.
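As an illustration of the pairwise comparison of infestation proportions, the sketch below implements the Marascuilo procedure in its usual form (absolute difference of proportions compared with a chi-square-based critical range); the counts are hypothetical and not taken from the trials.

```python
# Minimal sketch of the Marascuilo procedure for comparing infestation proportions.
# The group counts are hypothetical, chosen only to show the mechanics.
from itertools import combinations
from math import sqrt
from scipy.stats import chi2

# treatment: (infested fruits, examined fruits) -- hypothetical counts
groups = {
    "EMA high": (12, 130),
    "EMA low": (19, 135),
    "ABA high": (31, 133),
    "Control": (118, 128),
}

alpha = 0.05
k = len(groups)
chi2_crit = chi2.ppf(1 - alpha, df=k - 1)
p = {g: x / n for g, (x, n) in groups.items()}

for g1, g2 in combinations(groups, 2):
    n1, n2 = groups[g1][1], groups[g2][1]
    diff = abs(p[g1] - p[g2])
    crit = sqrt(chi2_crit) * sqrt(p[g1] * (1 - p[g1]) / n1 + p[g2] * (1 - p[g2]) / n2)
    verdict = "significant" if diff > crit else "not significant"
    print(f"{g1} vs {g2}: |diff| = {diff:.3f}, critical range = {crit:.3f} -> {verdict}")
```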
Insecticidal Effect
The infestation in the fruits of the treated trees was significantly lower in the year of injection than in the control group ( Figure 1). The husk showed the earliest evidence of the larvicidal effect, and a microscopic examination of the oviposition sites indicated that the 1-2 mm blackish patches were the remains of larvae that had died very early in development beneath the epidermis ( Figures S3 and S4).
On the trees treated with EMA, the percentages of walnut husks containing live larvae were lower (14% and 9%, respectively) than on trees treated with ABA (34% and 23%, respectively). For both AIs, the higher dose resulted in a better insecticidal effect, although the dose-response relationship was not unequivocally significant. The Marascuilo comparison showed that the higher EMA dose had a significantly better larvicidal effect than both doses of ABA, while the higher dose of ABA and the lower dose of EMA had intermediate effects. Trees injected with water and untreated trees experienced high infestation rates, with husks containing live larvae ranging between 84% and 94%. On trees injected with EMA, the AI content in the walnut husk was 7.65 ± 0.96 ng/g (mean ± SE) for the lower dose and 16.61 ± 1.55 ng/g for the higher dose. In the control, the AI content was below the detection limit (Figure 1).

The percentage of husks with live maggots from the trees treated in 2021 (Trial II.) was significantly lower than for the trees subjected to the control treatments. The infestation rate was lowest on the trees treated with EMA (22% and 34%), with ABA showing slightly worse results (42% and 70%), while the control trees were 100% infested (Figures 2 and 3). The lowest dose of ABA resulted in the weakest insecticidal effect; however, a significantly better effect was found with increased doses of ABA (Figure 2). Accordingly, not only were the infestation rates lower with EMA than with ABA, but the differences in infestation rates between the injected AI quantities were also smaller. On trees injected with EMA, no difference was observed in the insecticidal effect between the lowest and the highest doses (Figure 3).

The ABA residue measured in the walnut pericarp showed a slightly positive correlation with the quantity of the injected AI (r_s = 0.478) and a negative correlation with the degree of infestation (r_s = −0.455), although these correlations were not statistically significant (p = 0.116 and p = 0.138, respectively). The average ABA content in the husk was between 1.67 ± 0.28 and 5.86 ± 1.14 ng/g, though these seemingly different values did not differ significantly (Kruskal-Wallis H = 3.00, df = 3, p = 0.392).

In the case of EMA, a negative correlation (r_s = −0.867, p = 0.002) was identified between the residue content and the infestation rate, and there was no significant correlation between the injected and the residual quantity of AI (r_s = 0.316, p = 0.407) (Figure 3). The average content was between 9.69 ± 3.23 and 50.19 ± 8.14 ng/g, but the differences were not significant in this case either (Kruskal-Wallis H = 5.067, df = 2, p = 0.079).

In the 2021 economic damage assessment of the fruits, all trees treated with EMA and the trees treated with the highest dose of ABA reached an acceptable damage score of 2 or less. The medium dose of ABA was partly acceptable, and its smallest dose fell into the unacceptable economic damage category (Table 3).

Using the two kinds of injection tools (at 1.8 g AI/tree), no significant difference was identified either between the infestation rates or between the residue contents of the husk in the case of ABA (Figure 2). However, in the leaves, the use of the Ynject GO tool resulted in more than twice the residue concentration at all three sampling times compared to the Treenject tool (Figure 4a).
Short-Term Residue Monitoring in the Leaves
In general, the pesticide concentration determined in the leaf samples gradually declined over the sampling period. The smallest injection quantity was an exception, as the active ingredient content of the samples collected on the 71st day was slightly higher for both AIs than in the samples collected on the 35th day (Figure 4). The pesticide residue content in colored leaves taken during the autumnal leaf drop was lower than in green leaves collected at the same time, which were mostly still assimilating ( Figure 5).
During the short-term monitoring, a positive correlation between the injected and the residual AI content was shown in the leaves (Figure 4a), with r_s = 0.819 (p = 0.001) on the 35th day, r_s = 0.785 (p = 0.002) on the 71st day, and r_s = 0.853 (p < 0.001) on the 106th day. The positive correlation between the injected and the residual EMA content in the leaves was weaker but significant on the 35th day, with r_s = 0.685 (p = 0.042), and was not significant on the 71st and 106th days (r_s = 0.422, p = 0.258 and r_s = 0.580, p = 0.102, respectively). According to Figure 4b, for reasons unknown, the seemingly logical connection between the injected and the mean residual EMA content no longer existed on the 71st and 106th days at the two higher concentrations.
Long-Term Efficacy of the Endotherapy
Leaf samples were collected throughout two vegetation periods, between 57 and 483 DAT (Trial I., Table 1). Figure 5 shows that both AIs appeared for at least two years, though the quantity in the second year was orders of magnitude lower than in the first year.

In terms of a long-term effect, the most important question focuses on the residue content in the husk, where the larvae develop. Biological efficacy as well as residue content were investigated in the husk samples collected in the second year after injection (483 DAT). Although these trees still displayed a detectable insecticidal effect, it was much lower than in the first year: the infestation rate was 65% in ABA-injected trees and 60% in EMA-injected trees, with the controls showing a 91-92% infestation rate (Figure 6).

Figure 6. Infestation rate and residue content in the husks (mean ± SE) in the second year following the trunk injections. Trial I. For treatments in columns marked with the same letter, the Marascuilo comparison shows that the infestation rate is not significantly different (p > 0.05). Location: Taksony, injection date: 4 June 2020, sampling date: 30 September 2021 (483 DAT); detection limit: 1.2 ng/g ABA, 0.1 ng/g EMA. AI = active ingredient; C aq. = injected with distilled water; C no inj. = no injection was performed; ABA = abamectin; EMA = emamectin benzoate. * <DL, ** trace.

Residue in the Flowers

The ABA content of the flower samples collected from trees injected the previous year was below the detection limit, and only a trace of EMA could be detected (Table 4).

Residue in the Kernels

The ABA residual content in the kernels did not exceed the detection limit (2.4 ng/g) in either the short-term or the long-term monitoring, while the EMA content stayed below the detection limit (0.2 ng/g), with one exception, where it amounted to 0.5 ng/g (tree No. 11., DAT 108).
Discussion
Trunk injection led to the appearance of both AIs in green plant parts, including the leaves and husks, and EMA was also detectable in the flowers. The concentration of EMA in the plant parts was higher than that of ABA due to its basic nitrogen, which becomes protonated, increasing the molecule's polarity and water solubility [45].
On the injected trees, eggs were laid in the same manner and numbers as on the control trees. No repellent effect was observed, and it is assumed that the AIs did not exert an ovicidal effect on the eggs laid in the husk. They did, however, kill the larvae after they hatched and started to feed. These dead, shriveled maggots and the eggshells were found under the exocarp. The oviposition sites were easily distinguished on the husk as small, black, dry spots, though these did not cause any reduction in quality or any secondary infestation in the kernel, nor did they have any negative effect on the success of walnut production (Figure S5). Trunk injection can thus be used to prevent larval damage; it cannot be used to kill adults or prevent egg laying.
The infestation rates were significantly different between the two sites for both pesticides (Figures 1-3). The authors believe that this was due to the difference in distribution of the pesticide in the two years, as transpiration is greatly influenced by the soil water content [25,28,56], and to the increased presence of the pest. When considering these data, the strictness of the evaluation method must be taken into consideration, as the infestation of the husk does not necessarily mean that the kernel is unsaleable. If any surviving larvae remain in the husk, the AIs can still exert their sublethal effects, slowing maggot development. The husk may turn black in part, but the economic damage might be low, as the kernel may retain its marketability. Our data about the economic damage related to the produced walnut showed that the high dose of ABA and all EMA treatments provided a sufficient insecticidal effect ( Table 2).
In the case of the EMA treatment, in contrast to the ABA treatment, the infestation rate did not consistently correspond to the dose applied. However, there was a negative correlation between the residue content of the husks and the infestation rate. This can be a result of the fact that the trunk diameters and the canopies of the trees were not always directly proportional, though the manufacturers clearly recommend determining the dose based on the trunk diameter at breast height (DBH) [57,58].
The higher infestation in the second year indicated that the treatment must be repeated annually in the case of the examined pesticides. Although the difference in the infestation rates was significant between the treated and the control trees, the resulting drop of the biological effect in the second year showed that the treatments were no longer effective for practical purposes. This can be explained by the low pesticide residues in the pericarp, which also showed a sharp drop compared to the first year (Figures 1 and 6). Several studies reported the long-lasting efficiency of trunk injection [33,34,37,59], but the differences between tree species, pests, and pesticides perhaps make it impossible to draw comparisons based solely on the literature [58,60]. The decrease in effectiveness seen in the second year is consistent with the decreasing concentration of pesticide residues in the leaves over time ( Figure 5).
In most cases, the concentration of the active substances in the leaves decreased over the course of the sampling, except for the smallest treatment concentration (0.9, ABA; 1.9, EMA), where a slightly higher value was measured on the 71st day than on the 35th day ( Figure 4). This phenomenon can be explained by the fact that the smaller concentration took a longer time to reach the canopy, meaning the maximum concentration in the canopy was achieved somewhere between the 35th and the 71st day.
One disadvantage of the injection may be the effect that wounds on the trunk can exert on the performance and health of the wood. Regardless of the dosage, the trunks treated with the foliar spray formulation (Vertimec 1.8 EC, Figure S6) displayed extensive phytotoxicity, while no phytotoxicity was observed with the plant protection product developed specifically for trunk injection (Revive II, Figure S7). Although the phytotoxic symptoms disappeared in the second year following the treatment, the wound closure was not ideal, in contrast to what was observed for the product intended for trunk injection. It is strongly recommended to use products formulated specifically for trunk injection, without any additives that may have detrimental effects on the trunk tissue.
Due to the risks to the health of the trunk [32,61] and the necessity of repeating the treatment every year, it is recommended to develop a non-invasive method that requires no drilling. This could be a form of needle-based injection [28] or a basal bark spray treatment with a special adjuvant system [62][63][64][65][66]. According to our assumption, planting a stainless valve in the trunk could also serve this purpose, eliminating the need to create holes every year.
The concentration of pesticide residues in the kernel did not exceed the MRL (Maximum Residue Limit) values specified in the EU pesticide database (ABA: 0.02 mg/kg; EMA: 0.01 mg/kg), meaning that the trunk injection was suitable from the perspective of food safety.
The authors believe that trunk injection is fundamentally more eco-friendly than foliar sprays. Although it offers a number of advantages [67,68], certain risks were also identified (wound healing, the role of the formula, the uniform and sufficient translocation of the AIs) that appear to be avoidable with the introduction of additional developments [69][70][71]. The study found that the flowers from the injected walnut trees did not contain any detectable level of ABA residue, and trace amounts of EMA residue were present in the flowers, similar to the levels found in apple nectar and pollen by Coslor et al. (2019) [72]. This is a promising result, as it suggests that the injections had a minimal impact on the presence of pesticide residues in the flowers. However, the difficulties in assessing the pesticides' impact on flower-visiting insects require complex tests in the future [73,74].
This alternative method can be used to achieve results analogous to those obtained with traditional foliar sprays, where 4-5 foliar spray treatments are usually required to ensure coverage throughout the emergence period. In terms of costs, it is similar to traditional spraying on an annual basis. In Hungary, walnut production is based both on individual orchards and on backyard-kept walnut trees. Protecting walnuts grown within urban areas is limited or not realistic, which is why the authors consider trunk injection to be an alternative solution suitable for these areas.
Trunk injection in walnut against the WHF probably also has a beneficial insecticidal effect against other major pests, such as the codling moth (Cydia pomonella L., 1758), which should be examined in the future.
Conclusions
Our research provides insights into how the trunk injection technology can protect plants from the WHF, walnut's most important pest. The application of both ABA and EMA via trunk injection was found to be successful in preventing damage caused by the WHF, but only in the year of the injection and not in the following year.
The residues measured in either fresh or dried kernels did not exceed the maximum residue limit, which indicates the safety of the tested compounds applied via endotherapy for walnut pest control.
Our study highlights the potential of the trunk injection technology as a viable option for the management of the WHF, providing practical guidance.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/horticulturae9060655/s1. Figure S1: Treenject injection tool; Figure S2: Pressurised rubber injection bag (Ynject GO); Figure S3: The oviposition site on the green walnut husk (left side) and the dead young maggots in the mesocarp (right side); Figure S4: The live maggots in the pericarp (right side) and the maggots killed as a result of the treatment (left side); Figure S5: Small, black, dry spots on the husk as a consequence of oviposition; Figure S6: Side effects due to the use of a plant protection product not suited for injection in the year of the treatment (left, 15 October 2020) and one year later (right, 18 May 2021); Figure S7: Absence of side effects due to the use of a plant protection product intended for trunk injection in the year of the treatment (left, 15 October 2020) and one year later (right, 18 May 2021).
Association between gut microbiota and diabetic nephropathy: a mendelian randomization study
Background The correlation between diabetic nephropathy (DN) and gut microbiota (GM) has been suggested in numerous animal experiments and cross-sectional studies. However, a causal association between GM and DN has not been ascertained. Methods This research adopted MR analysis to evaluate the causal link between GM and DN derived from data acquired through publicly available genome-wide association studies (GWAS). The study utilized the inverse variance weighted (IVW) approach to assess causal association between GM and DN. Four additional methods including MR-Egger, weighted median, weighted mode, and simple mode were employed to ensure comprehensive analysis and robust results. The Cochran’s Q test and the MR-Egger method were conducted to identify heterogeneity and horizontal pleiotropy, respectively. The leave-one-out approach was utilized to evaluate the stability of MR results. Finally, a reverse MR was performed to identify the reverse causal association between GM and DN. Results According to IVW analysis, Class Verrucomicrobiae (p = 0.003), Order Verrucomicrobiales (p = 0.003), Family Verrucomicrobiaceae (p = 0.003), Genus Akkermansia (p = 0.003), Genus Catenibacterium (p = 0.031), Genus Coprococcus 1 (p = 0.022), Genus Eubacterium hallii group (p = 0.018), and Genus Marvinbryantia (p = 0.023) were associated with a higher risk of DN. On the contrary, Class Actinobacteria (p = 0.037), Group Eubacterium ventriosum group (p = 0.030), Group Ruminococcus gauvreauii group (p = 0.048), Order Lactobacillales (p = 0.045), Phylum Proteobacteria (p = 0.017) were associated with a lower risk of DN. The sensitivity analysis did not identify any substantial pleiotropy or heterogeneity in the outcomes. We found causal effects of DN on 11 GM species in the reverse MR analysis. Notably, Phylum Proteobacteria and DN are mutually causalities. Conclusion This study identified the causal association between GM and DN with MR analysis, which may enhance the understanding of the intestinal-renal axis and provide novel potential targets for early non-invasive diagnosis and treatment of DN.
Introduction
Diabetic nephropathy (DN) is widely recognized as the leading cause of chronic kidney disease (CKD) and is considered one of the microvascular complications of diabetes mellitus (DM) (Gnudi et al., 2016). Around 30 to 40% of individuals with diabetes eventually develop DN. The prevalence of DN has demonstrated a consistent upward trend, paralleling the evolution of society and the economy, alongside shifts in individual lifestyles and dietary patterns (Saeedi et al., 2019). DN is one of the key drivers of end-stage renal disease (ESRD), contributing to around 54% of newly diagnosed cases of ESRD and 30% of individuals requiring maintenance dialysis (Zhang L. et al., 2020). Moreover, DN can result in severe cardiovascular complications (Verma et al., 2020). The pathogenesis of DN is characterized by complex interplays of several factors (Samsu, 2021). At present, the treatment of DN mainly involves glycemic control and pharmacological interventions, including angiotensin II receptor blockers (ARBs) or angiotensin-converting enzyme inhibitors (ACEIs), aimed at regulating the condition (Doshi and Friedman, 2017; Cole and Florez, 2020). However, the risk of developing ESRD remains quite high (Anders et al., 2018; de Boer et al., 2020; Samsu, 2021). Therefore, enhancing our understanding of the pathogenic mechanisms of DN and exploring alternative therapeutic targets is imperative.
The human intestinal microbiome has been described as a "second genome" that regulates health (Gilbert et al., 2018). The gut microbiota (GM) has been found to participate in the pathogenesis of various ailments by influencing the permeability of the intestinal barrier, the inflammatory response, and the balance of the immunological microenvironment (Lehto and Groop, 2018). Increasing evidence suggests that the gut microbiota and kidney diseases can reciprocally influence each other through the induction of metabolic, immunological, and endocrine changes, a phenomenon commonly referred to as the "intestinal-renal axis" (Altamura et al., 2023). Several studies have demonstrated the substantial importance of gut microbial dysbiosis in the initiation and deterioration of DN (Wang Y. et al., 2022; Zhang et al., 2022; Zhao et al., 2023). Several studies of the GM have noted particular alterations in the intestinal microbial composition of patients with DN, including an increase in the abundance of Clostridium and Aspergillus and a decrease in the level of Rhodococcus (Salguero et al., 2019; Du et al., 2021; Wang Y. et al., 2022). Additionally, studies have demonstrated that a multitude of factors can contribute to the impairment of the intestinal mucosa and an increase in permeability among individuals with DN, which allows the entry of metabolites such as indole and p-cresol into the bloodstream, subsequently instigating renal injury (Lehto and Groop, 2018; Lau et al., 2021). Numerous animal experiments have shown that pharmaceutical interventions induced alterations in the gut microbial composition of DN mice, subsequently leading to an amelioration of the disease's attributes. This improvement could potentially be attributed to mechanisms including the reduction of lipopolysaccharide (LPS)-producing microbes and the augmentation of short-chain fatty acid (SCFA)-producing microbes (Chen et al., 2022; Deng et al., 2022).
The correlation between DN and GM has been well-established, and substantial research has been undertaken to unravel the potential mechanisms and corresponding therapeutic strategies involving GM and the kidney injury induced by their metabolites. However, the majority of these studies have mainly relied on animal models and cross-sectional investigations. Furthermore, the human GM is a complex and extensive ecosystem, which creates difficulties in identifying the causal association between certain GM and DN.
The Mendelian randomization (MR) analysis method has been widely applied in the field of epidemiological causal inference in recent years (Lehto and Groop, 2018). This method, based on Mendel's second law, employs single-nucleotide polymorphisms (SNPs) associated with clinical phenotypes as instrumental variables (IVs) to build models that identify causal associations between exposures and outcomes at the genomic level (Davey Smith and Hemani, 2014). It achieves this without factoring in the influence of confounding variables (Holmes et al., 2017). The reliability of MR analysis has been substantiated by multiple studies. Furthermore, several studies have successfully utilized MR analysis to identify causal relationships between various exposures and outcomes (Yeung and Schooling, 2021; Liu et al., 2022; Ma et al., 2022).
In this study, we aim to assess the causal association between GM and DN by employing MR analysis, thereby enhancing our understanding of the intestinal-renal axis and unveiling novel therapeutic targets for the early non-invasive diagnosis and treatment of DN.
Study design
This study explored the causal association between GM and DN and validated the robustness of the results through two-sample MR analysis (Figure 1). Three crucial assumptions of MR were satisfied to ensure accuracy throughout the entire process. Firstly, the selected IVs should demonstrate significant associations with the exposure. Secondly, the IVs should be independent of any conceivable confounding factors that may affect both exposure and outcome. Thirdly, the IVs should influence the outcome only via the exposure.
Data source
The genome-wide association study (GWAS) summary data for the human GM were acquired from the Microbiome Genome (MiBioGen) consortium. These data were obtained from an extensive multi-ethnic GWAS meta-analysis including a total of 18,340 individuals of European descent from 24 different cohorts. The investigation specifically focused on the GM, with a total of 211 distinct microbial species documented (Kurilshikov et al., 2021); these summary data are publicly accessible online. A total of 211 intestinal bacterial species were chosen as exposure factors and subsequently classified into five distinct biological categories: phylum, class, order, family, and genus. DN was regarded as the outcome in this study. The statistics were acquired from the IEU Open GWAS project database, which contained 3,283 cases and 210,463 controls originating from European populations, with a total of 16,380,453 SNPs identified. Diabetic nephropathy was identified as an outcome when glomerular disorders were present in individuals with diabetes mellitus, in accordance with the ICD-10 criteria (code: N08.3*). The GWAS data for DN are also publicly accessible online.
Selection of instruments variables
For the purpose of obtaining qualified IVs, a further screening was conducted on the SNPs identified by the GWAS. The SNPs used for MR analysis had to be strongly linked with the exposure, satisfying the association assumption of MR. Firstly, to ensure the inclusion of a satisfactory number of IVs, we selected SNPs with p-values below the locus-wide significance level (1 × 10−5). Secondly, the formula F = (R² / (1 − R²)) × ((N − K − 1)/K) was used to compute the F value, and IVs exhibiting F values below 10 were omitted, thereby addressing potential bias associated with weak IVs and ensuring the robustness of the association between the selected IVs and the exposure factors (Li et al., 2023). This approach aims to prevent deviations and to enhance the validity of the findings. Thirdly, to ensure the independence of the IVs from one another, clumping was executed using the TwoSampleMR package in R, in accordance with the criterion of linkage disequilibrium (LD). This step employed a SNP linkage disequilibrium (r²) threshold of 0.01 and a clumping distance of 10,000 kb.
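As a worked illustration of the instrument-strength screen, the sketch below applies the F-statistic formula to hypothetical per-SNP variance-explained values; the SNP identifiers and R² values are invented for demonstration.

```python
# Minimal sketch of the F >= 10 instrument-strength screen described above.
# SNP identifiers and R^2 values are hypothetical.

def f_statistic(r2: float, n: int, k: int = 1) -> float:
    """F = (R^2 / (1 - R^2)) * ((N - K - 1) / K) for K instruments in N samples."""
    return (r2 / (1 - r2)) * ((n - k - 1) / k)

n_samples = 18_340  # MiBioGen meta-analysis sample size

# SNP id -> fraction of exposure variance explained (hypothetical values)
candidate_snps = {"rs0000001": 0.0009, "rs0000002": 0.0022, "rs0000003": 0.0004}

kept = {snp: round(f_statistic(r2, n_samples), 1)
        for snp, r2 in candidate_snps.items()
        if f_statistic(r2, n_samples) >= 10}
print(kept)  # SNPs with F >= 10 are retained as instruments
```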
Mendelian randomization analysis
The inverse variance weighted (IVW) method was the main method employed in this study to detect the potential causal association between GM and DN. In addition, four complementary methods-MR-Egger, weighted median, weighted mode, and simple mode-were employed to give a thorough review of the potential associations and to enhance the robustness of the results. In cases of inconsistent results, we prioritized the IVW estimate as the primary result. The IVW method uses a ratio approach to infer the causal impact of the exposure on the outcome through a weighted linear regression model under the assumption that the intercept term is zero. Notably, the IVW method has better efficacy and precision when there is no horizontal pleiotropy among the IVs, resulting in unbiased causal estimates (Davey Smith and Hemani, 2014). The results were considered significant if the p-value of the IVW analysis was below 0.05, suggesting a causal association between the exposure and the outcome. The MR-Egger method employs weighted regressions that include an intercept term, in contrast to the IVW method; the intercept term is used to assess the extent of horizontal pleiotropy among the IVs, and the slope provides the estimate of the causal effect. The weighted median method reduces the rate of type I error and also accommodates the potential failure of some genetic variants. The validity of the weighted mode method remains unaffected when the majority of IVs with comparable causal effects are valid, even if some IVs do not meet the criteria set by the MR method for causal inference (Xiang et al., 2021). Lastly, the simple mode method is less powerful than the IVW method, but it still contributes to the robustness of our findings (Holmes et al., 2017).
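For clarity, the sketch below shows the fixed-effect IVW estimator in its usual ratio form (per-SNP Wald ratios combined with inverse-variance weights); the effect sizes are illustrative and not drawn from the MiBioGen or IEU data.

```python
# Minimal sketch of the fixed-effect IVW estimator: per-SNP Wald ratios are
# combined with inverse-variance weights. All effect sizes are illustrative.
import numpy as np
from scipy.stats import norm

beta_exp = np.array([0.031, 0.054, 0.042])  # SNP -> exposure (taxon abundance) effects
beta_out = np.array([0.012, 0.025, 0.015])  # SNP -> outcome (DN) effects
se_out = np.array([0.008, 0.011, 0.009])    # standard errors of the outcome effects

ratio = beta_out / beta_exp                 # per-SNP Wald ratio
ratio_se = se_out / np.abs(beta_exp)        # first-order approximation of its SE
w = 1.0 / ratio_se**2                       # inverse-variance weights

beta_ivw = np.sum(w * ratio) / np.sum(w)    # fixed-effect IVW estimate
se_ivw = np.sqrt(1.0 / np.sum(w))
p_ivw = 2 * norm.sf(abs(beta_ivw / se_ivw))
print(f"IVW estimate = {beta_ivw:.3f} (SE {se_ivw:.3f}, p = {p_ivw:.3g})")
```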
Sensitivity analysis
The results of MR analysis may be distorted by weak IVs, genetic pleiotropy, and other underlying issues. Therefore, we conducted sensitivity analyses to ascertain the stability of the outcomes. In this study, the MR-Egger method was employed to ascertain the existence of horizontal pleiotropy. The MR-Egger method has become a commonly adopted approach to examine the presence of horizontal pleiotropy by utilizing the intercept term of the MR-Egger regression (Wang F. et al., 2023). If the intercept's p-value exceeds 0.05, the horizontal pleiotropy is not statistically significant, and the exclusion-restriction hypothesis holds true (Yavorska and Burgess, 2017; Yeung and Schooling, 2021). Cochran's Q test was conducted to detect the presence of heterogeneity among IVs for both the IVW and MR-Egger methods (Bowden et al., 2017; Liu et al., 2022; Ma et al., 2022). If the p-value exceeds 0.05, the influence of heterogeneity on the causal effect can be disregarded. Conversely, when heterogeneity is statistically significant (p < 0.05), the IVW random-effects estimator is utilized to mitigate the impact of heterogeneity on the causal effects (Ren et al., 2023). In addition, the leave-one-out method was utilized to assess whether the causal effects observed in the MR analysis were driven by any single IV (Ren et al., 2023). The procedure entailed removing each SNP individually and then comparing the results obtained before and after the removal to determine whether there was a statistically significant change. A p-value above 0.05 after excluding an SNP suggested that the SNP did not have a non-specific influence on the effect estimates (Hemani et al., 2018). The "TwoSampleMR" R package (version 0.5.6, Stephen Burgess, Chicago, IL, USA) was used to perform the two-sample MR analysis between exposure and outcome.
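A minimal sketch of the Cochran's Q heterogeneity check on per-SNP Wald ratio estimates is given below; the ratios and weights are illustrative.

```python
# Minimal sketch of Cochran's Q applied to per-SNP Wald ratio estimates.
# The ratios and weights below are illustrative, not values from this study.
import numpy as np
from scipy.stats import chi2

ratio = np.array([0.39, 0.46, 0.36, 0.52])  # per-SNP Wald ratios (illustrative)
w = np.array([45.0, 60.0, 52.0, 38.0])      # inverse-variance weights (illustrative)

beta_ivw = np.sum(w * ratio) / np.sum(w)
Q = float(np.sum(w * (ratio - beta_ivw) ** 2))
df = len(ratio) - 1
p_het = chi2.sf(Q, df)
print(f"Cochran's Q = {Q:.2f}, df = {df}, p = {p_het:.3f}")
# p > 0.05: heterogeneity negligible; otherwise a random-effects IVW is preferred.
```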
Reverse mendelian randomization analysis
A reverse MR analysis was conducted to detect the causal effect of DN on GM using the same five MR methods, and the robustness of the results was validated through sensitivity analysis.
The selection of instrumental variables
A comprehensive screening process was undertaken on the IVs for a total of 211 distinct taxa within the GM. Following meticulous scrutiny, IVs in linkage disequilibrium and those susceptible to weak instrumental variable bias were excluded. Consequently, a total of 2,280 IVs emerged, meeting the threshold for locus-wide significance (p < 1 × 10−5). These IVs were derived from diverse taxonomic levels within the GM, encompassing 9 phyla, 16 classes, 20 orders, 34 families, and 128 genera. Detailed information is provided in Supplementary Table 1.
Sensitivity analysis
In the context of the sensitivity analysis, the MR-Egger regression analysis indicated that the p-value of the intercept associated with the selected IVs lacked statistical significance (p > 0.05), suggesting that the chosen IVs were not subject to horizontal pleiotropy (Table 1). Furthermore, the heterogeneity of the IVs was assessed utilizing Cochran's Q test, and neither the IVW nor the MR-Egger analysis yielded statistically significant results (p > 0.05), indicating an absence of significant heterogeneity among the selected IVs (Table 1). Lastly, the outcomes of the leave-one-out method are depicted in Figure 5. The outcomes implied that the established causal associations were unlikely to be influenced by any specific SNP.
Reverse mendelian randomization analysis
In the reverse MR analysis, causal effects of DN on 11 GM species were found using the IVW method (Figure 6), including Class Gammaproteobacteria, Family Rhodospirillaceae, Family Enterobacteriaceae, Genus Christensenellaceae R-7 group, Genus Lachnospiraceae UCG010, Genus Anaerofilum, Genus Ruminococcus 2, Genus Bilophila, Order Rhodospirillales, Order Enterobacteriales, and Phylum Proteobacteria. Notably, Phylum Proteobacteria and DN showed a mutual causal relationship. The results of the other four MR analyses are delineated in Supplementary Table 3, and the scatter plots of the five MR analyses are depicted in Supplementary Figure 1. The funnel plots of the IVW and MR-Egger analyses are depicted in Supplementary Figure 2, and no significant bias in the results is evident. The sensitivity analysis indicated no significant horizontal pleiotropy or heterogeneity among the selected IVs (Table 2). Lastly, the outcomes of the leave-one-out method implied that the established causal associations were unlikely to be influenced by any specific SNP (Supplementary Figure 3).
Discussion
The prevalence of both diabetes and DN has risen in recent years. According to the International Diabetes Federation, China, India, and the United States will have the largest DM populations by 2030, reaching 140 million, 101 million, and 34 million, respectively (Saeedi et al., 2019). Concurrently, with the proposition of the gut-kidney axis concept and the subsequent in-depth investigations, the substantive role of GM dysbiosis in the intricate pathogenesis of DN has been progressively unveiled. Despite this progress, the conclusive establishment of a causal association between GM and DN remains an unresolved question.
In this study, MR analysis was employed to comprehensively identify the underlying causal association between GM and DN.
The results of this study elucidated causal associations between thirteen distinct GM species and DN. Class Verrucomicrobiae, Order Verrucomicrobiales, Family Verrucomicrobiaceae, Genus Akkermansia, Genus Catenibacterium, Genus Coprococcus 1, Genus Eubacterium hallii group, and Genus Marvinbryantia were associated with an elevated risk of DN. Conversely, Class Actinobacteria, Group Eubacterium ventriosum group, Group Ruminococcus gauvreauii group, Order Lactobacillales, and Phylum Proteobacteria were associated with a reduced risk of DN. Notably, by integrating the results of this study with previous research, we found that dysbiosis of the gut microbiota and its metabolic byproducts plays a significant role in the progression of DN through various mechanisms.
Among patients with DN, the co-occurrence of systemic inflammation and compromised innate immunity has been noted (Han et al., 2023). Research has indicated that the gut microbiota, inhabiting the gastrointestinal tract, can modulate antigen reactivity in lymphoid tissues, thus initiating and gradually maturing the intestinal immune system (Cerf-Bensussan and Eberl, 2012). Specifically, genera such as Bacteroides, Bifidobacterium, Lactobacillus, and Bacillus proteus have been found to notably contribute to immune system enhancement (Wu and Wu, 2012). The dysregulation of the gut microbiota can influence the maturation of macrophages, prompting the release of tumour necrosis factor-alpha (TNFα) and interleukin-6 (IL-6) upon Toll-like receptor (TLR) stimulation, thereby triggering renal inflammation (Yang et al., 2019). Additionally, this imbalance may activate innate immune cells, amplifying the activity of the TLR-2 and TLR-4 pathways and fostering the production of inflammatory cytokines. Such dysbiosis in patients with DN could potentially lead to immune dysfunction and renal injury (Lin et al., 2012; Mudaliar et al., 2013; Donaldson et al., 2016; Chi et al., 2021). Concurrently, compromised immune function may diminish the body's defence capabilities and heighten susceptibility to infections (Robinson and Freedman, 2018). Urinary tract infections, a frequent complication in individuals with diabetes, may be linked to systemic inflammation associated with hyperglycaemia and dysbiosis of the gut microbiota, thereby exacerbating renal damage in DN (Micle, 2020; Wang C. et al., 2023).
The GM exerts an impact on the body's metabolic processes and obesity via insulin resistance (IR) (Hoseini Tavassol et al., 2023). Previous research underscored the positive correlation between the severity of IR and the incidence of DN (Penno et al., 2021). Furthermore, it was established that IR contributed to the pathogenesis of DN independently of hyperglycaemia, as it could elicit an increased salt sensitivity (Karalliedde and Gnudi, 2016). Simultaneously, IR led to a reduction in glucose transport by podocytes, which could precipitate the disruption of the glomerular filtration barrier and the onset of proteinuria (Alqallaf et al., 2022). SCFAs, which are generated by the GM, represent a subset of saturated fatty acids characterized by no more than six carbon atoms, predominantly encompassing acetic, propionic, and butyric acids (Zhang et al., 2021). These chemicals have multifaceted functions, primarily in the regulation of energy metabolism, maintenance of the intestinal epithelial barrier, facilitation of immune responses, and modulation of inflammatory processes (Du et al., 2022). Prior investigations demonstrated diminished serum concentrations of total SCFAs in individuals diagnosed with DN (Zhong et al., 2021; Cai et al., 2022). SCFAs exert their influence as signalling moieties by engaging the dedicated G-protein-coupled receptor 43 (GPR43), whose activation regulates inflammatory cascades. This effect manifests as a reduction in the levels of proinflammatory cytokines within colonic tissues, thus contributing to the preservation of intestinal homeostasis (Sun et al., 2017; Priyadarshini et al., 2018). Moreover, SCFAs can augment neutrophil chemotaxis and facilitate the differentiation and proliferation of natural killer cells and regulatory T cells, thus activating the immune system (Kim et al., 2014). Concurrently, an increase in the abundance of urease-producing bacteria raises intestinal pH, consequently elevating the permeability of the intestinal mucosa (Lehto and Groop, 2018; Thaiss et al., 2018; Lau et al., 2021). This facilitates the entry of uremic toxin precursors, including cresol, indole, and trimethylamine, into the systemic circulation, thereby promoting the production of uremic toxins. These entities, in turn, elicit oxidative stress and foster renal tubulointerstitial fibrosis, resulting in a progressive decline in renal function (Kikuchi et al., 2019; Tan et al., 2021). However, SCFAs can preserve intestinal homeostasis and consequently mitigate the absorption of harmful substances by maintaining the structural integrity of the intestinal mucosa, thus bestowing a protective effect on renal function. Furthermore, it has been revealed that SCFAs, especially butyric acid, can enhance the tight junction protein complex, thereby maintaining the functional integrity of the intestinal barrier (Xia et al., 2020). This mechanistic facet holds potential for averting renal impairment. Furthermore, a hyperglycaemic state typically causes the body to overproduce reactive oxygen species (ROS), contributing to the development of DN (Miranda-Díaz et al., 2016). It was proposed that SCFAs had the capacity to inhibit the stimulation of glomerular mesangial cells caused by hyperglycaemia and lipopolysaccharides (LPS). Concurrently, SCFAs were found to diminish the generation of ROS and malondialdehyde (MDA), along with inflammatory factors, while elevating the level of superoxide dismutase
(SOD). These activities mitigated inflammatory responses, ultimately safeguarding renal function (Zhang et al., 2022). Previous research outlined the role of SCFAs in mitochondrial biosynthesis processes (Andrade-Oliveira et al., 2015). This finding suggested that SCFAs could potentially have a positive impact on alleviating renal epithelial cell hypoxia. Moreover, SCFA treatment improved renal insufficiency in a murine model of acute kidney injury subsequent to renal ischemia-reperfusion (Antza et al., 2018).
Prior research identified an increased prevalence of the Order Lactobacillales and the Class Actinobacteria in the intestines of patients with DN (Du et al., 2021; Wang Y. et al., 2022). This observation was attributed to the use of hypoglycaemic medications (Forslund et al., 2015; Gu et al., 2017). Bifidobacterium and Lactobacillus are widely acknowledged as significant intestinal probiotics (Bindels et al., 2015; Markowiak-Kopeć and Śliżewska, 2020), generating lactic acid and acetic acid within the intestinal environment, thereby safeguarding the host from invasion by intestinal pathogens (Zhang et al., 2015; Roy and Dhaneshwar, 2023). It was found that an elevation in the abundance of the Class Actinobacteria could lead to heightened production of SCFAs, primarily due to the presence of Bifidobacterium (Binda et al., 2018). Lactobacillus, a notable constituent of the Order Lactobacillales, possesses the capability to convert lactic acid into diverse SCFA forms. In patients with CKD, an inverse relationship was observed between Lactobacillus levels and markers of kidney impairment, including blood creatinine and urea nitrogen (Ren et al., 2020). These findings imply that the Order Lactobacillales and the Class Actinobacteria serve as protective factors against DN. This concurs with our own research findings and deepens the understanding of the pivotal role these microbiotas play in DN. The Phylum Proteobacteria represents one of the most expansive and phenotypically diverse bacterial phyla (Sharma et al., 2022), and it is considered to have the potential for pathogenicity under specific circumstances (Du et al., 2021). The findings from the present investigation elucidate a correlation between this particular bacterial cohort and a lower risk of DN. Previous research demonstrated an association between the Phylum Proteobacteria and increased SCFA production (Machate et al., 2020). Moreover, an alteration in the abundance of the Phylum Proteobacteria was observed within the intestinal environment of patients with DN, exhibiting a disparity when compared to the composition found in healthy populations (Wang Y. et al., 2022). In alignment with the outcomes of the current study, these findings suggest that this phylum is associated with DN and holds the potential to function as a protective factor against its progression. The Genus Eubacterium ventriosum group is a component of the Family Lachnospiraceae. The Genus Ruminococcus gauvreauii group belongs to the Family Ruminococcaceae and is a component of the Phylum Firmicutes (Zhang et al., 2023). The association between these two genera and DN has not been examined by conventional epidemiological studies. Nevertheless, the outcomes derived from the present study suggest that both genera are associated with a reduced risk of DN. This assertion is likely attributable to their membership within the community of microbiota known for producing SCFAs (Kemp et al., 2021; Amamoto et al., 2022; Wang J. et al., 2023). This discovery offers a novel perspective on the involvement of these two genera in DN.
In healthy individuals, the Genus Akkermansia (Class Verrucomicrobiae, Order Verrucomicrobiales, Family Verrucomicrobiaceae) typically constitutes approximately 3–5% of the gastrointestinal microbial community (Hasani et al., 2021). Currently, the Genus Akkermansia is the only representative of the Phylum Verrucomicrobia found in the human intestine. As a result, many 16S rRNA gene sequence analyses considered the Phylum Verrucomicrobia to be representative of the Genus Akkermansia (Cani et al., 2022). It has been demonstrated that there is a correlation between a reduction in the abundance of the Genus Akkermansia, a promising probiotic, and the development of various illnesses, including type 2 diabetes and inflammatory bowel disease (Wang K. et al., 2022). In addition, the level of the Genus Akkermansia was negatively correlated with the level of IR (Shin et al., 2014; Watanabe et al., 2023). Moreover, it should be noted that the Genus Akkermansia also has the capacity to produce SCFAs (Luo et al., 2022), but its role remains somewhat controversial. In animal models of DN and patients with CKD, the level of the Genus Akkermansia was positively correlated with markers of kidney injury, including blood creatinine and urea nitrogen (Forslund et al., 2015; Lan et al., 2023). Additionally, it was also found that the relative abundance of the Genus Akkermansia was positively correlated with the severity of disease in patients with Parkinson's disease and multiple sclerosis (Jangi et al., 2016; Cekanaviciute et al., 2017; Zhang F. et al., 2020). Based on the results of the current study, there was a positive association between the Genus Akkermansia and the risk of developing DN. This may be due to the fact that the relative abundance of Gram-negative bacteria, including the Genus Akkermansia, is increased in the gastrointestinal tract of patients with DN. Gram-negative bacteria contain LPS as a component of the outer membrane of the cell wall. As the intestinal barrier was compromised, LPS entered the circulation and stimulated the body to produce excessive pro-inflammatory factors (e.g., IL-1β, IL-6, and TNF-α), which further exacerbated the systemic inflammatory response in patients with DN (Salguero et al., 2019). Finally, it was demonstrated that LPS could bind to TLR, thereby activating the TLR-4 signaling pathway. This activation subsequently triggered downstream signaling pathways, resulting in tissue damage through mechanisms including oxidative stress and DNA damage (Hung et al., 2017).
To date, no study has reported alterations in the abundance of the Genus Catenibacterium, Genus Coprococcus 1, Genus Eubacterium hallii group, and Genus Marvinbryantia in the intestinal tract of DN patients. The Genus Catenibacterium belongs to the Family Erysipelotrichaceae (Ricci et al., 2023). According to the results of this study, the Genus Catenibacterium is a risk factor for DN. This genus was found to be associated with several metabolic diseases (Burakova et al., 2022). Moreover, an increased prevalence of this genus was identified in faecal samples from individuals with ESRD, suggesting a potential correlation between the levels of this genus and the progression of nephropathy (Vaziri et al., 2013). There are very few studies on the Genus Coprococcus 1. However, the results of the current study suggest that this genus is positively associated with the risk of DN, warranting comprehensive elucidation of its underlying mechanisms. The Genus Eubacterium hallii group is a member of the Family Lachnospiraceae within the Phylum Firmicutes, and as one of the butyrate-secreting genera, it has the potential to ameliorate DM and other diseases associated with IR (Zhang et al., 2015). Similarly, an elevated abundance of the Genus Marvinbryantia, which can also produce butyrate, was found to align with diminished IR levels (Chen et al., 2021). This indicates the potential benefits of both the Genus Eubacterium hallii group and the Genus Marvinbryantia in ameliorating the condition of DN. Nevertheless, the present study reveals a positive correlation between these microbiotas and the risk of DN, potentially attributable to complex gene-gene and gene-environment interactions.
To the best of our knowledge, this is the first study to employ MR to assess the causal association between GM and DN. The implementation of this analytical methodology serves to reduce potential biases arising from reverse causation and residual confounding. This study provides confirmation of a causal association between GM and DN while delving into the plausible mechanistic pathways of intestinal dysbiosis in DN. These findings underscore the importance of nephrologists paying vigilant attention to DN patients exhibiting intestinal dysbiosis in clinical practice. Furthermore, the identification of distinct GM associated with DN within this study provides novel biomarkers for the prevention, diagnosis and therapeutic intervention of DN, thereby enhancing our comprehension of the gut-renal axis.
However, this study has several limitations. Firstly, the predominant participants within the GWAS cohort were of European ancestry, which might potentially affect the applicability of the study findings across diverse ethnicities. Secondly, the available data pertaining to GM were only categorized down to the genus level, hence limiting the ability to identify causal connections between GM and DN at more specific taxonomic levels such as species or strain. Additionally, there may be a partial overlap of data on SNPs present in GM across distinct taxonomic tiers, which may impact the reproducibility of the results of MR analyses. Lastly, some of the GM identified in this study have not been previously identified as directly associated with DN, and the findings of this study for certain GM fail to align with the results of prior studies. Hence, further population-based prospective studies and experiments are required to investigate the potential biological mechanisms between these GM and DN.
Conclusion
In conclusion, our study provides genetic insights into the potential causal relationships between specific GM and DN. We identified particular microbial communities with protective or detrimental roles in DN, thereby augmenting our understanding of the intricate interplay between the gut and kidneys in the development of DN. These findings offer valuable directions for future research and therapeutic interventions.
FIGURE 1. Study design and flow chart of MR analysis. MR, Mendelian randomization; GWAS, genome-wide association study; LD, linkage disequilibrium.

FIGURE 6. The results of tests for horizontal pleiotropy and heterogeneity in forward MR analysis. No horizontal pleiotropy or heterogeneity was found among instrumental variables. MR, Mendelian randomization; IVW, inverse variance weighted.

TABLE 2. The results of tests for horizontal pleiotropy and heterogeneity in reverse MR analysis. No horizontal pleiotropy or heterogeneity was found among instrumental variables.
On the Le Cam distance between multivariate hypergeometric and multivariate normal experiments
In this short note, we develop a local approximation for the log-ratio of the multivariate hypergeometric probability mass function over the corresponding multinomial probability mass function. In conjunction with the bounds from Carter (2002) and Ouimet (2021) on the total variation between the law of a multinomial vector jittered by a uniform on $(-1/2,1/2)^d$ and the law of the corresponding multivariate normal distribution, the local expansion for the log-ratio is then used to obtain a total variation bound between the law of a multivariate hypergeometric random vector jittered by a uniform on $(-1/2,1/2)^d$ and the law of the corresponding multivariate normal distribution. As a corollary, we find an upper bound on the Le Cam distance between multivariate hypergeometric and multivariate normal experiments.
Introduction
Let $d \in \mathbb{N}$. The $d$-dimensional (unit) simplex and its interior are defined by
$$\mathcal{S}_d := \big\{x \in [0,1]^d : \|x\|_1 \le 1\big\}, \qquad \mathrm{Int}(\mathcal{S}_d) := \big\{x \in (0,1)^d : \|x\|_1 < 1\big\},$$
where $\|x\|_1 := \sum_{i=1}^{d} |x_i|$ denotes the $\ell^1$ norm on $\mathbb{R}^d$. Given a set of probability weights $p \in N^{-1}\mathbb{N}_0^d \cap \mathrm{Int}(\mathcal{S}_d)$, the probability mass function of the multivariate hypergeometric distribution, $\mathrm{Hypergeometric}(N,n,p)$, is defined, by Johnson et al. [6, Chapter 39], as
$$P_{N,n,p}(k) := \frac{\prod_{i=1}^{d+1} \binom{N p_i}{k_i}}{\binom{N}{n}}, \qquad k \in K_d := \big\{k \in \mathbb{N}_0^d : \|k\|_1 \le n\big\}, \tag{1.1}$$
where $k_{d+1} := n - \|k\|_1$ and $p_{d+1} := 1 - \|p\|_1$. This distribution represents the first $d$ components of the vector of categorical sample counts when randomly sorting a random sample of $n$ objects from a finite population of $N$ objects into $d+1$ categories, where $p_i$, $1 \le i \le d+1$, is the probability of any given object to be sorted in the $i$-th category.
Our first main goal in this paper is to develop a local approximation for the log-ratio of the multivariate hypergeometric probability mass function (1.1) over the $\mathrm{Multinomial}(n,p)$ probability mass function, namely
$$Q_{n,p}(k) := \frac{n!}{\prod_{i=1}^{d+1} k_i!} \prod_{i=1}^{d+1} p_i^{k_i}, \qquad k \in K_d. \tag{1.2}$$
This latter distribution represents the exact same as (1.1) above, except that the population from which the $n$ objects are drawn is infinite ($N = \infty$). Another way of distinguishing $P_{N,n,p}$ and $Q_{n,p}$ in a finite population of $N$ objects is to say that we sample the $n$ objects without replacement and with replacement, respectively. In both cases, the categorical probabilities $(p, p_{d+1})$ are the same. For good general references on normal approximations, we refer the reader to Bhattacharya & Ranga Rao [2] and Kolassa [7].
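To make the two objects concrete, the short numerical sketch below, which is not part of the original analysis, evaluates the log-ratio $\log\{P_{N,n,p}(k)/Q_{n,p}(k)\}$ for one point $k$ using SciPy's multivariate hypergeometric and multinomial distributions; the particular values of $n$, $p$ and $k$ are arbitrary illustrations. Consistent with the sampling-without/with-replacement interpretation, the ratio approaches 1 as the population size $N$ grows with $n$ fixed.

```python
# Numerical illustration (not from the paper): compare the multivariate
# hypergeometric pmf P_{N,n,p} with the multinomial pmf Q_{n,p} on one point k,
# and watch the log-ratio shrink as the population size N grows with n fixed.
import numpy as np
from scipy.stats import multivariate_hypergeom, multinomial

n = 30                                  # sample size
p = np.array([0.2, 0.3, 0.5])           # (p_1, ..., p_{d+1}) with d = 2
k = np.array([6, 9, 15])                # category counts (k_1, ..., k_{d+1}), sum = n

for N in [60, 300, 3000, 30000]:
    m = np.rint(N * p).astype(int)      # category sizes N * p_i (chosen so they are integers)
    log_P = multivariate_hypergeom.logpmf(x=k, m=m, n=n)
    log_Q = multinomial.logpmf(x=k, n=n, p=p)
    print(f"N = {N:6d}   log(P/Q) = {log_P - log_Q:+.5f}")
```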
Our second main goal is to prove an upper bound on the total variation between the probability measure on $\mathbb{R}^d$ induced by a random vector distributed according to $P_{N,n,p}$ and jittered by a uniform on $(-1/2,1/2)^d$, and the probability measure on $\mathbb{R}^d$ induced by a multivariate normal random vector with the same mean and covariances as a random vector distributed according to $Q_{n,p}$, namely $np$ and $n(\mathrm{diag}(p) - pp^\top)$. The proof makes use of the total variation bound from Ouimet [14, Lemma 3.1] (which improved Lemma 2 in [4]) on the total variation between the probability measure on $\mathbb{R}^d$ induced by a multinomial vector distributed according to $Q_{n,p}$ and jittered by a uniform on $(-1/2,1/2)^d$, and the probability measure on $\mathbb{R}^d$ induced by a multivariate normal random vector with the same mean and covariances. As pointed out by Mattner & Schulz [11, p. 732], the univariate case here would be much simpler since Morgenstern [12, p. 62-63] showed that the hypergeometric probability mass function can be written as a ratio of three binomial probability mass functions, and local limit theorems are well-known for the binomial distribution, see, e.g., Prokhorov [15] and Govindarajulu [5].
The deficiency between a given statistical experiment and another measures the loss of information from carrying inferences to the second setting using information from the first setting. This loss of information goes in both directions, but the deficiency is not necessarily symmetric. The maximum of the two deficiencies is called the Le Cam distance (or $\Delta$-distance in [8]). The usefulness of this notion comes from the fact that seemingly completely different statistical experiments can result in asymptotically equivalent inferences using Markov kernels to carry information from one setting to another. For instance, it was famously shown by Nussbaum [13] that the density estimation problem and the Gaussian white noise problem are asymptotically equivalent in the sense that the Le Cam distance between the two experiments goes to 0 as the number of observations goes to infinity. The main idea was that the information we get from sampling observations from an unknown density function and counting the observations that fall in the various boxes of a fine partition of the density's support can be encoded using the increments of a properly scaled Brownian motion with drift $t \mapsto \int_0^t f(s) \, \mathrm{d}s$, and vice versa. An alternative (simpler) proof of this asymptotic equivalence was shown by Brown et al. [3], who combined a Haar wavelet cascade scheme with coupling inequalities relating the binomial and univariate normal distributions at each step (a similar argument was developed previously by Carter [4] to derive a multinomial/multivariate coupling inequality). Not only did Brown et al. [3] streamline the proof of the asymptotic equivalence originally shown by Nussbaum [13], but their results hold for a larger class of densities and the asymptotic equivalence was also extended to Poisson processes. Our third main result in the present paper extends the multinomial/multivariate normal comparison from [4] (revisited and improved by Ouimet [14], who removed the inductive part of the argument) to the multivariate hypergeometric/multivariate normal comparison (recall from (1.2) that the multinomial distribution is just the limiting case $N = \infty$ of the multivariate hypergeometric distribution). For an excellent and concise review on Le Cam's theory for the comparison of statistical models, we refer the reader to Mariucci [10].
The three results we have just described are presented in Section 2, and the related proofs are gathered in Section 3. Here are now some motivations for these results. First, we believe that the first two results (the local expansion of the log-ratio and the total variation bound) could help in developing asymptotic Berry-Esseen type bounds for the symmetric multivariate hypergeometric distribution and the symmetric multinomial distribution, similar to the exact optimal bounds proved recently by Mattner & Schulz [11] in the univariate setting. Second, there might be a way to use the Le Cam distance upper bound between multivariate hypergeometric and multivariate normal experiments to extend the results on the asymptotic equivalence between the density estimation problem and the Gaussian white noise problem shown by Nussbaum [13] and Brown et al. [3].
Remark 1. Throughout the paper, the notation u = O(v) means that lim sup N→∞ |u/v| < C, where C ∈ (0, ∞) is a universal constant. Whenever C might depend on a parameter, we add a subscript (for example, u = O d (v)).
Results
Our first main result is an asymptotic expansion for the log-ratio of the multivariate hypergeometric probability mass function (1.1) over the corresponding multinomial probability mass function (1.2).
Theorem 1 (Local limit theorem for the log-ratio). Assume that $n, N \in \mathbb{N}$ with $n \le N$ and $p \in N^{-1}\mathbb{N}_0^d \cap \mathrm{Int}(\mathcal{S}_d)$ hold, and pick any $\gamma \in (0,1)$. Then, uniformly for $k \in K_d$, the expansions (2.1) and (2.2) hold for the log-ratio $\log\{P_{N,n,p}(k)/Q_{n,p}(k)\}$.

The local limit theorem above, together with the total variation bound in [4,14] between jittered multinomials and the corresponding multivariate normals, allows us to derive an upper bound on the total variation between the probability measure on $\mathbb{R}^d$ induced by a multivariate hypergeometric random vector jittered by a uniform random vector on $(-1/2,1/2)^d$ and the probability measure on $\mathbb{R}^d$ induced by a multivariate normal random vector with the same mean and covariances as the multinomial distribution in (1.2).
Theorem 2 (Total variation upper bound). Assume that $n, N \in \mathbb{N}$ with $n \le (3/4)N$ and $p \in N^{-1}\mathbb{N}_0^d \cap \mathrm{Int}(\mathcal{S}_d)$ hold. Let $K \sim \mathrm{Hypergeometric}(N,n,p)$, $L \sim \mathrm{Multinomial}(n,p)$, and $U, V \sim \mathrm{Uniform}(-1/2,1/2)^d$, where $K$, $L$, $U$ and $V$ are assumed to be jointly independent. Define $X := K + U$ and $Y := L + V$, and let $\tilde{P}_{N,n,p}$ and $\tilde{Q}_{n,p}$ be the laws of $X$ and $Y$, respectively. Also, let $\hat{Q}_{n,p}$ be the law of the $\mathrm{Normal}_d(np, n\Sigma_p)$ distribution, where $\Sigma_p := \mathrm{diag}(p) - pp^\top$. Then, as $N \to \infty$, an explicit upper bound holds for $\|\tilde{P}_{N,n,p} - \hat{Q}_{n,p}\|$, where $\|\cdot\|$ denotes the total variation norm. Since the Le Cam distance is a pseudometric and the Markov kernel that jitters a random vector by a uniform on $(-1/2,1/2)^d$ is easily inverted (round off each component of the vector to the nearest integer), we find, as a consequence of the total variation bound in Theorem 2, an upper bound on the Le Cam distance between multivariate hypergeometric and multivariate normal experiments.
Theorem 3 (Le Cam distance upper bound). Define the experiments
$P := \{P_{N,n,p}\}_{p \in \Theta_R}$, where $P_{N,n,p}$ is the measure induced by $\mathrm{Hypergeometric}(N,n,p)$, and
$Q := \{Q_{n,p}\}_{p \in \Theta_R}$, where $Q_{n,p}$ is the measure induced by $\mathrm{Normal}_d(np, n\Sigma_p)$.
Then, for $N \ge n^3/d^2$, we have the following upper bound on the Le Cam distance $\Delta(P, Q)$ between $P$ and $Q$, where $C_R$ is a positive constant that depends only on $R$. Now, consider the following multivariate normal experiments with independent components,
$\tilde{Q} := \{\tilde{Q}_{n,p}\}_{p \in \Theta_R}$, where $\tilde{Q}_{n,p}$ is the measure induced by $\mathrm{Normal}_d(np, n\,\mathrm{diag}(p))$,
where $\mathbf{1} := (1, 1, \ldots, 1)^\top$; then Carter [4, Section 7] showed, using a variance stabilizing transformation, that the analogous bounds hold with proper adjustments to the definition of the deficiencies in (2.3).

Corollary 1. With the same notation as in Theorem 3, we have, for $N \ge n^3/d^2$, the corresponding upper bound, where $C_R$ is a positive constant that depends only on $R$.
Proofs
Proof of Theorem 1. Throughout the proof, the parameter n ∈ N satisfies n ≤ γN and the asymptotic expressions are valid as N → ∞.
By applying the following Taylor expansions, valid for $|x| \le \gamma < 1$, after rearranging some terms and noticing that $\sum_{i=1}^{d+1} k_i = n$, we get (3.3). To estimate the expectation in (3.3), note that if $P_{N,n,p}(x)$ and $Q_{n,p}(x)$ denote the density functions associated with $P_{N,n,p}$ and $Q_{n,p}$ (i.e., $P_{N,n,p}(x)$ is equal to $P_{N,n,p}(k)$ whenever $k \in K_d$ is closest to $x$, and analogously for $Q_{n,p}(x)$), then, for $N$ large enough, we have (3.5). Together with the large deviation bound in (3.4), we deduce from (3.3) that (3.6) holds. Putting (3.5) and (3.6) together yields the conclusion.
Proof of Theorem 3. By Theorem 2 with our assumption $N \ge n^3/d^2$, we get the desired bound on $\delta(P, Q)$ by choosing the Markov kernel $T_1^\star$ that adds $U \sim \mathrm{Uniform}(-1/2,1/2)^d$ to $K \sim \mathrm{Hypergeometric}(N,n,p)$. To get the bound on $\delta(Q, P)$, it suffices to consider a Markov kernel $T_2^\star$ that inverts the effect of $T_1^\star$, i.e., rounding off every component of $Z \sim \mathrm{Normal}_d(np, n\Sigma_p)$ to the nearest integer. Then, as explained by Carter [4, Section 5], we get $\delta(Q, P) \le \|\tilde{P}_{N,n,p} - \ldots$
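As a concrete illustration of the two Markov kernels used in this proof (not code from the paper), the sketch below jitters an integer count vector by a uniform on $(-1/2,1/2)^d$ and then recovers it exactly by componentwise rounding; the specific count vector is an arbitrary example.

```python
# Minimal sketch (not from the paper) of the two Markov kernels in the proof of
# Theorem 3: T*_1 jitters an integer-valued vector by a uniform on (-1/2, 1/2)^d,
# and T*_2 inverts it by rounding each component back to the nearest integer.
import numpy as np

rng = np.random.default_rng(0)

def jitter(K):
    """T*_1: add an independent Uniform(-1/2, 1/2)^d vector to K."""
    K = np.asarray(K, dtype=float)
    return K + rng.uniform(-0.5, 0.5, size=K.shape)

def round_off(X):
    """T*_2: round every component to the nearest integer."""
    return np.rint(np.asarray(X)).astype(int)

K = np.array([3, 7, 12])                   # e.g. a hypergeometric count vector
X = jitter(K)
assert np.array_equal(round_off(X), K)     # T*_2 exactly inverts T*_1
```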
Effect of waves on the tidal energy resource at a planned tidal stream array
Wave-current interaction (WCI) processes can potentially alter tidal currents, and consequently affect the tidal stream resource at wave exposed sites. In this research, a high resolution coupled wave-tide model of a proposed tidal stream array has been developed. We investigated the effect of WCI processes on the tidal resource of the site for typical dominant wave scenarios of the region. We have implemented a simplified method to include the effect of waves on bottom friction. The results show that as a consequence of the combined effects of the wave radiation stresses and enhanced bottom friction, the tidal energy resource can be reduced by up to 20% and 15%, for extreme and mean winter wave scenarios, respectively. Whilst this study assessed the impact for a site relatively exposed to waves, the magnitude of this effect is variable depending on the wave climate of a region, and is expected to be different, particularly, in sites which are more exposed to waves. Such effects can be investigated in detail in future studies using a similar procedure to that presented here. It was also shown that the wind generated currents due to wind shear stress can alter the distribution of this effect.
Introduction
The NW European shelf seas are amongst several regions in the world where relatively strong waves are present at many locations that are potentially suitable for the development of tidal stream arrays [1]. Waves can have a critical effect on planning, operation, maintenance, and generally, assessment of the interactions of a tidal energy converter (TEC) device with the marine environment. For instance, wave-induced loads have an important role in the TEC design process [2]. Additionally, wave-current interaction processes affect the turbulence, and the dynamics of sediment transport [3]; therefore, they should be considered when the impact of a TEC device, or an array of such devices, on the environment is studied.
Wave effects can be investigated on various forms of ocean currents, which are driven by forces generated by wind, air pressure, heating and cooling, Coriolis, and astronomical tidal forcing; however, tidal-stream sites are usually located in shallow regions of shelf seas which are vertically well mixed and dominated by tidal forcing [4]. Further, the development of tidal-stream sites is primarily based on tidally generated currents. Therefore, the interaction of astronomical tidal currents and waves is of primary importance at tidal-stream sites, in this respect.
Ocean models are widely used to characterise the tidal energy resources of potential tidal-stream sites (e.g. Refs. [5–7]), in conjunction with direct measurement of currents. While these models can simulate tidal currents using relatively established procedures, simulating the effect of waves on tidal currents usually requires additional modelling steps, including the development of a wave model, and a coupling procedure. Apart from a few studies [1,6,8,9], the interaction of waves and tidal currents has not generally been considered in the assessment of marine renewable energy resources (e.g. Refs. [10–12,7]). In particular, much more effort has been invested in characterising the effect of tides on the wave energy resource [1,6,8], in comparison with quantifying the effect of waves on the tidal energy resource. Nevertheless, previous research has shown that wave-current interaction processes can change the hydrodynamics of tidal currents via several mechanisms such as wave induced forces and enhanced bottom friction (e.g. Refs. [13–15]), which could considerably alter the tidal energy resource of a site. These effects can be significant for water depths less than 50 m [16], where the majority of first generation tidal devices are likely to operate [17].
The theory of wave effects on currents has been extensively developed in previous research, and can be implemented using a range of coupled Ocean-Wave-Sediment Transport models [18,19].
However, few studies have attempted to simulate the interaction of tides and waves over the northwest European shelf seas [20,21]. For instance, Bolanos-Sanchez et al. [22] and Bolanos et al. [23] coupled the POLCOMS (Proudman Oceanographic Laboratory Coastal Ocean Modelling System) ocean model and the WAM (WAve Model), and implemented several wave-current interaction processes, including wave refraction by currents, bottom friction, enhanced wind drag due to waves, Stokes drift, wave radiation stresses, and Doppler velocity. The POLCOMS-WAM coupled modelling system has been applied in a number of research studies, such as surge prediction in the Irish Sea [24].
Among coupled modelling systems which can simulate the interaction of tidal currents and waves, TELEMAC is an open access code which is used frequently for tidal energy resource assessment, both for academic research and commercial projects [25,11,26,27]. The TELEMAC numerical discretisation is based on the unstructured finite element/volume method, and allows the user to refine the mesh in regions of interest, without encountering complications which arise from the nesting procedure. In addition to hydrodynamic modules, TELEMAC has a spectral wave module, TOMAWAC (TELEMAC-based Operational Model Addressing Wave Action Computation), which can simulate the evolution of waves on a mesh which is common to all modules, and export the wave parameters to the current model for the inclusion of wave-current interaction processes [28]. TELEMAC has been previously used to model complex coastal regions where wave-tide interaction plays a key role in sediment transport [29].
In this research, the effect of waves on the tidal energy resource at a proposed tidal stream array has been investigated. The site is within the coastal waters of Anglesey, North Wales, which is one of the hot spots for tidal stream development, and is likely to be the site of one of the first commercial tidal arrays in UK waters. Section 2 introduces the study region, sources of data, and numerical models used in this study. In particular, the details of the methodology which has been implemented to study the effect of waves on the tidal energy resource are discussed in Section 2.5. All symbols used to describe model formulations or wave-current interaction formulae are listed in Table 1. The results are presented in Section 3, which demonstrate the effect of waves on the tidal energy resource in various forms: wave forces, enhanced bottom friction, and combined effects. Section 4 provides additional discussion on the effect of wind generated currents, and highlights topics for further research (e.g. 3-D effects). Finally, our conclusions are summarised in Section 5.
Study region
The Irish Sea is a highly energetic shelf sea region, with high tidal velocities generated where flow is constricted around headlands [30]. One such example is the northwestern headland of Anglesey (Fig. 1a), a large island located off the NW coast of Wales, where tidal flow is constricted by a bathymetric feature called the Skerries and hence further accelerated.
Due to proximity of the Skerries site to a good grid connection and Holyhead port, suitable bathymetry and peak spring tidal currents in excess of 2.5 m/s [25], Marine Current Turbines (MCT)/ Siemens has proposed to install a tidal stream array off the NW coast of Anglesey. The array site is a sound between the Isle of Anglesey and a small group of islands known as the Skerries, less than 1 km from the coast. The proposed tidal stream array consists of five SeaGen S 2 MW tidal stream turbines, with a total array capacity of around 10 MW (www.marineturbines.com). More information on the device can be found at the MCT and SeaGen websites (www.seageneration.co.uk). Apart from this site, a Crown Estate tidal energy demonstration zone has been planned to the west of Holy Island which is close to this site. Other tidal energy companies are also looking for suitable sites in this region for tidal energy development.
Description of models
Although a number of models have been developed for this region (e.g. Refs. [31,12]), these studies have focused mainly on tides or sediment transport [25]. Wave characteristics at potential tidal stream sites should be considered in several respects, such as wave induced hydrodynamic loading, operation and maintenance, wave-tide interactions, and sediment transport. Accordingly, a coupled tide-wave model of the region, which includes the effect of waves on currents and vice versa, was developed using the TELEMAC modelling system [32].

Table 1. List of symbols.
A: semi-orbital wave excursion.
f_w: wave friction factor, f_w = 0.237 (A/k_s)^(-0.52) [3].
S_ui: source or sink of momentum in the i direction (x or y).
S_xx, S_yy, S_xy: components of the wave radiation stress tensor.
φ: angle between wave direction and current direction.
TELEMAC modelling system
TELEMAC is a finite element or finite volume modelling system which was originally developed to simulate free surface flow. The theoretical/numerical formulation of TELEMAC is described in Hervouet [33], and its source codes and manuals are available online: www.telemacsystem.com. TELEMAC comprises a suite of modules for the simulation of hydrodynamic and morphodynamic processes in oceanic/coastal environments including shallow water (horizontal) flows (TELEMAC-2D), 3-D flows (TELEMAC-3D), sediment transport and bed evolution (SISYPHE), and waves (TOMAWAC). Villaret et al. [28] recently presented several validation test cases of TELEMAC which involved various modules. In the latest version of TELEMAC (i.e. v6.3), the hydrodynamic (TELEMAC-2D), wave, and sediment transport modules are coupled: the modules exchange data at a user defined time step. More details about wave-current interaction simulation using TELEMAC are provided in Section 2.5. TELEMAC-2D, which has been used in this study, is based on the depth-averaged Navier-Stokes (shallow water) equations:

∂h/∂t + ∇·(h u) = S_h   (1)

∂u_i/∂t + u·∇u_i = −g ∂z_s/∂x_i + (1/h) ∇·(h ν_t ∇u_i) + S_ui   (2)

where h is the water depth, S_h represents sources/sinks of mass in the continuity equation, u is the depth-averaged velocity, ν_t is the momentum diffusion coefficient (turbulence and dispersion), z_s is the water elevation, S_ui represents other forces (friction, wave forces, wind stress, etc.), and i represents either the x or y direction. TELEMAC benefits from an unstructured mesh, which allows the use of a very high resolution mesh at locations of interest without resort to nesting. The model was used to characterise the tide and wave conditions in and around the Skerries. TOMAWAC, the wave module of TELEMAC, is a third generation wave model which solves the evolution of the directional spectrum of the wave action. In realistic sea states, the wave energy is distributed over a range of frequencies and directions. The spectral energy density function is the intensity of the wave energy per unit frequency, per unit direction (E = E(σ, θ); see Table 1 for definition of symbols), and can represent the wave sea state at a particular time and location. In spectral models like TOMAWAC or SWAN, 'wave action density', rather than wave spectral density, is used as the state variable, since it is conserved in the presence of ambient currents [34,35]. The wave action is defined as N(x, k, t) = E/(ρgσ), and is conserved as follows:

∂N/∂t + ∇_x·(c_g N) + ∇_k·((dk/dt) N) = Q   (3)
where c g ¼ (c g k x /k, c g k y /k), and Q represents various source and sink terms. TOMAWAC includes deep and shallow water physics such as refraction, white-capping, bottom friction and depth-induced wave breaking, as well as non-linear waveewave quadruplet and triad interactions. TOMAWAC can be applied to a range of scales from continental shelf seas to coastal zones [34].
TELEMAC settings
An unstructured mesh of the region was created with variable resolution, being relatively fine (15–250 m) around the site and Anglesey, and coarser (500–2000 m) elsewhere in the Irish Sea (Fig. 1b). The model domain covers the whole Irish Sea, extending from 8°W to 2.5°W, and from 50°N to 56°N, which is necessary for wave modelling in order to generate sufficient fetch. Gridded Admiralty bathymetry data available at 200 m resolution (digimap.edina.ac.uk) was mapped on to the mesh. TELEMAC-2D, the 2-D hydrodynamic module of TELEMAC, solves the 2-D shallow water equations using the finite element method to simulate tidal currents, which is a good approximation for the fully mixed barotropic flows in this area. Tidal currents in the NW European shelf seas are dominated by M2 and S2: the principal lunar and solar semidiurnal components, respectively [36]. The next three tidal constituents, which are relatively significant in some areas of the NW European shelf seas, are the K1 and O1 lunar diurnal components, and the lunar elliptic semidiurnal constituent, N2 [37]. Therefore, the open boundaries of the tidal model were forced by 5 tidal constituents (M2, S2, N2, K1, O1) interpolated from FES2004 tidal data [37]. For friction, a constant Chezy coefficient of 70 (approximately equivalent to C_D = 0.0025) was used, which led to convincing validation for water level and current speed for the astronomical tides at observation locations. The friction coefficient was then enhanced based on the wave parameters for WCI effects (Section 2.5.2).
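For illustration only, the snippet below reconstructs a boundary tidal elevation signal from the five constituents listed above. The constituent periods are standard astronomical values, but the amplitudes and phases are arbitrary placeholders rather than FES2004 data, and the snippet is not part of the TELEMAC set-up itself.

```python
# Toy reconstruction (not from the paper) of a tidal elevation time series from
# the five boundary constituents used in the model (M2, S2, N2, K1, O1).
# Amplitudes and phases below are placeholders, not FES2004 values.
import numpy as np

PERIODS_H = {"M2": 12.4206, "S2": 12.0000, "N2": 12.6583, "K1": 23.9345, "O1": 25.8193}
amplitude_m = {"M2": 1.80, "S2": 0.60, "N2": 0.35, "K1": 0.08, "O1": 0.07}     # placeholders
phase_deg = {"M2": 290.0, "S2": 330.0, "N2": 270.0, "K1": 180.0, "O1": 150.0}  # placeholders

t_hours = np.arange(0.0, 15 * 24.0, 0.25)      # a full spring-neap cycle, 15-min steps
eta = np.zeros_like(t_hours)
for name, T in PERIODS_H.items():
    omega = 2.0 * np.pi / T                    # angular frequency [rad/h]
    eta += amplitude_m[name] * np.cos(omega * t_hours - np.deg2rad(phase_deg[name]))

print(f"Elevation range over the record: {eta.min():+.2f} m to {eta.max():+.2f} m")
```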
TOMAWAC was applied to the same mesh and bathymetry as the TELEMAC-2D model. Hourly wind forcing data was provided by the UK Met Office Integrated Data Archive System (MIDAS; for Valley station see Fig. 1). TOMAWAC was run in third-generation mode, including Janssen's wind generation (WAM cycle 4), white-capping, and quadruplet wave-wave interactions. The bottom friction and depth induced wave breaking were also included in the numerical simulations.
SWAN wave model
Since the high resolution coupled TELEMAC model was expensive to run for long periods of time, a SWAN (Simulating WAves Nearshore) model of the NW European Shelf seas was used to characterise the temporal variability of the wave climate over a decade of simulation. The SWAN model was developed and validated extensively in a previous research study [38].
SWAN is another open source third-generation numerical wave model which simulates random waves from deep waters to the surf zone and coastal regions in the spectral domain. SWAN has been described in Booij et al. [35] and is based on the Eulerian formulation of the discrete spectral balance of action density. It has been widely used for simulating waves at various scales (e.g. Refs. [38,39]). It accounts for refractive propagation over arbitrary bathymetry and ambient current fields. The physics and formulation of SWAN are similar to those of TOMAWAC described in Section 2.2.1; however, in SWAN, the wave action is formulated as a function of wave frequency and direction rather than wave number (used in TOMAWAC). Several processes including wind generation, whitecapping, quadruplet waveewave interactions, and bottom dissipation are represented explicitly in SWAN.
SWAN settings
The SWAN wave model setting and its validation, which was applied for a decade (2003–2012) of simulation, are described in detail in Neill and Hashemi [38]. It consisted of a parent model which included the entire North Atlantic at a grid resolution of 1/6° × 1/6°, extending from 60°W to 15°E, and from 40°N to 70°N. 2-D wave spectra were output hourly from the parent model and interpolated to the boundary of an inner nested model of the NW European shelf seas. The inner nested model had a grid resolution of 1/24° × 1/24°, extending from 14°W to 11°E, and from 42°N to 62°N. Wind forcing was provided by the European Centre for Medium-Range Weather Forecasts (ECMWF; www.ecmwf.int). ERA (European Research Area) Interim reanalysis full resolution data, which are available 3-hourly at a spatial resolution of 3/4° × 3/4°, were used. SWAN was run in third-generation mode, with Komen linear wave growth, white-capping, and quadruplet wave-wave interactions.
Wave climate of the region
In contrast to the astronomical tides, the wave climate of a region is highly variable. It has been previously shown that the wave climate of the NW European shelf seas is strongly related to the North Atlantic Oscillation, which has high inter-annual variability [38]. Fig. 2 shows the wave and wind roses at two points (Fig. 1a) off the NW of Anglesey based on the 10 year SWAN simulation. As this figure shows, the strongest and most frequent winds and waves are southwesterly. It is also clear that the probability of waves with significant wave height (H_s) greater than 5 m, or wind speeds in excess of 15 m/s, is quite low. Fig. 3b shows the variability of extreme significant wave heights during winter months to the west of Anglesey over the decade of simulation (Point SWN1, Fig. 1). According to this figure, the probability of waves with H_s exceeding 5.5 m is very low. Further, January 2005 and December 2007 are the most extreme months in our record, with maximum significant wave heights of 6.7 m and 6.8 m, respectively. The expected (i.e. average) value of an extreme significant wave height during the winter period is 3.9 m. In terms of mean wave conditions (Fig. 3a), January is the most energetic month in this region, with expected significant wave heights of approximately 1.6 m (on average). Based on these wave statistics, the TOMAWAC model was forced with different southwesterly wind scenarios; wind speeds of 10 m/s and 15 m/s seemed appropriate to capture mean and extreme wave scenarios, with significant wave heights of 1.8 m and 4.0 m, respectively, at SWN1 (Fig. 1). In the next sections, TELEMAC-2D and TOMAWAC are first validated, and then used to study the effect of waves on tidal energy resources of the site for these scenarios.
Model validation
The tidal model was validated at several tidal gauge stations within the Irish Sea. The validation results relatively near to the site are presented here. ADCP (acoustic Doppler current profiler) data collected during August 2013 at Holyhead Deep (ADCP1, Fig. 1a), and February 2014, off the northern coast of Anglesey (ADCP2, Fig. 1a), were used for current validation. Fig. 4a shows the comparison of model outputs and observed data at Holyhead tidal gauge. Table 2 also shows the performance of the model for water elevation and current velocity. The mean absolute error, which is reported in Table 2, is defined as

MAE = (1/M) Σ |u_c^o − u_c^m|   (4)

where M is the number of data points, and u_c^o and u_c^m are the observed and predicted values of the depth-averaged velocity, respectively, with the sum taken over the M data points. The current ellipses for M2 and S2 based on the model results and observations have also been compared in Fig. 4b. The model errors for the M2 and S2 amplitudes were 5 cm and 8 cm, and for the phases were 2° and 1°, respectively. For ADCP1, the errors in the current ellipse axis directions were 7° and 8° for M2 and S2, while less than 2% for the magnitudes of the ellipse major axes. The small error in tidal ellipse directions may be associated with the 3-D nature of the flow [40] at this location, which is deeper than the surrounding areas. Similar discrepancies for current velocities have been reported in a previous study of the region which used the ADCIRC depth-averaged model [12]. Similar results can be seen for ADCP2. The mean absolute error of current velocity for the two measurement locations is less than 0.20 m/s. Overall, given the magnitudes of the errors, model performance for both tidal elevations and currents is convincing.
The theoretical average tidal stream energy per unit area (i.e., P = (1/(2 T_sn)) ∫ ρ u_c³ dt) over a spring–neap cycle has been plotted in Fig. 5. As this figure shows, the Skerries and the west coast of Holyhead are hot spots for tidal energy in northwest Wales. The peak tidal current velocity exceeds 3 m/s in parts of this region, and there is a relatively large area where peak tidal velocities exceed 2 m/s. The TOMAWAC model of the region was validated for January 2005, which represents one of the most extreme months during our analysed period (Fig. 3). Within this month, periods of high, low and average wave condition existed, which provides a highly variable basis for testing the model. Fig. 6 shows the validation of significant wave height and wave period at the M2 wave buoy (Fig. 1), which is the closest available wave buoy to the site. The mean absolute errors for wave height and period are 0.38 m and 0.65 s, respectively, which is within an acceptable range of accuracy, compared with other models of this region (e.g. Ref. [38]). In particular, the model was able to capture the peak wave height on the eighth of January, which is important in the extreme wave scenario.
Fig. 3. Distribution of the average and maximum significant wave height at point SWN1 (see Fig. 1a) over a decade (2003–2012) of simulation. January 2005 and December 2007 are the most extreme months, with 6.7 m and 6.8 m significant wave heights, respectively. According to the error bars, which are based on 95% CI, the probability of these events is less than 5%.
Formulation of wave effects on currents
Two important wave effects on currents are: wave-induced momentum (or wave radiation stresses), and the enhancement of the bottom friction felt by currents due to the interaction with the wave boundary layer. Both effects can be included in coupled wave-tide models by exporting the appropriate wave parameters to the tidal model, and modifying the corresponding terms in the momentum equation. The effect of these processes on tidal energy is evaluated here by running the tidal model with and without WCI, and then computing the average tidal power. The relative difference, or the effect of a process, was computed using

I = 100 × [mean(½ ρ u_c*³) − mean(½ ρ u_c³)] / mean(½ ρ u_c³)   (5)

where u_c* is the tidal current affected by a wave-current interaction process, I is the percentage effect, and ρ is the water density. In a coupled TELEMAC2D-TOMAWAC model, the wave radiation forces are automatically computed and fed back to the hydrodynamic model [41]. Further, although the effect of wave-induced bed shear stresses is incorporated in the sediment transport module [42], the enhanced bottom friction due to WCI is not included in the hydrodynamic model (i.e. TELEMAC2D) formulations [41]. However, it is possible in the TELEMAC modelling system to modify the subroutines associated with bottom friction and include this process according to the wave parameters (Section 2.5.2).

Table 2. Performance of the tidal model (in terms of absolute error) for tidal elevation and velocity at a tidal gauge and 2 ADCP measurement points (see Fig. 1 for locations). The mean absolute error is presented for u_c. The variables in this table are defined as follows: a_h and φ_h, tidal elevation amplitude and phase, respectively; C_max and C_a, current ellipse major axis magnitude and direction, respectively; u_c is the depth-averaged velocity.
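The percentage effect defined in Eq. (5) above is straightforward to compute from two model runs. The sketch below is not the authors' code: it applies the same relative difference to a synthetic current record, with a hypothetical 3% reduction in current speed standing in for the coupled-run output, and illustrates how the cubic dependence of power on velocity amplifies the change.

```python
# Illustrative sketch (not from the paper) of the relative effect of a
# wave-current interaction process on the mean tidal power density,
# following the form of Eq. (5): I = 100 * (P_wci - P_tide) / P_tide,
# with P = mean(0.5 * rho * u^3).
import numpy as np

rho = 1025.0                                   # sea-water density [kg/m^3]
t = np.linspace(0.0, 12.42 * 3600.0, 745)      # one M2 cycle, ~minute steps [s]
u_tide = 2.0 * np.abs(np.sin(2.0 * np.pi * t / (12.42 * 3600.0)))   # tide-only speed
u_wci = 0.97 * u_tide                          # hypothetical 3% speed reduction with WCI

def mean_power_density(u):
    """Mean theoretical power density 0.5 * rho * u^3 over the record [W/m^2]."""
    return np.mean(0.5 * rho * u**3)

I = 100.0 * (mean_power_density(u_wci) - mean_power_density(u_tide)) \
        / mean_power_density(u_tide)
print(f"Relative change in mean power density: {I:+.1f} %")   # roughly -9 %
```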
Wave radiation stresses
Wave radiation stresses are the excess flow of momentum due to the presence of waves [43]. The wave-induced forces are computed based on the gradient of the wave radiation stresses as follows [44]:

F_x = −(∂S_xx/∂x + ∂S_xy/∂y),  F_y = −(∂S_xy/∂x + ∂S_yy/∂y)   (6)

where F represents the wave force per unit surface area, and the wave radiation stresses (i.e., S_ij) have been defined in Table 1. By analogy, pressure forces are another form of body force, which are stresses generated by the gradient of the water pressure (i.e., ∂p/∂x = ρg ∂h/∂x). In general, wave forces are dominant in the nearshore zone, where the gradients of the radiation stresses are high, and can explain wave set-up and longshore currents. In addition, they can potentially change the current velocity in a tidal stream site, especially if there is a dominant wave climate, and this can consequently affect the tidal energy resource.
Enhanced bottom friction
The interaction of waves with the current boundary layer leads to near-bed turbulence, and consequently increases the bed shear stress. This effect can reduce tidal currents, and since tidal power is proportional to velocity cubed, it can potentially decrease the tidal energy resource at a site. For instance, Wolf and Prandle [13] observed that the amplitudes of tidal currents reduce due to WCI. The WCI effect on the bottom boundary layer has been extensively studied in previous research (e.g. see Refs. [15,14,45,46]). Here, we investigate the sensitivity of bottom friction to this effect, and its implications for tidal energy resource assessment.
In general, ocean hydrodynamic models, like TELEMAC, have several options available to quantify bottom friction [32,47]. Therefore, to empirically account for enhanced friction due to WCI, the bed roughness length corresponding to the Nikuradse law of friction, the bottom drag coefficient corresponding to the quadratic friction law, or the Chezy coefficient corresponding to the Chezy law, can be modified. For instance, Van Rijn [45] introduced the following relation to enhance the bed roughness in the presence of waves:

k_a = k_s exp(γ U_w / u_c),  U_w / u_c < 10;  γ = 0.80 + φ − 0.3φ²   (7)

where k_a and k_s represent the apparent and physical roughness, respectively, and φ is the angle between the wave direction and the current direction in radians. In practice, the apparent bed roughness due to WCI can be an order of magnitude greater than the physical bed roughness. Alternatively, we applied the concept of a mean (over the wave period) drag coefficient due to combined waves and current to increase the bottom friction in the present research. The mean bed shear stress due to the combined action of waves and currents is given by Refs. [46,3]:

τ_m = τ_c [1 + 1.2 (τ_w / (τ_c + τ_w))^3.2]   (8)

where τ_c and τ_w are the bed shear stresses due to the current alone and the wave alone, respectively. The bed shear stresses are related to the depth-averaged current velocity through the drag coefficient,

τ_c = ρ C_D u_c²,  τ_m = ρ C*_D u_c²   (9)

where C_D and C*_D are the drag coefficients in the absence and presence of waves, respectively; therefore, Eq. (8) can be written as

ξ = C*_D / C_D = τ_m / τ_c = 1 + 1.2 (λ / (1 + λ))^3.2,  λ = τ_w / τ_c   (10)

Eq. (10) gives the ratio of the combined wave-current drag coefficient to the pure current drag coefficient (i.e. ξ) as a function of the ratio of the wave-induced shear stress to the current-induced bed shear stress (i.e. λ). The wave-induced bed shear stress is a function of the bottom wave orbital velocities (U_w), and can be computed using the wave parameters output from a wave model as follows [3]:

τ_w = ½ ρ f_w U_w²,  f_w = 0.237 (A / k_s)^(−0.52)   (11)

where f_w is the wave friction factor, k_s is the Nikuradse bed roughness, and A is the semi-orbital wave excursion (see Table 1). Given the dominant wave climate of a region, Eq. (10) (or alternatively Eq. (7)) can be implemented as a simple procedure to assess the effect of waves on the tidal energy resource in terms of enhanced bottom friction. Although more complex and computationally expensive methods are available in 3-D coupled wave-tide models like COAWST (Coupled Ocean Atmosphere Wave Sediment Transport [9,48]), we used this method, which is more convenient and significantly less expensive. It is worth mentioning that other friction factors, like the Chezy coefficient, can be modified using ξ.
Since C = √(g/C_D), the modified Chezy coefficient will be C* = C/√ξ. Figs. 7 and 8 show the enhancement of the bottom drag coefficient due to WCI as a function of the wave-induced orbital velocity for several wave and current scenarios. The sensitivity analysis has been carried out for the usual operational condition of a tidal stream site with currents of greater than 1.0 m/s (the lower cut-in speed of TECs). In terms of the bed friction, k_s values of 0.005, 0.0125, and 0.025 correspond to seabed sediment grain sizes of 2 mm, 5 mm, and 10 mm, respectively (assuming k_s = 2.5 d_50), values that are typically observed at high energy sites [25]. The wave orbital velocity, estimated near the bed, which is the basis for the above computations, can be directly output from a wave model like TOMAWAC. Alternatively, it can be parameterised using the surface wave parameters [49,50] or approximated by linear wave theory.
The wave number k is computed from the linear dispersion equation (σ² = gk tanh kh). In the absence of coupled wave-tide models, linear wave theory (or similar procedures), along with Fig. 8, gives a quick estimate of the enhanced bottom friction, which can then be used to approximately compute the effect of WCI on the tidal energy.
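Putting the pieces of Section 2.5.2 together, the following sketch (not the authors' code) estimates the friction enhancement factor ξ from surface wave parameters: the bottom orbital velocity is taken from linear wave theory (an assumed parameterisation using H_s and the wave period), the wave friction factor is the Table 1 expression f_w = 0.237(A/k_s)^(−0.52), and the combined mean stress uses the Soulsby-type relation reconstructed here as Eq. (8). The input values are arbitrary examples.

```python
# Rough sketch of the bottom-friction enhancement factor xi = C*_D / C_D from
# wave parameters, following Section 2.5.2.  The bottom orbital velocity uses
# linear wave theory with H_s and T (an assumption about the parameterisation);
# the combined mean shear stress follows Eq. (8); the wave friction factor is
# the Table 1 expression f_w = 0.237 (A / k_s)^(-0.52).
import numpy as np

def wavenumber(T, h, g=9.81):
    """Solve the linear dispersion relation sigma^2 = g k tanh(k h) by fixed-point iteration."""
    sigma = 2.0 * np.pi / T
    k = sigma**2 / g                      # deep-water first guess
    for _ in range(50):
        k = sigma**2 / (g * np.tanh(k * h))
    return k

def friction_enhancement(Hs, T, h, u_c, C_D=0.0025, k_s=0.0125, rho=1025.0):
    sigma = 2.0 * np.pi / T
    k = wavenumber(T, h)
    U_w = sigma * (Hs / 2.0) / np.sinh(k * h)     # bottom orbital velocity amplitude
    A = U_w / sigma                               # semi-orbital excursion
    f_w = 0.237 * (A / k_s) ** (-0.52)            # Table 1 wave friction factor
    tau_w = 0.5 * rho * f_w * U_w**2              # wave-alone bed shear stress, Eq. (11)
    tau_c = rho * C_D * u_c**2                    # current-alone bed shear stress, Eq. (9)
    tau_m = tau_c * (1.0 + 1.2 * (tau_w / (tau_c + tau_w)) ** 3.2)   # Eq. (8)
    return tau_m / tau_c                          # xi = C*_D / C_D, Eq. (10)

print(friction_enhancement(Hs=4.0, T=9.0, h=30.0, u_c=2.0))   # around 1.03 for this example
```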
Results
Based on the wave statistics of the site (Section 2.3), the TOMAWAC model was forced with different southwesterly wind scenarios in stationary mode; wind speeds of 10 m/s and 15 m/s were selected to capture mean and extreme wave scenarios, respectively. To simulate the effect of waves on tidal currents, TELEMAC was run in fully coupled mode, where two-way feedbacks between the wave and the tide models were implemented.
Effect of wave forces on tidal energy
The spatial distribution of significant wave height for the extreme wave scenario is plotted in Fig. 9, which indicates a wave height of about 4 m at SWN1. Further, Holy Island has a significant effect on the wave distribution over the NW part of Anglesey, including the Skerries site, for this scenario. The validated TOMAWAC wave model was then used to study the effect of WCI as an element of the coupled wave-tide model of the region. Fig. 10 shows the computed wave radiation stresses, and the corresponding wave forces, for two typical wave scenarios. As this figure shows, apart from nearshore zones, the wave forces are also significant in the Skerries tidal stream site, particularly for the extreme wave scenario. Referring to Eq. (6), the gradient of the wave radiation stresses in this area generates the wave forces. Since wave radiation stresses are proportional to wave energy (see Table 1), the spatial change (i.e. gradient) in the wave height distribution leads to the generation of wave-induced forces. Referring to Figs. 9 and 10, as a complex result of changes in the bathymetry and coastline, and Holy Island acting as an obstacle in the wave field, the wave height distribution, and consequently the wave radiation stresses, have a significant gradient around the Skerries. Fig. 11 shows the mean effect of these forces on tidal energy (in percent) over a tidal cycle. Considering the percentages of the impacts, the wave forces have slightly modified the tidal energy for the average wave scenario (3%), while they have a more significant impact for the extreme scenario (7%). Since it is the difference of the coupled wave-tide model and the decoupled tide model that has been plotted, the effect is an overall reduction of the tidal energy, on average. It is worth mentioning that for the above scenarios, the direction of the wave forces does not change during a tidal cycle, as opposed to tidal currents. Therefore, wave forces, on average, had more effect on opposing currents in contrast to following currents. Nevertheless, the presence of wave forces leads to a new hydrodynamic current field which, in general, is spatially and temporally different from that produced in the absence of waves. Considering the tidal asymmetry of the site [12], further research is needed to study the implication of this asymmetry for tidal energy and sediment transport [51].
Fig. 10. Wave radiation stresses and wave forces for two wind scenarios around the Skerries tidal stream site. Wave forces, which are usually expressed in N/m², have been normalised by water density and water depth. The wave radiation stresses have been normalised by water density (consistent with the TOMAWAC model outputs).
Effect of enhanced bottom friction on tidal energy
To implement the method described in Section 2.5.2, the orbital velocities and other wave parameters were computed for the two wave scenarios using TOMAWAC, and used to modify the bottom friction coefficient. The modified bottom friction coefficients were then fed back into the tidal model. This step can either be implemented with a separate code, as in this research, or included in the subroutines of TELEMAC. Fig. 12 shows the near-bed wave orbital velocities for the two wave scenarios. As this figure shows, the wave orbital velocities are about 0.30 m/s and 0.08 m/s for the two scenarios, which is equivalent to about a 5% and 1% increase in the bed friction enhancement factor (ξ), respectively (Fig. 8), or lower depending on the current speed and bed roughness. After computing the tidal power based on the modified friction, the effect as a percentage has been plotted in Fig. 13, which is, like the effect of wave forces, significant (6%) for the extreme wave case and very small (2%) for the average wave scenario. Since the effect is always negative (reduction in power), the absolute value has been plotted in this figure.
Combined effects
In the case of a fully coupled simulation, where wave radiation stresses and enhanced bottom friction are both incorporated in the tidal modelling, the impact of WCI is magnified due to the nonlinear nature of these processes. In other words, due to nonlinearity in the friction and wave-induced force terms in the momentum equations, these effects are not simply superimposed. Fig. 14 shows the average effect of both processes on tidal power. As a consequence of WCI, tidal power can decrease by up to 20% and 15%, respectively, for the extreme and average scenarios, which represents a significant effect on the tidal stream resource.
Fig. 11. Effect of wave forces on the tidal stream power for two scenarios.
Discussion
Another process of interest is wind-driven currents. The effect of wind generated currents can be added to wave effects by including wind shear stresses in the hydrodynamic model (TELEMAC-2D). Fig. 15 shows the results of superimposing the effect of wind generated currents on wave effects for the extreme scenario. As this figure shows, overall, the magnitude of the impact on the tidal energy resource does not change considerably, compared with Fig. 14, while the distribution changes (reduction) in the vicinity of the tidal-stream site. The depth of penetration of wind generated currents in relation to hub heights of tidal energy devices is another topic of interest, which can be studied using 3-D models. This process can be examined in more detail in future studies. The results are generally in agreement with previous 3-D model studies at tidal energy sites [5].
The Skerries project is likely to be one of the first tidal stream arrays installed in UK waters. The wave climate of this region is moderate, and not as extreme as at other potential tidal stream sites such as NW Scotland or the west coast of Ireland [38], both coastlines that are directly exposed to North Atlantic waves. Due to the highly non-linear nature of WCI effects, separate studies should be undertaken for other sites, but this research has attempted to provide a simple methodology for a popular hydrodynamic model (TELEMAC) which is used in research, and by developers, for tidal energy studies. It is expected that the effect of WCI processes will be much larger at more exposed tidal stream sites of the NW European shelf seas, but site-specific modelling and analysis are required to confirm this and quantify these effects.
Moreover, to protect turbines from extreme wave loads, tidal-stream devices do not operate in extreme wave conditions. Therefore, the effect of waves on the practical tidal energy resource of a region may be unimportant for the extreme scenarios; nevertheless, the effect is still considerable for the average wave scenario, when tidal energy devices still operate. Due to various limitations, such as the interactions of tidal devices at array scale, the available extractable tidal energy at a site is usually less than the theoretical tidal energy considered here [52]. The impact of wave-tide interaction on the practical extractable energy resource depends on specific devices and array configurations, and can be investigated as a further step. Further, the sensitivity of the tidal resources of a region to bottom friction decreases as a result of substantial drag from a large tidal array [7]. This may reduce the effect of enhanced bottom friction due to waves. The interaction of waves and tidal currents has implications for the design, efficiency, and loading of tidal energy devices, which is the subject of other research (e.g. Refs. [53-55]).
The analysis which was accomplished in this research was based on depth-averaged quantities. The effect of the various WCI processes varies throughout the water column, and given the hub-height of a particular TEC device, it will be useful to assess the vertical variability of these effects using 3-D models [9]. For instance, there is a debate over using the depth-averaged radiation-stress gradient as a depth-uniform body force in ocean models [56]. The depth-dependent form of the horizontal radiation stress gradient terms has also been proposed [57] and applied in 3-D models [48]. Other aspects such as tidal asymmetry and turbulence can also be addressed in future research [5].
Fig. 13. Effect of enhanced bottom friction due to WCI on tidal stream energy around the Anglesey Skerries site, for two different wave scenarios.
Finally, although Eqs. (7) and (8) are based on extensive observations made in previous studies, and it is current practice in ocean models to use similar relations to include wave-current interaction processes, the simultaneous measurement of tidal currents and waves at proposed tidal stream arrays can provide more insight into WCI related issues. Traditionally, deployment of wave buoys in regions of strong tidal currents is more challenging, and so wave data tend to be sparse in such regions. Referring to Eq. (5), it is easy to show that dP/P ≈ 3 × du_c/u_c, where d denotes the variation. Therefore, to observe a 6% change in power, one should be able to detect a 2% change in the measured current, which is likely to be of about the order of magnitude of the measurement errors.
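Since Eq. (5) is not reproduced in this excerpt, the sensitivity quoted above can be recovered assuming the standard cubic dependence of power density on current speed:

P = (1/2) ρ A u_c^3  ⇒  δP ≈ (3/2) ρ A u_c^2 δu_c  ⇒  δP/P ≈ 3 δu_c/u_c,

consistent with the 6%-in-power to 2%-in-current figure stated above.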
Conclusions
The effect of WCI processes on the tidal energy resource at the proposed Skerries tidal stream array has been investigated for mean and extreme wave scenarios. In terms of wave radiation stresses, it was shown that both the wave forces and their effect on the tidal energy resource are significant for the extreme wave scenario, and the effect can reach 7%. A simplified method was also developed to include the effect of WCI on bottom friction; it can be used to assess the sensitivity of the tidal currents and tidal power to these processes, based on the wave climate of a region.
As a result of the combined effects of wave radiation stresses and enhanced bottom friction, the tidal energy resource can be reduced by up to 15% and 20% for mean and extreme winter wave scenarios, respectively, at the Skerries tidal stream site. The impact of these two processes is magnified when they are considered together, rather than separately, due to the nonlinear nature of the forces. For more exposed sites, e.g. NW of Scotland, the impact is expected to be greater. Also, wind generated currents change the distribution of this effect in the vicinity of the tidal-stream site.
The effect of WCI processes on tidal energy increases as the ratio of wave stress to current stress increases. Therefore, this effect is more significant for lower-energy tidal sites exposed to strong waves than for higher-energy tidal sites exposed to moderate waves.
Simultaneous measurement of waves and tidal currents at potential tidal stream sites is necessary to further investigate the impact of waves on various aspects of tidal energy development. However, it should be stressed that very high accuracy measurements would be required, due to the relatively small magnitude of WCI processes compared with the main parameters of the flow. Nevertheless, the effect of these processes can become significant with respect to other parameters like tidal energy or sediment transport.
An algorithm for finding weakly reversible deficiency zero realizations of polynomial dynamical systems
Systems of differential equations with polynomial right-hand sides are very common in applications. On the other hand, their mathematical analysis is very challenging in general, due to the possibility of complex dynamics: multiple basins of attraction, oscillations, and even chaotic dynamics. Even if we restrict our attention to mass-action systems, all of these complex dynamical behaviours are still possible. On the other hand, if a polynomial dynamical system has a weakly reversible deficiency zero ($WR_0$) realization, then its dynamics is known to be remarkably simple: oscillations and chaotic dynamics are ruled out and, up to linear conservation laws, there exists a single positive steady state, which is asymptotically stable. Here we describe an algorithm for finding $WR_0$ realizations of polynomial dynamical systems, whenever such realizations exist.
Introduction
By a polynomial dynamical system we mean a system of ODEs with polynomial right-hand side, of the form

dx_1/dt = p_1(x_1, ..., x_n), ..., dx_n/dt = p_n(x_1, ..., x_n),   (1)
where p i (x 1 , . . . , x n ) ∈ R[x 1 , . . . , x n ]. In general, such systems are very difficult to analyze due to nonlinearities and feedback that may give rise to bifurcations, multiple basins of attraction, oscillations, and even chaotic dynamics. The second part of Hilbert's 16th problem (about the number of limit cycles of polynomial dynamical systems in the plane) is still essentially unsolved, even for quadratic polynomials [25]. Even the simplest object associated to (1), its steady state set, is central to real algebraic geometry.
In terms of applications, polynomial dynamical systems often show up in, for example, chemistry, biology, and population dynamics. In these models, the variable x_i typically represents a concentration, a population, or another quantity that is strictly positive, so the domain of (1) is the positive orthant R^n_>.

Not only are the dynamical properties of complex-balanced systems well understood, but also the network and parameter structures that characterize them [21]. While in general there are algebraic conditions on the parameters necessary for complex-balancing, the exception to this rule is the case of weakly reversible and deficiency zero (WR_0) networks: these systems are complex-balanced for any choice of parameters, in a sense that will be made clear below. This fact is very important in applications, because the exact values of the coefficients in the polynomial right-hand sides of these dynamical systems are often very difficult to estimate accurately in practice.
In this paper we describe an efficient algorithm for determining whether a given polynomial dynamical system admits a WR_0 realization, and for finding such a realization whenever it exists (see Algorithm 1). Our algorithm does not require solving the differential equation (1), nor does it require solving for its steady state set. Instead, the algorithm, making use of the geometric and log-linear structure of WR_0 networks, requires as its inputs only the monomials and the matrix of coefficients. If a WR_0 realization exists, in Theorem 3.12 we provide a bijection between the positive steady state set of (1) and the solution set of a system of linear equations.
The paper is organized as follows. In Section 2 we introduce interaction networks as embedded in R^n and formalize their relation to polynomial dynamical systems; we also introduce complex-balanced systems, WR_0 networks, and other relevant notions and results. In Section 3.1 we describe our algorithm for finding a WR_0 realization of a given polynomial dynamical system, whose steady state set is studied in Section 3.2. Our algorithm also applies to the case where the coefficients in the polynomials are unspecified; we consider such systems in Section 3.3.
Background
Throughout this work, we denote by R^n_≥ and R^n_> the sets of vectors with non-negative and positive entries, respectively. Similarly, Z^n_≥ is the set of vectors with non-negative integer components. Vectors are typically denoted x, y, or w. We denote by ẋ the time-derivative dx/dt. For any x ∈ R^n_> and y ∈ R^n, define the operation x^y = x_1^{y_1} x_2^{y_2} ··· x_n^{y_n}. If Y = [y_1 y_2 ... y_n] is a matrix with columns y_i, then x^Y = (x^{y_1}, x^{y_2}, ..., x^{y_n})^⊤. The support of a vector x ∈ R^n is the set of indices supp(x) = {i : x_i ≠ 0}.
Dynamical systems and Euclidean embedded graphs
In this section, we introduce the Euclidean embedded graph (E-graph), a directed graph in R n , and explain how a system of differential equations with polynomial right-hand side (a polynomial dynamical system) is defined by it.
Definition 2.1. A Euclidean embedded graph (E-graph) is a directed graph (V, E), where V is a finite subset of R^n_≥, and there are neither self-loops nor isolated vertices. Denote by V_s ⊆ V the set of source vertices, i.e., the vertices from which at least one edge originates.
Let V = {y 1 , y 2 , . . . , y m }. An edge (y i , y j ), or (i, j) ∈ E, is also denoted y i → y j . Since vertices are points in R n , an edge can be regarded as a bona fide vector between vertices. An edge vector y j − y i is associated to the edge y i → y j .
For the purpose of using E-graphs to study polynomial dynamical systems, we assume V s ⊂ Z n ≥ , even though most results stated in this paper hold for V ⊂ R n ≥ . The set of vertices V of (V, E) is partitioned by its connected components, which we identify by the subset of vertices that belong to that connected component. If every connected component is strongly connected, i.e., every edge is part of a cycle, then (V, E) is said to be weakly reversible.
Two geometric properties of the E-graph will become important to our analysis of polynomial dynamical systems. The first is a notion of affine independence within each connected component; the second is a notion of linear independence between connected components.

Definition 2.2. An E-graph (V, E) has affinely independent connected components if the vertices in each connected component are affinely independent, i.e., if {y_0, y_1, ..., y_r} ⊆ V is a connected component, then the set {y_j − y_0 : j = 1, 2, ..., r} is linearly independent.

Definition 2.3. Let (V, E) be an E-graph. For any U ⊆ V, the associated linear subspace of U is S(U) = span{y_j − y_i : y_i, y_j ∈ U}. The associated linear space of (V, E) is S = span{y_j − y_i : y_i → y_j ∈ E}.

If U defines a connected component of (V, E), then S(U) ⊆ S. Indeed, if V_1, V_2, ..., V_ℓ are the connected components, then S = S(V_1) + S(V_2) + ··· + S(V_ℓ).
Thus far, we have defined an E-graph, and introduced several objects and properties associated to it. We now turn our attention to how such a graph is canonically associated to dynamics, by assigning a positive weight to each edge.

Definition 2.4. Let (V, E) be an E-graph. For each y_i → y_j ∈ E, let κ_ij > 0 be its weight, and let κ = (κ_ij) ∈ R^E_>. The associated dynamical system on R^n_> of the weighted E-graph (V, E, κ) is

ẋ = Σ_{y_i → y_j ∈ E} κ_ij x^{y_i} (y_j − y_i).   (2)

It is sometimes convenient to refer to κ_ij even though y_i → y_j may not be an edge in the network. In such cases, set κ_ij = 0.
Remark 2.5. We defined the domain of (2) to be R^n_>. Systems of ODEs with polynomial right-hand side do not in general leave R^n_> forward-invariant, but if we assume V ⊂ Z^n_≥, the positive orthant R^n_> is indeed forward-invariant under (2) [35]. It is clear that the right-hand side of (2) lies in the associated linear space S, so any solution to (2) is confined to a translate of S. By the above remark, any solution to (2) where V ⊂ Z^n_≥ with initial condition x_0 ∈ R^n_> is confined to (x_0 + S) ∩ R^n_>, which is called the invariant polyhedron of x_0.

Figure 1: Weighted E-graphs from Example 2.6.
Example 2.6. We illustrate the notions and notations defined above. Figure 1 shows three examples of weighted E-graphs. The graphs in Figures 1(a) and 1(b) are weakly reversible, but that in Figure 1(c) is not. The graph in Figure 1(a) has two connected components, each of which is affinely independent; however, those in Figures 1(b) and 1(c) do not have affinely independent connected components.
The associated dynamical system of Figure 1(a) is the system (3). The source vertices play the role of exponents in the monomials; thus the set of source vertices V_s determines the monomials in the associated dynamical system.
It so happens that the weighted E-graphs in Figures 1(b) and 1(c) also have (3) as their associated dynamical system. We say that the three weighted E-graphs in Figure 1 are dynamically equivalent, and that the weighted graphs are realizations of the dynamical system (3); we define these terms precisely in Definition 2.9. This example demonstrates that while a weighted E-graph is associated to a unique dynamical system, the converse is not true; there are in general infinitely many realizations of a given polynomial dynamical system [14]. This work is concerned with finding a realization that guarantees certain algebraic and stability properties.
Another way to study the vector field generated by (2) is to use a linear combination of some fixed vectors, one for each monomial, with the coefficients given by the strength of the monomials at that point. We give a name to those fixed vectors.
Definition 2.7. Let (V, E, κ) be a weighted E-graph, and y_i ∈ V_s. The net direction vector from y_i is

w_i = Σ_{y_j : y_i → y_j ∈ E} κ_ij (y_j − y_i).

The matrix of net direction vectors of (V, E, κ) is W = [w_1 w_2 ··· w_m], with one column for each source vertex. For convenience, we may refer to the net direction vector even if y_i ∉ V_s; in this case, let the net direction vector be zero. Such a net direction vector will not show up as a column of W.
The matrix W from Definition 2.7 is also well defined when we start not with a weighted E-graph, but with a fixed polynomial dynamical system of the form

ẋ = Σ_{i=1}^m x^{y_i} w_i.   (4)

Note that any polynomial dynamical system can be uniquely written in this form, for some distinct y_1, y_2, ..., y_m ∈ Z^n_≥ and non-zero w_1, w_2, ..., w_m ∈ R^n.

Definition 2.8. Consider the polynomial dynamical system (4). The matrix of source vertices Y_s and the matrix of net direction vectors W of (4) are Y_s = [y_1 y_2 ··· y_m] and W = [w_1 w_2 ··· w_m].
Thus far, we started with a weighted E-graph (V, E, κ) and, from it, defined a dynamical system. The goal of the present work is the converse direction: starting with a polynomial dynamical system, find some (V, E, κ), ideally with certain properties, that gives rise to exactly these dynamics. For example, (4) is generated by the graph whose edges are y_i → y_i + w_i, each with weight 1, for i = 1, 2, ..., m. As Example 2.6 illustrates, there are in general many weighted E-graphs that can generate the same dynamics.

Definition 2.9. A realization of a polynomial dynamical system ẋ = f(x) is a weighted E-graph (V, E, κ) whose associated dynamical system is precisely ẋ = f(x). Two realizations of ẋ = f(x) are said to be dynamically equivalent.
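As a concrete illustration of the correspondence between (4) and its trivial one-edge-per-monomial realization, the following sketch builds the graph y_i → y_i + w_i and evaluates the right-hand side f(x) = W x^{Y_s}; the array layout and helper names are our own, not the paper's.

```python
import numpy as np

def canonical_realization(Ys, W):
    """Build the trivial realization y_i -> y_i + w_i (weight 1) of
    xdot = sum_i x^{y_i} w_i, given the columns of Ys and W.

    Ys : (n, m) array, column i is the exponent vector y_i
    W  : (n, m) array, column i is the net direction vector w_i
    Returns a list of weighted edges (source, target, kappa).
    """
    edges = []
    for i in range(Ys.shape[1]):
        source = Ys[:, i]
        target = Ys[:, i] + W[:, i]   # a single edge carries the whole of w_i
        edges.append((source, target, 1.0))
    return edges

def rhs(Ys, W, x):
    """Evaluate f(x) = W @ x^{Ys} for a state x > 0."""
    monomials = np.prod(x[:, None] ** Ys, axis=0)   # x^{y_i} for each column
    return W @ monomials
```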
Lemma 2.10. Two realizations of a polynomial dynamical system have the same matrix of source vertices Y_s and the same matrix of net direction vectors W.

Proof. This follows from the linear independence of monomials as functions on R^n_>.
Complex-balanced systems and WR 0 systems
General polynomial dynamical systems can display a wide range of dynamical behaviours, ranging from stable or unstable steady states to limit cycles and even chaos. In this work, we are interested in the family of complex-balanced systems, which enjoy various algebraic and stability properties.
Definition 2.11. Let (V, E, κ) be a weighted E-graph in R^n, and let ẋ = f(x) be its associated dynamical system. A state x* ∈ R^n_> is said to be a positive steady state if f(x*) = 0. Let V_>(f) be the set of positive steady states. A state x* > 0 is a complex-balanced steady state if at every y_i ∈ V we have

Σ_{y_j : y_j → y_i ∈ E} κ_ji (x*)^{y_j} = Σ_{y_j : y_i → y_j ∈ E} κ_ij (x*)^{y_i}.

The equations above can be interpreted as balancing the fluxes flowing across the vertex y_i. If a weighted E-graph (V, E, κ) admits one complex-balanced steady state, then every positive steady state is complex-balanced [24]; such a (V, E, κ) is called a complex-balanced system.
These systems first arose from the study of chemical systems under mass-action kinetics, as a generalization of thermodynamic equilibrium. The following theorem lists some of the most important results about complex-balanced systems. For more details, see [16, 20, 38].

Theorem 2.12 ([24]). Let (V, E, κ) be a complex-balanced system, with steady state x* ∈ R^n_>, and associated linear space S. Then the following are true: (i) All positive steady states are complex-balanced, and there is exactly one steady state within each invariant polyhedron.
(ii) Any complex-balanced steady state x satisfies ln x − ln x * ∈ S ⊥ .
(iii) The function

L(x) = Σ_{i=1}^n ( x_i (ln x_i − ln x*_i − 1) + x*_i ),

defined on R^n_>, is a strict Lyapunov function within each invariant polyhedron (x_0 + S) ∩ R^n_>, with a global minimum at the corresponding complex-balanced steady state.
(iv) Every complex-balanced steady state is asymptotically stable with respect to its invariant polyhedron.
Besides these properties, complex-balanced systems enjoy other remarkable algebraic and dynamical properties. For example, the set of positive steady states V_>(f) admits a monomial parametrization [9, 32]. Each positive steady state x* is in fact linearly stable with respect to its invariant polyhedron [7, 34]. Complex-balanced systems are also conjectured to be persistent and permanent [13]. Moreover, the unique steady state is conjectured to be globally stable within its invariant polyhedron [23]. The Persistence and Permanence Conjectures have been proved in several cases, such as when there is only one connected component [1, 6], or the ambient state space is R^2 [13], or the E-graph is strongly endotactic [19], or the associated linear space S is of dimension two and all trajectories are bounded [31]. The Global Attractor Conjecture has also been proved if there is only one connected component [1, 6], or the E-graph is strongly endotactic [19], or the ambient state space is R^3 [13], or when the associated linear space S is of dimension at most three [31].
Besides dynamical stability, complex-balanced systems are characterized graph-theoretically and algebraically. Horn proved in [22] that (V, E, κ) is complex-balanced if and only if (V, E) is weakly reversible and κ satisfies some algebraic equations, the number of which is measured by a non-negative integer called the deficiency of (V, E).

Definition 2.13. Let (V, E) be an E-graph with ℓ connected components, and let S be its associated linear space. The deficiency of (V, E) is δ = |V| − ℓ − dim S.

The notion of deficiency can also be applied to the connected components. Suppose V_1, V_2, ..., V_ℓ are the connected components of (V, E). The deficiency of a connected component V_p is δ_p = |V_p| − 1 − dim S(V_p). One always has δ ≥ δ_1 + δ_2 + ··· + δ_ℓ, with equality if and only if S(V_1), S(V_2), ..., S(V_ℓ) are linearly independent. If δ = 0, then necessarily δ_p = 0 for all p.
If (V, E) is weakly reversible and δ = 0, then the associated dynamical system is always complexbalanced, regardless of the choice of κ. This result is known as the Deficiency Zero Theorem [15,21]. The deficiency is a property of the E-graph, not of the associated dynamical system, yet in the case of deficiency zero, it has strong implications on the dynamics. The goal of this paper is to search for weakly reversible and deficiency zero (WR 0 ) realizations for polynomial dynamical systems, which are automatically complex-balanced, and therefore obey the properties listed in Theorem 2.12.
The system (2) admits a matrix decomposition that aids in studying complex-balanced steady states. For a weighted E-graph (V, E, κ) where |V| = m, its associated dynamical system (2) can be decomposed as ẋ = Y A_κ x^Y [24], where Y = [y_1 y_2 ··· y_m] is a matrix whose columns are the vertices (including both sources and targets); x^Y is the vector of monomials whose ith component is x^{y_i}; and the Kirchoff matrix A_κ is the negative transpose of the graph Laplacian of (V, E, κ). In general, the ith component of A_κ x^Y measures the net flux passing through the ith vertex, so a complex-balanced steady state x* is a solution to the equation A_κ x^Y = 0.

Figure 2: A weighted E-graph with two connected components but three terminal strongly connected components (boxed). Its Kirchoff matrix A_κ and a basis for ker A_κ are given in Example 2.15.
The kernel of A_κ is supported on the terminal strongly connected components:

Theorem 2.14. Let (V, E, κ) be a weighted E-graph with t terminal strongly connected components. Then ker A_κ has a basis {c_1, c_2, ..., c_t} with c_p ≥ 0, where supp(c_p) corresponds to the pth terminal strongly connected component.

According to the Matrix-Tree Theorem [9, 21], there is an explicit formula for the entries of c_p. Each non-zero [c_p]_i is a polynomial in the κ_ij with positive coefficients, given by the maximal minors of A_κ [9, 30, 36].
In general, dim ker A_κ = t when (V, E, κ) has t terminal strongly connected components [18]. Therefore, if (V, E) is WR_0, then ker(Y A_κ) = ker A_κ, and the matrix of net direction vectors W coincides with Y A_κ (see Lemma 3.1).
For the purpose of this work, we assume that we are given W and the matrix of source vertices Y s , but we do not know the decomposition of W into the product YA κ , where the columns of Y s are also columns of Y. Because ker A κ is well characterized [17,18,20], we make use of it in our search for WR 0 realizations.
Example 2.15. Consider the weighted E-graph (V, E, κ) in Figure 2. While (V, E) has two connected components, it has three terminal strongly connected components (boxed in Figure 2). With the ordering of vertices as labelled in the figure, the Kirchoff matrix A_κ of (V, E, κ) and a basis {c_1, c_2, c_3} of its kernel can be written down explicitly. The supports of the basis vectors c_p are precisely the terminal strongly connected components of (V, E). If the graph is weakly reversible, then the basis of ker A_κ given in Theorem 2.14 provides a way to partition the set of vertices.
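The Kirchoff matrix of a weighted E-graph is straightforward to assemble numerically; the sketch below (our own helper, with a hypothetical four-vertex graph rather than the one in Figure 2) builds A_κ from a weighted edge list and checks candidate kernel vectors, which is how the supports discussed in Theorem 2.14 and Example 2.15 can be inspected in practice.

```python
import numpy as np

def kirchhoff_matrix(num_vertices, weighted_edges):
    """A_kappa for a weighted E-graph.

    weighted_edges : iterable of (i, j, kappa_ij), meaning y_i -> y_j with weight kappa_ij.
    Built so that (A_kappa @ x^Y)_i is the net flux through vertex i.
    """
    A = np.zeros((num_vertices, num_vertices))
    for i, j, k in weighted_edges:
        A[j, i] += k      # flux produced at vertex j by the edge i -> j
        A[i, i] -= k      # flux leaving vertex i along the same edge
    return A

# Hypothetical 4-vertex example: two 2-cycles (weakly reversible)
edges = [(0, 1, 2.0), (1, 0, 1.0), (2, 3, 0.5), (3, 2, 0.5)]
A = kirchhoff_matrix(4, edges)
# Each kernel vector is supported on one (terminal strongly) connected component
print(A @ np.array([1.0, 2.0, 0.0, 0.0]))   # ~0: supported on {0, 1}
print(A @ np.array([0.0, 0.0, 1.0, 1.0]))   # ~0: supported on {2, 3}
```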
Main results
In this section, we present Algorithm 1 (see also Figure 3), which searches for a weakly reversible and deficiency zero (WR_0) realization of a given system of polynomial differential equations

ẋ = Σ_{i=1}^m x^{y_i} w_i,   (5)

where y_1, ..., y_m ∈ Z^n_≥ are distinct, and w_1, w_2, ..., w_m ∈ R^n \ {0}. Whenever (5) admits a WR_0 realization, the system is complex-balanced and enjoys all the properties listed in Theorem 2.12. Moreover, if no WR_0 realization exists for (5), our algorithm concludes as much. Whenever a WR_0 realization exists, the set of positive steady states has a log-linear structure that allows us to easily find the steady states of (5), as outlined in Theorem 3.12. Finally, our algorithm is valid even if the w_i's are only known up to a positive scalar multiple; we prove this in Theorem 3.13.
Algorithm for WR 0 realization
The inputs of Algorithm 1 are the source vertices and their net direction vectors, via Y_s and W respectively. To find a WR_0 realization (V, E, κ) is to find a matrix decomposition W = Y_s A_κ, where A_κ encodes the graph structure of (V, E). In the following lemma, we prove properties that can be expected should a WR_0 realization exist.
Recall that a set X is a polyhedral cone if X = {x : Mx ≤ 0} for some matrix M. Such a cone is convex. It is pointed, or strongly convex, if it does not contain a positive dimensional linear subspace. Note that a cone contained in the positive orthant R^m_≥ is always pointed. A pointed polyhedral cone admits a unique (up to scalar multiples) minimal set of generators [8]. Define the candidate connected components to be V_p := supp(c_p), for p = 1, 2, ..., ℓ. The core of Algorithm 1 then loops over these candidates:

    for each i ∈ V_p do
11:     if w_i ∉ Cone{y_j − y_i : j ∈ V_p} then
12:         exit (no WR_0 realization exists)
        ...
15:     Add {y_i → y_j : κ_ij > 0} to the edge set E.

Lemma 3.1. Suppose a polynomial dynamical system ẋ = f(x) admits a WR_0 realization (V, E, κ) with ℓ connected components. Let Y_s be the matrix of source vertices, and W the matrix of net direction vectors of the polynomial dynamical system ẋ = f(x). Let S be the associated linear space, and A_κ the Kirchoff matrix of the weighted E-graph (V, E, κ). Then we have:

(i) W = Y_s A_κ and ker W = ker A_κ,
(ii) im W = S,
(iii) ker W ∩ R^m_≥ is a pointed polyhedral cone, and
(iv) a minimal set of generators for ker W ∩ R^m_≥ has ℓ elements, whose supports correspond to the connected components of (V, E).
Proof. Because (V, E) is weakly reversible, all vertices in V are sources. Moreover, because the deficiency of (V, E) is zero, by [11,Proposition 3.5], the net direction vector from any y i is nonzero, and the set of source vertices corresponds exactly to the columns of Y s .
(i) By definition, f(x) = W x^{Y_s}, and by dynamical equivalence, f(x) = Y_s A_κ x^{Y_s}. Since the coefficients of polynomial functions are uniquely determined, W = Y_s A_κ. Because dim(ker Y_s ∩ im A_κ) = δ = 0, we have ker W = ker A_κ.
(ii) Note that

dim(im W) = dim(im A_κ) = |V| − ℓ = dim S,

where the first and last equalities follow from 0 = δ = dim(ker Y_s ∩ im A_κ) = |V| − ℓ − dim S, and the second equality follows from weak reversibility and Theorem 2.14. Clearly im W ⊆ S, so im W = S.
(iii) The set ker W ∩ R m ≥ is the solution to Wν ≥ 0, −Wν ≥ 0, and Id ν ≥ 0; thus the set is a polyhedral cone. That ker W ∩ R m ≥ is pointed follows from it being a subset of R m ≥ . (iv) Let B = {c 1 , c 2 , . . . , c ℓ } be a basis of ker A κ as in Theorem 2.14, where c p ≥ 0, and each V p = {y i : i ∈ supp(c p )} is a connected component of (V, E). Clearly B ⊆ ker W ∩ R m ≥ ; we claim that B is a minimal set of generators for the pointed cone.
Let ν ∈ ker W ∩ R^m_≥ be arbitrary. By (ii), B is a basis for ker W, so decompose ν accordingly:

ν = λ_1 c_1 + λ_2 c_2 + ··· + λ_ℓ c_ℓ,

for some λ_p ∈ R. By Theorem 2.14, each c_p is supported on the connected components of (V, E), which partition the set of vertices. In particular, for each i = 1, ..., m, there is exactly one p with i ∈ supp(c_p), so ν_i = λ_p [c_p]_i; since ν_i ≥ 0 and [c_p]_i > 0, every λ_p is non-negative. Hence ν is a non-negative combination of the elements of B, and since the c_p have pairwise disjoint supports, none of them can be omitted; B is therefore a minimal set of generators of the cone.

Lemma 3.2. If Algorithm 1 exits at line 3, 8, or 12, then the polynomial dynamical system admits no WR_0 realization.

Proof. If the algorithm exits at line 3, then by the contrapositive of Lemma 3.1(iv) no WR_0 realization exists. Continuing with the algorithm, let {c_1, ..., c_ℓ} be a minimal set of generators of ker W ∩ R^m_≥, and partition the vertices as V_p := supp(c_p). If instead the algorithm exits at line 8, then again no WR_0 realization exists, because WR_0 realizations have affinely independent connected components [12, Theorem 9]. Finally, exiting at line 12 means that some net direction vector w_i cannot be decomposed as edges from y_i to other vertices in V_p, which defines a connected component of a WR_0 realization if it exists, according to Lemma 3.1(iv).

Lemma 3.3. If Algorithm 1 reaches line 23, then the weighted E-graph (V, E, κ) it constructs is a realization of the given system whose connected components are exactly V_1, ..., V_ℓ.

Proof. If the algorithm reaches line 23, a realization has been found with edges among V_1, ..., V_ℓ, i.e., the connected components are subsets of the V_p. We prove now that, in fact, each V_p is connected in (V, E).
with c_i > 0. Furthermore, the if statement in line 11 returning false implies that each w_i in (6) can be further decomposed as edges between vertices in U. For any y_i in V*, which is a connected component, the net direction vector w_i is a positive linear combination of edge vectors between y_i and other vertices in V*, so w_i ∈ S(V*). In particular, c* := Σ_{i ∈ V*} c_i w_i ∈ S(V*). Similarly, any vertices in U \ V* are only connected to other vertices in U \ V*. Linear independence of S(V*) and S(U \ V*) means that the vectors c* and c − c* are linearly independent. Both c* and c − c* lie in the cone ker W ∩ R^m_≥, so {c_1, ..., c_ℓ} is not a set of generators, which is a contradiction.

We claim that the minimal set of generators {c_1, c_2, ..., c_ℓ} forms a basis for ker W. Let c ∈ ker W be arbitrary. If c has non-negative components, then it is a linear combination of the c_p's. If c ∉ R^m_≥, then there exist sufficiently large constants µ_p > 0 so that

c + µ_1 c_1 + µ_2 c_2 + ··· + µ_ℓ c_ℓ ∈ ker W ∩ R^m_≥.

This vector is a non-negative combination of c_1, c_2, ..., c_ℓ; thus, c is a linear combination of the generating vectors. Since W ∈ R^{n×|V|}, we have rank W = |V| − ℓ.
Let S be the associated linear space of (V, E), i.e., S = span{y_j − y_i : y_i → y_j ∈ E}. The falsity of the if statement in line 11 implies that im W ⊆ S, so dim S ≥ |V| − ℓ. Hence, the deficiency of (V, E) is δ = |V| − ℓ − dim S ≤ 0. Because δ is always non-negative, we conclude that δ = 0.

We now prove that the first component (V_1, E_1) is strongly connected. Let Y^(1) be the first m_1 columns of Y_s and W^(1) be the first m_1 columns of W, so that Y^(1) A^(1)_κ = W^(1). Let c′_1 ∈ R^{m_1}_> be a vector spanning the one-dimensional subspace ker W^(1). Finally, let S_1 = span{y_j − y_i : y_i, y_j ∈ V_1}.
Because (V_1, E_1) is connected and V_1 is affinely independent, dim S_1 = |V_1| − 1, so δ_1 = 0. Moreover, if t denotes the number of terminal strongly connected components in (V_1, E_1), then dim ker A^(1)_κ = t; since δ_1 = 0, ker W^(1) = ker A^(1)_κ is one-dimensional, so t = 1, and c′_1 also spans ker A^(1)_κ. By Theorem 2.14, c′_1 is supported on the terminal strongly connected component, which in this case is all of V_1. Therefore, (V_1, E_1) is in fact strongly connected.
An analogous claim can be made about the other connected components. Consequently (V, E) is weakly reversible.
The lemmas above provide the technical parts that we need to prove the main result of this paper.
Theorem 3.7. Consider a system of differential equations

ẋ = Σ_{i=1}^m x^{y_i} w_i,

with distinct y_i ∈ Z^n_≥ and w_i ∈ R^n \ {0}. Algorithm 1 returns the unique WR_0 realization of the dynamical system if it exists, or concludes that no WR_0 realization exists.
Proof. There are two possible scenarios: either (1) the algorithm exits at lines 3, 8, or 12 by failing one of the if statements, or (2) the algorithm successfully reaches line 23. In the first scenario, Lemma 3.2 implies that no WR 0 realization exists. In the second scenario, the realization has connected components V 1 , V 2 , . . . , V ℓ according to Lemma 3.3. The realization is weakly reversible and deficiency zero by Lemmas 3.4 and 3.5 respectively. The uniqueness of the realization follows from [11].
Remark 3.8. The uniqueness of the WR 0 realization is also a consequence of Algorithm 1. This is due to the affine and linear independences, as well as the structure of ker W = ker A κ .
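The two checks at the heart of Algorithm 1 (is w_i in the cone spanned by the candidate edge vectors, and with which non-negative weights?) reduce to a small linear-programming feasibility problem. The sketch below is our own illustration of that step, not the authors' code; the function name, the choice of scipy's LP solver, and the input layout are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def edge_weights(y_i, targets, w_i):
    """Find kappa >= 0 with sum_j kappa_j (y_j - y_i) = w_i, if possible.

    y_i     : (n,) source vertex
    targets : list of (n,) vertices y_j in the same candidate component
    w_i     : (n,) net direction vector of y_i
    Returns the weight vector, or None if w_i is not in the cone.
    """
    A_eq = np.column_stack([np.asarray(y_j) - np.asarray(y_i) for y_j in targets])
    res = linprog(c=np.zeros(A_eq.shape[1]),      # pure feasibility problem
                  A_eq=A_eq, b_eq=np.asarray(w_i),
                  bounds=[(0, None)] * A_eq.shape[1],
                  method="highs")
    return res.x if res.success else None
```

A None return at any source vertex corresponds to the exit at line 12 of Algorithm 1: no WR_0 realization exists.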
If a WR_0 realization exists, the polynomial dynamical system is complex-balanced. Therefore, if a system passes Algorithm 1, it automatically inherits all the algebraic and dynamical properties of complex-balanced systems. Weak reversibility implies that a positive steady state exists [5]. The remaining statements in the theorem below are easy consequences of Theorems 2.12 and 3.7.
Theorem 3.9. Suppose the system of differential equations

ẋ = Σ_{i=1}^m x^{y_i} w_i,   (7)

with distinct y_i ∈ Z^n_≥ and w_i ∈ R^n \ {0}, passes Algorithm 1. Let W be the matrix of net direction vectors and S = im W. Then the following hold.
(i) A positive steady state x * exists.
(ii) There is exactly one steady state within every invariant polyhedron (x 0 + S) ∩ R n > for any x 0 ∈ R n > , and it is complex-balanced. (iii) Any positive steady state x satisfies ln x − ln x * ∈ S ⊥ .
(iv) The function

L(x) = Σ_{i=1}^n ( x_i (ln x_i − ln x*_i − 1) + x*_i ),

defined on R^n_>, is a strict Lyapunov function of (7) within every invariant polyhedron (x_0 + S) ∩ R^n_>, with a global minimum at the corresponding complex-balanced steady state. (v) Every positive steady state is locally asymptotically stable with respect to its invariant polyhedron.
This implies that the system (8) has exactly one steady state within each invariant triangle given by 2x_1 + x_2 + x_3 = C for some C > 0, and this steady state is a global attractor within each such triangle. From Theorem 3.9, we know the steady state set admits a monomial parametrization of the form (a_1 s^2, a_2 s, a_3 s) for some constants a_i > 0. In fact, the set of steady states can be written down explicitly; an explanation for its coefficients will be provided in Theorem 3.12.

Example 3.11. Consider a second system of differential equations, again with n = 3 and m = 3. The monomials are the same as those in the previous example. The difference lies in the first column of the matrix of net direction vectors, whose kernel is spanned by c = (2, 1, 1)^⊤. As in the previous example, the vertices y_1, y_2, and y_3 are affinely independent. However, w_1 ∉ Cone{y_j − y_1 : j = 2, 3}, so no WR_0 realization exists.
The set of positive steady states of a WR 0 realization
Algorithm 1 determines whether a given polynomial dynamical system admits a WR 0 realization. If it does, its steady state set is in fact log-linear. In this section, we write down a system of linear equations whose solution set is in bijection with the set of positive steady states; this provides an explicit parametrization of the set of positive steady states.
For any z ∈ R^n and x ∈ R^n_>, define the component-wise operations exp z = (e^{z_1}, e^{z_2}, ..., e^{z_n})^⊤ and log(x) = (log x_1, log x_2, ..., log x_n)^⊤. We extend these operations to sets: if Z ⊆ R^n, then exp(Z) = {exp z : z ∈ Z}, and if X ⊆ R^n_>, then log(X) = {log x : x ∈ X}. Assume that the polynomial dynamical system

ẋ = Σ_{i=1}^m x^{y_i} w_i,   (10)

with distinct y_i ∈ Z^n_≥ and w_i ∈ R^n \ {0}, passes Algorithm 1, i.e., it admits a WR_0 realization (V, E, κ). Without loss of generality, assume the vertices are ordered according to the connected components of (V, E), i.e., the first m_1 vertices belong to the connected component (V_1, E_1), the next m_2 vertices belong to the connected component (V_2, E_2), and so forth. Let {c_1, c_2, ..., c_ℓ} be a minimal set of generators of ker W ∩ R^m_≥, ordered in an analogous way. From Algorithm 1, we know that the supports of the vectors c_1, c_2, ..., c_ℓ correspond to the connected components of (V, E).
Let c_1 = (α_1, α_2, ..., α_{m_1}, 0, ..., 0)^⊤. Define the matrix D_1 ∈ R^{(m_1−1)×n}, whose rows are the affine vectors from y_1 to the remaining vertices of V_1, and define the vector J_1 ∈ R^{m_1−1} using the log-differences of the components of c_1, i.e.,

D_1 = [ (y_2 − y_1)^⊤ ; (y_3 − y_1)^⊤ ; ... ; (y_{m_1} − y_1)^⊤ ]  and  J_1 = ( log(α_2/α_1), log(α_3/α_1), ..., log(α_{m_1}/α_1) )^⊤.

For the connected component (V_p, E_p), define D_p and J_p in a similar fashion. Define

D = [ D_1 ; D_2 ; ... ; D_ℓ ] ∈ R^{(m−ℓ)×n}  and  J = ( J_1 ; J_2 ; ... ; J_ℓ ) ∈ R^{m−ℓ}.   (11)

Theorem 3.12. Suppose the system of differential equations (10) admits a WR_0 realization (V, E, κ), and let D ∈ R^{(m−ℓ)×n} and J ∈ R^{m−ℓ} be defined as in (11). Then the system Dz = J is solvable. Let z* + ker D be its solution set. Then the set of positive steady states of (10) is exp(z* + ker D).
Proof. First we prove that the linear system Dz = J is solvable. Consider D 1 . The vertices y 1 , y 2 , . . . , y m 1 in the first connected component are affinely independent, so the rows of D 1 are linearly independent. Moreover, as noted in Remark 3.6 the row-space of D 1 is the associated linear subspace S(V 1 ). Therefore rank D 1 = m 1 − 1, and the matrix D 1 is surjective onto R m 1 −1 .
Similarly, for each p = 2, . . . , ℓ, the row-space of the matrix D p is S(V p ), and the matrix D p is surjective. In addition, because the realization (V, E, κ) has deficiency zero, S(V 1 ), S(V 2 ), . . . , S(V ℓ ) are linearly independent; in other words, the m − ℓ rows of the matrix D are linearly independent. Consequently, D is surjective, and the system Dz = J is solvable.
Let z* + ker D be the set of solutions to Dz = J. We next show that each solution can be related to a positive steady state of (10), which by definition satisfies

Σ_{i=1}^m x^{y_i} w_i = 0.

In other words, (x^{y_1}, ..., x^{y_m})^⊤ lies in the steady state flux cone ker W ∩ R^m_>. Decomposing this vector with respect to the generators of the cone allows us to focus on one connected component at a time.
For simplicity of notation, consider the first connected component. At steady state, for some constant λ > 0, we have x^{y_j} = λα_j for j = 1, 2, ..., m_1. Thus x^{y_j − y_1} = α_j/α_1 for j = 2, 3, ..., m_1. Taking the logarithm of both sides, we obtain the system D_1 z = J_1 with z = log x.
Repeating this computation for each connected component, we conclude that x is a positive steady state for (10) if and only if x solves Dz = J with z = log x. This leads us to the characterization of the set of positive steady states for (10) as exp(z * + ker D), where z * + ker D is the set of solutions to Dz = J.
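Theorem 3.12 turns the computation of all positive steady states into linear algebra. The following sketch (our own helper names; the generator c and vertex matrix are assumed to come from Algorithm 1, here for a single connected component) builds D and J, solves Dz = J in the least-squares sense, and parametrizes the steady states as exp(z* + ker D).

```python
import numpy as np
from scipy.linalg import null_space

def steady_state_parametrization(Y, c):
    """Log-linear parametrization of positive steady states (one component).

    Y : (n, m1) array, columns are the vertices y_1, ..., y_{m1} of V_1
    c : (m1,) positive generator of ker W supported on V_1 (alpha_1, ..., alpha_{m1})
    Returns z_star and a basis K of ker D; steady states are exp(z_star + K @ t).
    """
    D = (Y[:, 1:] - Y[:, [0]]).T              # rows are y_j - y_1, j = 2..m1
    J = np.log(c[1:] / c[0])                  # log-differences log(alpha_j / alpha_1)
    z_star, *_ = np.linalg.lstsq(D, J, rcond=None)   # one solution of D z = J
    K = null_space(D)                         # ker D: remaining degrees of freedom
    return z_star, K

def steady_state(z_star, K, t):
    """A positive steady state for parameter vector t (len = dim ker D)."""
    return np.exp(z_star + K @ t)
```

Because Theorem 3.12 guarantees that Dz = J is solvable, the least-squares step returns an exact solution up to round-off, and varying t sweeps out the whole positive steady state set.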
Extension to polynomial systems with unspecified coefficients
If, instead of (10), we need to analyze

ẋ = Σ_{i=1}^m a_i x^{y_i} w_i,   (12)

for some unknown a_i > 0, it turns out that the answer as to whether a WR_0 realization exists is the same:

Theorem 3.13. For any a_i > 0, the system (12) admits a WR_0 realization (V, E, κ) if and only if the system (10) admits a WR_0 realization (V, E, κ*). Moreover, κ_ij = a_i κ*_ij.
Proof. The forward implication is trivial. We focus our attention on the other direction. For any i, j, let κ ij = a i κ * ij , so κ ij > 0 if and only if κ * ij > 0. In other words, the weighted E-graph (V, E, κ) shares the same set of edges as (V, E, κ * ). Because the deficiency is characterized by affine and linear independence of the connected components, and the two graphs share the same structure, (V, E, κ) is weakly reversible and deficiency zero if and only if (V, E, κ * ) is.
Deficiency zero realizations that are not weakly reversible
If a polynomial dynamical system admits a deficiency zero realization that is not weakly reversible, then its dynamics is also greatly restricted: it can have no positive steady states, no oscillations, and no chaotic dynamics [15, 17, 21]. In fact, such realizations are special examples of mass-action systems that are not consistent [2]. An E-graph (V, E) is said to be consistent if there exist real numbers α_ij > 0 such that Σ_{(i,j)∈E} α_ij (y_j − y_i) = 0.
It is easy to see that a polynomial dynamical system of the form (10) has a realization that is not consistent if and only if ker W ∩ R m > = ∅.
If a polynomial dynamical system has a realization that is not consistent, then it cannot have realization that is weakly reversible, because weakly reversible systems must have at least one positive steady state [5]. Therefore, if Algorithm 1 is accompanied by a preprocessing step that checks condition (13), then that step will decide whether our given system (10) has a realization that is not consistent; in particular, this step will also find all cases where our given system has a deficiency zero realization that is not weakly reversible.
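Condition (13), i.e. whether ker W ∩ R^m_> is empty, can itself be checked by a small linear program: look for ν in the kernel with every component bounded away from zero (ν ≥ 1 suffices, since the kernel is scale-invariant). The sketch below is an illustrative version of such a preprocessing step, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def has_positive_kernel_vector(W):
    """True iff ker W contains a strictly positive vector.

    Scale-invariance of ker W means 'nu > 0' can be replaced by 'nu >= 1'.
    """
    m = W.shape[1]
    res = linprog(c=np.zeros(m), A_eq=W, b_eq=np.zeros(W.shape[0]),
                  bounds=[(1, None)] * m, method="highs")
    return res.success

# If this returns False, the given system has a realization that is not
# consistent, and hence no weakly reversible (in particular no WR_0)
# realization can exist.
```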
Construction of a Stable Lanthanide Metal-Organic Framework as a Luminescent Probe for Rapid Naked-Eye Recognition of Fe3+ and Acetone
Four lanthanide metal-organic frameworks (Ln-MOFs), namely {[Me2NH2][LnL]·2H2O}n (Ln = Eu 1, Tb 2, Dy 3, Gd 4), have been constructed from a new tetradentate ligand, 1-(3,5-dicarboxylatobenzyl)-3,5-pyrazole dicarboxylic acid (H4L). These isostructural Ln-MOFs, crystallizing in the monoclinic P21/c space group, feature a 3D structure with 7.5 Å × 9.8 Å channels along the b axis and the point symbol {4^10.6^14.8^4}{4^5.6}2. The framework shows high air and hydrolytic stability, remaining intact after exposure to humid air for 30 days or immersion in water for seven days. The four MOFs with different lanthanide ions (Eu3+, Tb3+, Dy3+, and Gd3+) exhibit red, green, yellow, and blue emissions, respectively. The Tb-MOF, emitting bright green luminescence, can selectively and rapidly (<40 s) detect Fe3+ in aqueous media via a fluorescence quenching effect. The detection shows excellent anti-interference ability toward many other cations and can be easily recognized by the naked eye. In addition, it can also be utilized as a rapid fluorescent sensor to detect acetone solvent as well as acetone vapor. Similar sensing results were observed for the Eu-MOF. The sensing mechanisms are further discussed.
Introduction
Environmental and ecological problems have caused widespread concern among scientists in recent years. As an essential metal element in human metabolism, the Fe3+ ion plays a crucial role in muscle function, brain function, and hemoglobin [1,2]. Serious problems can be caused by overload or deficiency of the Fe3+ ion, such as immunosuppression, cognitive decline, and iron-deficiency anemia [3]. Likewise, acetone (CH3COCH3), one of the key members of the volatile organic solvents (VOCs), has attracted ever-growing attention not only in the laboratory and industry but also in the household, since it can cause irreversible damage to the human body, such as inhibiting breathing and causing dyspnea [4]. Accordingly, rapid and convenient detection of Fe3+ and acetone is increasingly essential for the environmental and ecological system.
In recent decades, metal-organic frameworks (MOFs) have drawn increasing research interest on account of their unique structural features, such as permanent porosity and tunable structures and functions [5][6][7][8]. The potential applications of MOFs include, but are not limited to, catalysis [9][10][11], sensing [12][13][14][15][16][17][18][19][20], optics [21][22][23][24][25], gas storage and separation [26][27][28][29], bio-imaging, and drug delivery [30][31][32][33]. In particular, lanthanide metal-organic frameworks (Ln-MOFs), displaying bright luminescence with large Stokes shifts, high quantum yields, and long lifetimes, are widely used as versatile materials in optics and sensing [34]. Great efforts have been made toward the recognition of environmentally and biologically relevant species based on luminescent Ln-MOFs [35,36]. However, most of the MOFs reported for luminescent sensing are applied in non-aqueous media because of their poor hydrolytic stabilities [37]. In addition, reports of rapid detection are few. The poor structural stabilities and the time consumed in detection limit the further applications of MOFs in luminescent sensing. In light of these facts, we attempt to construct multifunctional Ln-MOFs for convenient and rapid sensing in aqueous media. Herein, a tetracarboxylic acid ligand, 1-(3,5-dicarboxylatobenzyl)-3,5-pyrazole dicarboxylic acid (H4L, Scheme 1), which can provide versatile coordination modes and Lewis base sites, was employed to construct four novel MOFs {[Me2NH2][LnL]·2H2O}n (Ln = Eu 1, Tb 2, Dy 3, and Gd 4). The MOFs demonstrate excellent air and water stabilities as well as colorful luminescence. Based on these virtues, a convenient and fast-response luminescent sensor toward the Fe3+ ion in aqueous solution with good anti-interference ability was fabricated. It also shows potential for quickly sensing acetone solvent as well as its vapor in air. The simplicity, visualization, and high time efficiency of this method make it a competitive dual-functional fluorescent probe. Crystal data and refinement results for compounds 1-4 are listed in Table 1.
Results and Discussion
Single crystal X-ray diffraction analysis (SCXRD) shows that the four compounds are isomorphic. Therefore, we choose 4 (Gd-MOF) as a representative to describe the structure. Crystal structure analysis reveals that it crystallizes in space group P21/c. The asymmetric unit contains a crystallographically independent Gd3+ ion, a fully deprotonated [L]4− ligand, and an isolated dimethylammonium cation (Figure S2). In the compound, the coordination environment of the nine-coordinated gadolinium center is composed of six oxygen atoms from three chelating bidentate carboxylate groups and three oxygen atoms from three µ2-bridging carboxylate groups (Figure 1a). The Gd1 atom and its symmetry-generated counterpart are bridged by four bidentate carboxylic groups of four [L]4− ligands to form a binuclear metal cluster unit [Gd2(COO)4] (Figure 1b). The ligand adopts a µ6-bridging mode connecting six Gd3+ ions from four [Gd2(COO)4] clusters, where two carboxylate groups adopt a chelating bidentate mode and the other two carboxylate groups adopt a µ2-bridging bidentate mode, respectively (Figure 1c and Figure S3). Each [Gd2(COO)4] cluster is connected with the other 14 binuclear clusters through eight [L]4− ligands (Figure 1d). As shown in Figure 1e, the binuclear metal cluster units are further interconnected through the organic linkers to form an infinite 3D framework, showing a 1D open channel along the b-axis with an approximate size of 7.5 Å × 9.8 Å.
(Table 1 footnotes: a, b, and c are the unit-cell vectors; α, β, and γ are the unit-cell angles between them; Z is the number of molecules in the unit cell; F(000) is the total number of electrons in the unit cell; R1 and wR2 are residual factors; I is the diffraction intensity; σ is the standard deviation.)
Topological analysis [38] shows that the structure can be simplified as a new (4,8)-connected net with stoichiometry (4-c)(8-c)2 and the point symbol {4^10.6^14.8^4}{4^5.6}2.
Framework Stability
The framework stability of MOFs is of vital importance for their practical applications. Therefore, the following experiments were carried out. First, thermogravimetric analyses (TGA) of the four compounds were performed (Figure S5). The results show that the thermogravimetric curves of the four samples overlap. When the temperature rises to 180 °C, the weight loss is about 6.3%, corresponding to the escape of the two water molecules. The frameworks remain stable up to 288 °C and begin to collapse above this temperature. Since the MOFs have the same frameworks, 2 was selected for the further tests. A thermo-diffractogram obtained from 30 °C to 450 °C demonstrates that the crystalline structure remains stable from 30 to 200 °C, changes slightly at 300 °C, and finally collapses when the temperature reaches 400 °C (Figure S6a). This observation is in accordance with the TGA result. The air stability was determined by collecting PXRD patterns after a period of exposure to the atmosphere at a humidity of ca. 55%. The unchanged PXRD patterns reveal that the framework can survive in humid air for more than 30 days (Figure S6b). The PXRD patterns of the samples immersed in water show that the water stability of the framework is excellent (it remains stable for more than 7 days, Figure S6c). The chemical stability of the MOF was also evaluated. The samples were immersed in aqueous solutions with pH values varying from 2 to 11 for 42 h. Then, the filtered samples were dried in air. The nearly unchanged PXRD patterns for pH values from 5 to 10 indicate relatively good chemical stability (Figure S6d).
Luminescent Properties
The luminescence properties of the ligand and compounds 1-4 in the solid state are shown in Figure 3 and Figure S7. Ln3+ ions suffer from weak light absorption, with absorption coefficients generally less than 10 M−1 cm−1, due to the "Laporte forbidden" nature of transitions within the 4f^n configurations of the Ln3+ ions. According to the well-known "antenna effect" theory founded by Weissman in 1942 [39][40][41], the organic ligand can efficiently absorb light and transfer this energy to the excited states of the central lanthanide ions, thereby overcoming the weak absorption of the lanthanide and improving the emission efficiency. When excited at 297 nm, 1 shows the characteristic red emission with peaks at 580 nm, 591 nm, 613 nm, 615 nm, and 700 nm, corresponding to the 5D0 → 7FJ (J = 1-4) transitions of the Eu3+ centers (Figure 3a). Under excitation at 299 nm, 2 exhibits characteristic green emission with peaks at 488 nm, 543 nm, 584 nm, and 619 nm, ascribed to the 5D4 → 7FJ (J = 6, 5, 4, and 3) transitions of the Tb3+ ions (Figure 3b). 3 displays yellow emission bands at 479 nm, 573 nm, 663 nm, and 751 nm upon excitation at 287 nm, which are due to the 4F9/2 → 6HJ (J = 15/2, 13/2, 11/2, 9/2) transitions of the Dy3+ ions (Figure 3c). Similar bands around 300 nm observed in the excitation spectra (dashed lines, Figure 3) of the three complexes may be ascribed to the n → π* or π → π* transitions of the ligand. The ligand displays a broad band at 370-550 nm upon excitation at 346 nm (Figure S7a). Unlike Eu3+, Tb3+, and Dy3+, the excited energy levels of the Gd3+ ion are too high for energy transfer from the lowest triplet state (T1) of the ligand to the Gd3+ center, giving rise to a broad blue emission similar to that of the pure ligand (Figure S7b). The phosphorescence spectrum of 4 at 77 K reveals that the triplet state energy level T1 of the ligand is 24,570 cm−1 (407 nm). The Commission Internationale de l'Éclairage (CIE) 1931 chromaticity diagram shows the different emission colors of the compounds (Figure 3d). The room-temperature quantum yields and lifetimes of compounds 1, 2, and 3 were also measured and are shown in Table S1 and Figure S8. Compared with the low quantum yield of 3, those of 1 and 2 are as high as 39.29% and 57.77%, respectively, indicating that Eu3+ and Tb3+ are better sensitized by the ligand than Dy3+. This is further supported by the longer lifetimes of compounds 1 and 2.
Sensing of Cations
The porous characteristics, excellent hydrolytic stability, and intense luminescence presented above inspired us to explore the potential of the Ln-MOFs for chemical sensing in aqueous media. 2 was selected for cation detection because it has the highest luminescent efficiency among the four compounds. First, 3 mg of 2 was dispersed in 3 mL of water, and the fluorescence intensity of the suspension was measured as the blank sample. Then 2 was dispersed in 1 mM aqueous M(NO3)x solutions of different metal ions (M = Li+, Ni2+, Mg2+, In3+, K+, Al3+, Zn2+, Ca2+, Cu2+, or Fe3+). The fluorescence spectra of the suspensions show that the intensity at 543 nm exhibits only slight or negligible changes in the presence of most ions except the Fe3+ cation (Figure 4a and Figure S9). A remarkable quenching occurs in the Fe3+ solution, which can be easily observed by the naked eye under ultraviolet (UV) irradiation. This indicates that 2 can be used to selectively detect the Fe3+ ion in aqueous solution. The concentration-dependent luminescence was studied in the presence of Fe3+ from 0 to 5 × 10−4 M (Figure 4b). The quenching effect can be clearly observed at a concentration of 6 × 10−6 M (Figure 4c,d). The fluorescence quenching efficiency is calculated using the formula (1 − I/I0) × 100%, and the results suggest that 2 shows high sensitivity in fluorescence quenching sensing. The quenching effect coefficient (Ksv), which quantifies the quenching effect, is calculated according to the Stern-Volmer (SV) equation I0/I = 1 + Ksv[M], where I0 and I represent the luminescent intensities before and after analyte addition, respectively, and [M] represents the molar concentration of metal ions. In the concentration range from 0 to 3 × 10−4 M, the Stern-Volmer quenching curve is close to linear, with a correlation coefficient of 0.993 and the fitted linear equation I0/I = 1.02288 + 6800[Fe3+]. However, there is no linear correlation but only an upward curve when the concentration exceeds 3 × 10−4 M. This non-linear behavior suggests that the quenching can be attributed to a combination of dynamic and static quenching [42,43].
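The linear portion of a Stern-Volmer plot can be fitted with a one-line least-squares regression; the sketch below is a generic illustration of that analysis (the arrays are placeholders for the measured concentrations and intensities, not data from this work).

```python
import numpy as np

def stern_volmer_fit(conc, I0, I):
    """Fit I0/I = intercept + Ksv * [M] over the linear (low-concentration) range.

    conc : analyte concentrations (M), 1-D array
    I0   : intensity of the blank suspension
    I    : intensities measured at each concentration, same shape as conc
    Returns (Ksv, intercept, r), where r is the correlation coefficient.
    """
    ratio = I0 / np.asarray(I, dtype=float)
    ksv, intercept = np.polyfit(conc, ratio, deg=1)      # slope = Ksv
    r = np.corrcoef(conc, ratio)[0, 1]
    return ksv, intercept, r
```

Restricting conc to the 0 to 3 × 10−4 M window yields a fit of the same form as the one reported above (I0/I = intercept + Ksv[Fe3+]).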
During the experiments, it was found that the fluorescence of the sample can be quenched by Fe3+ ions within a remarkably short contact time, so we studied the relationship between the Fe3+ contact time and the fluorescence intensity. The results show that the maximum quenching efficiency is achieved within 40 s, after which it remains unchanged (Figure 5a and Figure S10). This phenomenon indicates that 2 has an ultrafast fluorescence response to the Fe3+ ion, which is expected to be applicable to real-time sensing. In order to study the anti-interference ability toward other cations, sensing experiments were carried out in aqueous solution with a mixture of the Fe3+ ion and other metal cations. The emission intensity of 2 in an aqueous solution of mixed cations (Li+, Ni2+, Mg2+, In3+, K+, Al3+, Zn2+, Ca2+, and Cu2+, 1 × 10−3 M each) shows only slight quenching compared with the intensity in pure water (Figure S11). Once the Fe3+ ion (1 × 10−3 M) was added, the fluorescence intensity of 2 in the other cationic solutions immediately showed a dramatic decrease (Figure 5b). This indicates that the presence of interfering ions has little influence on the fluorescence detection of Fe3+ by 2.
Furthermore, a fluorescent test paper was made to realize a simple, portable, and real-time Fe3+ ion sensor. A capillary dipped in 0.01 M Fe3+ aqueous solution was used to write "Fe" on the fluorescent test paper. The written area was quenched immediately under the irradiation of a UV lamp (Figure 5c). The above experimental results indicate that 2 could be employed as a convenient, rapid, and easily recognized probe for Fe3+ with good anti-interference ability.
Sensing of Organic Molecules
Volatile organic solvents (VOCs) are widely used in laboratory and industrial products. However, they have an adverse impact on human health and the environment. As one of the most rapidly developing classes of fluorescent probe materials, MOFs are widely used in the detection of organic molecules. To explore the potential application of 2 as a fluorescent sensor for VOCs, a certain amount of its powder was ground and dispersed in different organic solvents, including n-BuOH, EtOH, MeOH, N,N-dimethylformamide (DMF), i-PrOH, CH2Cl2, C4H10O, CH3CN, tetrahydrofuran (THF), benzene, toluene, n-pentane, and acetone. The photoluminescence test was carried out after ultrasonically dispersing the sample evenly. As shown in Figure 6a and Figure S12, the 5D4 → 7F5 fluorescence intensity of 2 shows no significant change in most organic solvents, while the intensity drops abruptly when 2 is dispersed in acetone. This suggests that 2 may serve as a selective sensor toward acetone solvent. Then, a concentration-dependent experiment was carried out. Different concentrations of acetone were dissolved in 3 mL of DMF, and 3 mg of 2 was then added. The quenching efficiency of the emulsion shows a gradual increase with growing acetone concentration. A significant variation can be observed when the concentration of acetone is as low as 0.04 vol.%. From 0.04 vol.% to 0.2 vol.%, the increasing trend of the luminescent quenching efficiency of 2 at 543 nm versus the volume ratio of acetone can be fitted with a first-order exponential decay. In addition, the quenching efficiency of 2 toward acetone reaches 70% at 0.5 vol.% (Figure 6b,c).
The response time of 2 to acetone in the liquid phase was also tested. The result demonstrates that the response of 2 to acetone is complete within 40 s (Figure 6d and Figure S13). This phenomenon indicates that the probe responds to acetone very quickly. The apparent brightness change, which can be observed with the naked eye under UV irradiation, indicates that 2 can be used to detect acetone conveniently (Figure 6e).
Since acetone is extremely volatile and its vapor is highly toxic, the detection of acetone vapor is of vital importance. Although some luminescent sensors for liquid acetone have been reported, few efforts have been made toward the rapid detection of acetone vapor [44,45]. Upon exposure to saturated acetone vapor for 40 s, the fluorescence intensity of the solid sample of 2 drastically decreases by 69.2% (Figure 6f). At 60 s, the decrease reaches 87.8%. Moreover, the decline in intensity reaches 94.5% when 2 is exposed to acetone vapor for 140 s. These phenomena indicate that compound 2 can detect acetone vapor within an extremely short time.
Sensing Mechanism
The superb detection performance of the compounds prompted us to explore the possible mechanism of the luminescence quenching caused by Fe 3+ and acetone. First, the PXRD patterns of the samples after the sensing experiments were recorded (Figures S14 and S15). The nearly unchanged patterns show that the samples retain their crystalline structure after treatment, indicating that the luminescence quenching is not caused by collapse of the framework. In light of previous studies, competitive energy absorption between the analytes and the Ln-MOF is a likely reason for the luminescence quenching. With this in mind, the UV-vis spectra of Fe 3+ and the other metal ions (measured in aqueous solution, 2 × 10 −4 M) and of acetone (measured in DMF solution, 0.2 vol.%) were recorded. As shown in Figure S16, only the UV-vis absorption spectrum of Fe 3+ shows significant absorption around 295 nm, which clearly overlaps with the excitation spectrum of 2 (Figure 3b). This implies that Fe 3+ and 2 compete for absorption of the excitation energy, which leads to the quenching behavior. Similarly, spectral overlap is observed between the excitation spectrum of compound 2 and the UV-vis absorption spectrum of acetone in the range of 260 nm to 323 nm (Figure S17).
Based on the above results, we surmise that the main cause of the fluorescence quenching is competitive energy absorption between the analytes (Fe 3+ and acetone) and the Ln-MOFs. To check whether other mechanisms are also involved, UV-vis absorption spectra of Fe 3+ aqueous solutions at different concentrations were recorded. As Figure S18 shows, there is almost no absorption near 300 nm when the concentration of Fe 3+ ions is lowered to 6 × 10 −6 M, yet ca. 8.5% luminescence quenching still occurs, indicating that another factor contributes to the quenching. Furthermore, it is worth noting that 2 shows clear fluorescence quenching toward acetone vapor. In that experiment, the luminescence of solid samples rather than suspensions was recorded, so the contribution of competitive absorption by acetone can be largely neglected. These facts suggest that an additional mechanism operates in the quenching process.
To further investigate this possibility, the isomorphic compound 1 was chosen to perform the same sensing tests with metal ions and organic solvents. The outcomes are consistent with those of 2 (Figures S19 and S20): the luminescence of the Eu-MOF also responds selectively to Fe 3+ and acetone. These results reasonably indicate that the additional sensing mechanism does not originate from the Tb 3+ ions but may be associated with features of the framework itself. According to the well-known "antenna effect", the emissions arise from energy transfer from the ligand to the lanthanide ions. If guest molecules enter the framework, this energy transfer may be perturbed, thereby affecting the luminescence intensity [39-41]. In light of previous research [46-49], we propose that the Fe 3+ ions and acetone molecules may be adsorbed by the MOFs and interact with the exposed Lewis basic pyridyl active sites introduced by the ligand, ultimately lowering the efficiency of the energy transfer from the ligand to the lanthanide centers.
In summary, multiple quenching mechanisms may operate together in the sensing of Fe 3+ ions and acetone: both competitive energy absorption and the influence of guest molecules on energy transfer may account for the quenching behavior of the compounds toward Fe 3+ ions and acetone.
Chemicals and Reagents
All chemicals and solvents including the H 4 L ligand were purchased commercially and used without further purification.
Apparatus
Powder X-ray diffraction (PXRD) data were recorded using a Rigaku Miniflex 600 diffractometer (Rigaku, Tokyo, Japan). Infrared (IR) spectra were recorded on KBr pellets using a PerkinElmer Spectrum One FT-IR (Fourier-transform infrared spectroscopy) spectrometer (PerkinElmer, Dublin, Ireland) in the range of 400 to 4000 cm −1 . Elemental analyses (C, H, and N) were measured on an Elementar Vario EL III instrument (Elementar Analysensysteme GmbH, Langenselbold, Germany). Thermogravimetric analyses (TGA) were performed on a Netzsch STA 449c instrument (Netzsch Corporation, Selb, Germany) from ambient temperature to 900 °C at a heating rate of 10 °C/min under a flowing nitrogen atmosphere. UV-vis spectra were recorded on a Lambda 365 spectrophotometer (PerkinElmer, Waltham, MA, USA). The photoluminescence spectra of solid-state samples were collected on a Horiba Jobin-Yvon Fluorolog-3 fluorescence spectrometer (HORIBA Jobin Yvon, Kyoto, Japan). An Edinburgh Analytical Instruments FLS920 (Edinburgh Instruments, Edinburgh, UK) was used to measure the fluorescence lifetimes, and the overall photoluminescence quantum yields were determined using an integrating sphere coated with barium sulfate at room temperature.

Synthesis of {[Me 2 NH 2 ][LnL]·2H 2 O} n (Ln = Eu 1, Tb 2, Dy 3, Gd 4) (Compounds 1-4)

In total, 45 mg of Ln(NO 3 ) 3 ·6H 2 O and 33 mg of H 4 L were dissolved in DMF (N,N-dimethylformamide) (5 mL) and H 2 O (5 mL). The solution was sealed in a 25-mL Teflon-lined stainless-steel autoclave and heated at 160 °C for 72 h. The mixture was then cooled to room temperature at a rate of 1 °C/min. The resulting colorless crystals, {[Me 2 NH 2 ][LnL]·2H 2 O} n , were washed with DMF and methanol several times and then evacuated to remove the co-assembled DMF and methanol from the pores of the MOFs. The yields are 68%, 72%, 76%, and 75% for compounds 1-4, based on the metal ions.
Single-Crystal X-ray Crystallography
All single-crystal X-ray diffraction data were collected on a SuperNova diffractometer with multilayer-mirror monochromated Cu Kα radiation (λ = 1.5418 Å) or Mo Kα radiation (λ = 0.7107 Å). The structures of compounds 1-4 were solved by direct methods and refined by full-matrix least-squares on F 2 using SHELX-97 [50]. Except for some disordered water molecules, all non-hydrogen atoms were refined anisotropically. Only the [Me 2 NH 2 ] + counterion in the Gd 3+ MOF could be located from the single-crystal X-ray diffraction data. The presence of [Me 2 NH 2 ] + in the other three MOFs and of the water molecules was further verified by the IR spectra, elemental analyses, and TGA data, giving the final chemical formulas of compounds 1-4. The CCDC (Cambridge Crystallographic Data Centre) numbers for compounds 1-4 are 2043852, 2043854, 2043855, and 2043853, respectively. See Tables S2-S5 for detailed crystallographic data.
Sensing Experiment
In metal ion sensing, 3 mg of activated MOF was ground to a fine powder for the experiments in aqueous solutions. The samples were then dispersed into 3 mL of various M(NO 3 ) x aqueous solutions and ultrasonically treated to obtain uniform suspensions. The suspensions were then loaded into cuvettes for collection of the photoluminescence spectra. The fluorescence intensity of the suspension in pure water was measured as the blank.
For the Tb-MOF paper sensor, a uniformly dispersed suspension was obtained by ultrasonic dispersal of the Tb-MOF in dichloromethane (CH 2 Cl 2 ), and the suspension was then spread evenly over filter paper. After the dichloromethane had evaporated and the filter paper had dried, the Tb-MOF-coated paper sensor was obtained. A capillary dipped in 0.01 M Fe 3+ ion aqueous solution was used to write "Fe" on the fluorescent test paper.
In organic solvent sensing, the experiments were carried out by dispersing 3 mg of activated and ground MOF into 3 mL of different organic solvents. After ultrasonically dispersing the samples evenly, the suspensions were transferred into cuvettes and the photoluminescence tests were carried out. The fluorescence intensity of the suspension in DMF was measured as the blank.
In acetone vapor sensing, ground Tb-MOF powder was spread on a quartz slide to detect acetone vapor. A culture dish containing acetone was placed inside an inverted crystallizing dish to obtain saturated acetone vapor. After three hours, the quartz slide was quickly placed inside the inverted crystallizing dish so that the sample powder was exposed to the saturated acetone vapor. After the exposure, the quartz slide was placed in the fluorescence spectrometer for solid-state fluorescence detection.
Conclusions
In summary, a tetradentate carboxylic ligand with naked Lewis basic pyridyl active sites was employed to construct a series of lanthanide MOFs by the solvothermal method.
The synthesized compounds are isostructural, with the topological point symbol {4 10 .6 14 .8 4 } {4 5 .6} 2 . The Eu-, Tb-, and Dy-MOFs display red, green, and yellow emissions with maxima at 614, 543, and 574 nm, respectively. The high quantum yields of the Eu- and Tb-MOFs suggest that Eu 3+ and Tb 3+ are better sensitized by the ligand than Dy 3+ , which is also supported by their decay lifetimes. The material can detect Fe 3+ in aqueous solution through a luminescence quenching effect with high selectivity, high sensitivity, and a rapid response (<40 s). The detection shows excellent anti-interference ability toward many other cations and can be easily recognized by the naked eye. The MOF can also sense acetone, both as a solvent and as a vapor, selectively and rapidly. Thus, it may serve as a rapid luminescent probe for detecting Fe 3+ and acetone that can work under complicated conditions. Furthermore, the framework shows not only good thermal stability but also excellent air and water stability, providing a basis for practical applications.
Structural characterization of polysaccharide from jujube (Ziziphus jujuba Mill.) fruit
Jujube (Ziziphus jujuba Mill.) fruit is one of the largest fruit crops in China, and its increasing production has drawn considerable attention from researchers. Polysaccharide is one of the most abundant components of jujube, and it represents a major group of biologically active constituents. This study investigated the structure of a homogeneous acidic polysaccharide (PZMP4) isolated from Ziziphus jujuba cv. Muzao fruit by DEAE-Sepharose Fast Flow and Sephacryl S-300 column chromatography. The structure of PZMP4 was determined via high-performance gel permeation chromatography (HPGPC), gas chromatography (GC), Fourier transform infrared spectroscopy (FT-IR), methylation analysis, nuclear magnetic resonance spectroscopy (NMR), scanning electron microscopy (SEM), and atomic force microscopy (AFM). The results reveal that PZMP4, with a molecular weight of 27.90 kDa, was composed of rhamnose, arabinose, mannose, glucose, galactose, and galacturonic acid at a ratio of 2.32:2.21:0.22:0.88:2.08:8.83. Advanced structural analysis revealed a netted structure with molecular aggregates of PZMP4. Structural features demonstrated that the basic backbone of PZMP4 appeared to consist mainly of (1→4)-linked GalpA with three branches bonded at O-3: (1→3)-linked Araf, (1→2)-linked Rhap, and terminal GalpA. PZMP4's unique structure could imply distinct bioactivities and considerable utility in functional foods. Structural characteristics of PZMP4 were analyzed by HPGPC, GC, FT-IR, GC-MS, NMR, SEM and AFM. PZMP4 mainly consisted of (1→4)-linked GalpA with three branches bonded at O-3. The branches included (1→3)-linked Araf, (1→2)-linked Rhap, and terminal GalpA.
Of the various functional components in Z. jujuba fruit, polysaccharide is especially important because of its bioactivities and large cellular concentrations. It is mainly composed of different ratios of monosaccharides and glycosidic bonds [5]. The activities of polysaccharides of Z. jujuba are determined by their molecular weights and chemical structures. An increased galacturonic acid concentration could result in enhancing antioxidant activity [6]. The structural-physicochemical properties and bioactivities of polysaccharides vary greatly among different Z. jujuba varieties [1]. Our research efforts have contributed to a better understanding of the structural basis of jujube polysaccharides [7][8][9].
Materials
The Z. jujuba cv. Muzao fruits were supplied from Jia County, Shaanxi Province. Sephacryl S-300 and DEAE-Sepharose Fast Flow cellulose were provided by GE Healthcare Life Sciences. Standard monosaccharides were obtained from Sigma Chemical Co. All additional chemicals utilized in the experiments were of analytical grade.
Polysaccharide isolation
Crude polysaccharide from Z. jujuba cv. Muzao (ZMP) was prepared from jujube fruit at the red and ripened stage as previously described [10,11]. After re-dissolution, the ZMP was applied to a DEAE-Sepharose FF column and eluted with 0.4 M NaCl, and then to a Sephacryl S-300 column eluted with distilled water. The eluate was collected, concentrated, and lyophilized to obtain the purified polysaccharide, designated PZMP4 [7].
General methods
The carbohydrate content was determined by the phenol-sulfuric acid method with glucose as the standard [12]. The Bradford method with bovine serum albumin as the reference was used to assess the protein content [13]. The Folin-Ciocalteu colorimetric method was used to quantify the value of the total phenol content [14].
To identify and quantify PZMP4 monosaccharide, GC analysis was performed as reported previously [15]. HPGPC on an Agilent-LC 1200 instrument equipped with a TSK-gel G3000PWxl (7.8 mm × 300 mm) column was used to analyze the homogeneity and average molecular weight of PZMP4 [7,16].
The IR spectra of PZMP4 were obtained by the KBr disc method, with 400-4000 cm −1 range for the FT-IR spectrometer. The one-and two-dimensional NMR spectra of PZMP4 were acquired with a Bruker AVIII-600 NMR spectrometer [8].
The surface morphology of PZMP4 was examined using an S-4800 SEM (Japan) under 10 kV accelerating voltage. PZMP4 was dissolved in distilled water, dropped on the surface of a mica carrier, and then dried at 70 °C under ambient pressure [8]. The AFM images were taken with an Agilent 5500 atomic force microscope (USA) in tapping mode [7].
Results and discussion
Preliminary PZMP4 characterization

ZMP was isolated from Z. jujuba cv. Muzao fruit by ultrasonic-assisted extraction, ethanol precipitation, deproteination, dialysis, and lyophilization. Further purification was then carried out on a DEAE-Sepharose Fast Flow column (2.6 cm × 100 cm), eluted with phosphate-buffered saline (20 mM, pH 6.0) and 0.4 M NaCl solution at a flow rate of 1.5 mL/min. Elution was monitored by the phenol-sulfuric acid method, and the collected fraction was then passed through a Sephacryl S-300 column (2.6 cm × 100 cm) with deionized water for further purification. As a result, a single elution peak, named PZMP4, was obtained. The total carbohydrate content of PZMP4 was found to be 92.64%. The protein and total phenol contents of PZMP4 were 3.09% and 0.95%, respectively, higher than those of the acidic polysaccharide PZMP2-2 from Z. jujuba cv. Muzao [8].
As shown in Fig. 1A, HPGPC revealed that the acidic polysaccharide PZMP4 was homogeneous, showing only one symmetrical absorption peak. A standard curve relating the logarithm of the relative molecular weight to the elution time (t) was created using a series of dextran standards, expressed as follows: lg Mw = −0.3259t + 10.9495 (R 2 = 99.57%). On the basis of this equation, the average molecular weight of PZMP4 was estimated to be 27.90 kDa, with a retention time of 19.983 min. This acidic jujube polysaccharide fraction had a molecular weight similar to that of HJP-4 (a Z. jujuba cv. Hamidazao polysaccharide) [1].
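As a quick check of this calibration, the short sketch below simply applies the published equation to the reported retention time; because the coefficients are quoted with limited precision, it returns roughly 27-28 kDa rather than exactly 27.90 kDa.

```python
# Apply the published HPGPC calibration: lg(Mw) = -0.3259 * t + 10.9495
# t is the retention time in minutes; Mw comes out in Da.
t = 19.983                      # retention time of PZMP4 (min)
lg_mw = -0.3259 * t + 10.9495   # base-10 logarithm of the molecular weight
mw_kda = 10 ** lg_mw / 1000.0   # convert Da to kDa
print(f"estimated Mw = {mw_kda:.1f} kDa")  # ~27.4 kDa with these rounded coefficients
```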
GC was used to determine the monosaccharide composition of PZMP4. The hydrolysate of PZMP4 consisted of six monosaccharides: rhamnose, arabinose, mannose, glucose, galactose, and galacturonic acid (Fig. 1B). Their ratio was 2.32:2.21:0.22:0.88:2.08:8.83, suggesting that PZMP4 is an acidic heteropolysaccharide. Furthermore, the results indicated that rhamnose, arabinose, and galacturonic acid accounted for the majority of the total polysaccharide content. However, the result differed from that of the Z. jujuba cv. Hamidazao polysaccharide fractions; this may be caused by different raw materials, as well as different extraction and purification methods [1,5].
FT-IR spectrum analysis
The functional groups and chemical bonds of PZMP4 were further analyzed by FT-IR (Additional file 1: Fig. S1A). The O-H stretching vibration and the C-H stretching vibration were represented by distinct bands in the 3411 and 2937 cm −1 regions, respectively. The absorbances at 1741 and 1244 cm −1 demonstrated the existence of uronic acid, which corroborated the results of the uronic acid assay [8,17]. The symmetrical C=O stretching vibrations were indicated by the strong peak at 1610 cm −1 [18]. The 1415 cm −1 peak represented the characteristic absorption of C-H bands, while the 1099 cm −1 peak indicated the C-O stretching vibrations of the pyranose form [8]. The weak characteristic absorptions at 800-900 cm −1 may indicate the existence of α- and β-configurations [19]. Consequently, the FT-IR spectrum of PZMP4 showed the absorption peaks typical of plant polysaccharides.
Methylation analysis
PZMP4 was methylated and then hydrolyzed with trifluoroacetic acid. The resulting partially methylated alditol acetates were examined by gas chromatography-mass spectrometry (GC-MS), and the results are summarized in Table 1.
NMR analysis
To determine the detailed structure of PZMP4, one-dimensional and two-dimensional NMR spectra were used for further study. The C/H chemical shifts of several glycosidic bonds were consistent with the previous literature; the data are shown in Fig. 2 and Table 2. The 1 H-NMR spectrum of PZMP4 (Fig. 2A) displays four main anomeric proton signals at δ 4.87, 5.00, 4.82, and 5.00/5.05, which were designated as A, B, C, and D, respectively. H-2, H-3, H-4, and H-5 of the 1,3,4-linked GalpA residues were responsible for the significant peaks in the 3.74-5.06 ppm range. The 13 C NMR spectrum (Fig. 2B) revealed six anomeric signals resonating at 101.89, 70.79/70.93, 79.42, 84.05, 73.41/73.04, and 173.68/173.80 ppm. According to previous results in the literature, the anomeric carbon signals of the tagged residues in the 1 H and 13 C NMR spectra were assigned with reference to the data in the 2D NMR spectra [20,21]. From the chemical shift data in the COSY (Additional file 2: Fig. S2B), NOESY (Additional file 2: Fig. S2C), 1 H/ 1 H TOCSY (Additional file 2: Fig. S2D), HSQC (Additional file 2: Fig. S2E), and HMBC spectra (Additional file 2: Fig. S2F), the proton and carbon assignments of the four main residues in PZMP4 are presented in Table 2.
The chemical shift of the anomeric proton of residue B was determined to be δ H 5.00, with the corresponding anomeric carbon signal at δ C 102.36. The signals at δ C 71.45/δ H 3.91, δ C 70.79/δ H 3.63 (3.66), δ C 71.59/δ H 3.96, δ C 74.21/δ H 4.32, and δ C 178.21 were assigned to C-2, C-3, C-4, C-5, and C-6 of residue B, respectively [22,23]. The anomeric proton of residue C had a chemical shift of 4.82 ppm, whereas its anomeric carbon had a chemical shift of 102.36 ppm. The COSY and TOCSY spectra were used to identify the other protons of residue C. According to HSQC, the other corresponding carbon and hydrogen signals included 84.19 ppm. Based on the NMR data, the chemical shifts of this residue were identical to those of α-1,2-linked Rhap [24,25]. HSQC indicated the other carbon and hydrogen signals at 110.24 (5.00/5.05), 81.61 (4.20), 86.77 (3.94), 83.74 (4.02), and 63.96 (3.62) ppm. From the NMR data, the chemical shifts of this residue were identical to those of α-1,3-linked Galp [26,27].
HMBC, COSY, and NOESY spectra were used to determine the glycosidic linkages between sugar residues. With these techniques, the intra- and inter-residue connections were determined and are listed in Table 2. In the HMBC spectrum, several inter-residual cross-peaks were identified: A C-3 to D H-1, A C-4 to D H-1, A C-4 to B H-1, D C-3 to D H-1, A C-4 to D H-3, A C-3 to C H-2, and A C-1 to D H-1. In addition, in the COSY spectrum, certain inter-residual cross-peaks were also recognized: C/D H-1 to C/D H-2, A/B H-2 to A/B H-3, A/B/C/D H-3 to A/B/C/D H-4, and C/D H-4 to C/D H-5. A/C H-1 to A H-3 cross-peaks were detected in the NOESY spectrum [28].
According to the monosaccharide composition of PZMP4, combined with the results of FT-IR, GC-MS, 1D and 2D NMR analyses, it was determined that PZMP4 was mainly composed of →4)-GalpA-(1→ backbone, with a branching point at the O-3 position consisting of Araf, Rhap, and GalpA residues.
Morphological properties
Different morphological properties are key components that contribute to the complexity of polysaccharide forms. SEM, as a microscopic-molecular-morphology observation technique, is frequently used to characterize the surface morphology of polysaccharides [29].
Amino Acid Sequence Determinants of Extended Spectrum Cephalosporin Hydrolysis by the Class C P99 β-Lactamase*
Class C β-lactamases are commonly encoded on the chromosome of Gram-negative bacterial species. Mutations leading to increased expression of these enzymes are a common cause of resistance to many cephalosporins, including extended spectrum cephalosporins. Recent reports of plasmid- and integron-encoded class C β-lactamases are a cause for concern because these enzymes are likely to spread horizontally to susceptible strains. Because of their increasing clinical significance, it is critical to identify the determinants of catalysis and substrate specificity of these enzymes. For this purpose, the codons of a set of 21 amino acid residues that encompass the active site region of the P99 β-lactamase were individually randomized to create libraries containing all possible amino acid substitutions. The amino acid sequence requirements for the hydrolysis of ceftazidime, an extended spectrum cephalosporin commonly used to treat serious infections, were determined by selecting resistant mutants from each of the 21 libraries. DNA sequencing identified the residue positions that are critical for ceftazidime hydrolysis. In addition, it was found that certain amino acid substitutions in the Ω-loop region of the P99 enzyme result in increased ceftazidime hydrolysis, suggesting the loop is an important determinant of substrate specificity.
β-Lactam antibiotics such as the penicillins and cephalosporins are among the most often prescribed antimicrobial agents. These antibiotics act by inhibiting transpeptidase enzymes (also called penicillin-binding proteins or PBPs) that are essential for the synthesis of the peptidoglycan layer of the bacterial cell wall (1). Inhibition of peptidoglycan synthesis results in death of growing bacteria and accounts for the antimicrobial effect of β-lactam antibiotics. In response, bacteria have evolved defense mechanisms to resist the lethal effects of these drugs (2). Due to widespread β-lactam antimicrobial use, bacterial resistance has increased and now represents a serious threat to human health (3).
The most common mechanism of bacterial resistance to β-lactam antibiotics is the synthesis of β-lactamases that cleave the amide bond in the β-lactam ring to generate ineffective products (4). Based on primary amino acid sequence homology, β-lactamases have been grouped into four classes. Classes A, C, and D are active site serine enzymes that catalyze the hydrolysis of the β-lactam antibiotic via a serine-bound acyl intermediate (5). Class B enzymes require zinc for activity, and catalysis does not proceed via a covalent intermediate (6). The active site serine β-lactamases belong to a larger family of penicillin-recognizing enzymes that includes the PBPs that cross-link bacterial cell walls (7). All of these enzymes contain the active site serine as well as a conserved triad of K(S/T)G located between the active site serine and the C terminus (7). X-ray structure analysis of several class A enzymes, three class C enzymes, and two PBPs indicates that these enzymes have a similar three-dimensional structure, particularly around the active site, suggesting a common evolutionary origin for the penicillin-recognizing enzymes (1). The structures of several class B enzymes confirm the lack of similarity with the serine β-lactamases and PBPs and indicate an independent evolutionary origin for these enzymes (8-11).
Class C β-lactamases (also named AmpC) are most commonly encoded on the chromosome of Gram-negative bacterial species and are inducible (4). Derepression of the gene leads to class C enzyme production and resistance to most cephalosporins, including extended spectrum cephalosporins commonly used to treat serious infections (4). The high level of resistance to cephalosporins is due to the fact that class C β-lactamases hydrolyze cephalosporins very efficiently (12). In contrast to the class A enzymes, class C enzymes are not inhibited by the mechanism-based inhibitor clavulanic acid (4).
In recent years, plasmid-encoded class C β-lactamases have been identified in several Gram-negative species (13-15). The increase in clinical importance of class C enzymes, and their ability to hydrolyze extended spectrum cephalosporins and α-methoxy-β-lactams such as cefoxitin, has led to an increased interest in the structure and function of these enzymes.
The three-dimensional structures of the class C enzymes from Citrobacter freundii, Enterobacter cloacae P99, and Escherichia coli have been determined (16-18). As described above, the class A and class C enzymes have a similar fold and contain conserved amino acids that act similarly in catalysis (18). However, there are also many differences in the active sites, and importantly, the molecular basis responsible for the differences in substrate specificity between the classes has not been resolved. Therefore, it is of interest to identify the determinants of substrate hydrolysis for a number of β-lactam antibiotics. We present the results of analysis of the determinants of substrate hydrolysis of the P99 β-lactamase for the clinically important antibiotic ceftazidime.
EXPERIMENTAL PROCEDURES
Materials-Chloramphenicol and cephaloridine were purchased from Sigma. Ceftazidime was a gift from Glaxo Wellcome. All enzymes were purchased from New England Biolabs except for T7 DNA polymerase, which was purchased from U. S. Biochemical Corp. Oligonucleotide primers were custom-synthesized by Integrated DNA Technologies. E-test strips for antibiotic susceptibility testing were purchased from AB Biodisc. SP-Sepharose and G-75 gel filtration columns were purchased from Amersham Pharmacia Biotech.
Construction of Random Libraries-A construct containing the wild-type P99 gene was cloned into the pGR32 plasmid as a SacI-XbaI DNA fragment using the P99-top and P99-bot primers used for the random library constructions described below. The template used for the original PCR was the plasmid pHU354, which contains the wild-type P99 gene and was provided by A. Dubus (22). The resulting plasmid construct was named pYY12. The pYY12 plasmid was used as template for all library constructions. Individual codons of the E. cloacae P99 β-lactamase gene were randomized by overlap extension PCR as described previously (23). The two outside primers, P99-top 5′-CCGCGCGAGCTCCGTTTGTCAGGCACAGTCAAATC-3′ and P99-bot 5′-CCCCCCTCTAGACCCGGCAATGTTTTACTGTAGCG-3′, were used in conjunction with overlapping primers that were designed to randomize individual codons to create a PCR product containing the P99 gene with a randomized codon. The PCR fragment was digested with the XbaI and SacI restriction enzymes and ligated into the pGR32 vector that had been digested with XbaI and SacI (19). The ligation reaction was electroporated into E. coli XL1-Blue cells. The cells were incubated at 37°C for 1 h and spread on LB agar plates containing 12.5 μg/ml chloramphenicol (LB-CMP). The plates were incubated at 37°C overnight, and the colonies were then pooled and stored at −80°C in 15% glycerol. Each library consisted of a minimum of 10,000 pooled colonies. Therefore, each library has a greater than 99% probability of containing all possible sequences for the codon randomized (24).
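The >99% coverage figure follows from a simple sampling argument; the sketch below is a back-of-the-envelope check, not code from the study, and it assumes each of the 64 possible codons is equally likely at the randomized position.

```python
# Probability that 10,000 uniformly random clones cover all 64 codons
# at a single randomized position (inclusion-exclusion over missed codons).
from math import comb

n_clones, n_codons = 10_000, 64
p_all_covered = sum(
    (-1) ** j * comb(n_codons, j) * ((n_codons - j) / n_codons) ** n_clones
    for j in range(n_codons + 1)
)
print(f"P(all 64 codons present) = {p_all_covered:.10f}")  # effectively 1.0
```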
Selection of Functional Random Mutants Based on Ceftazidime Resistance-The cells from each library were diluted into LB medium and spread on LB agar plates containing 0.5 μg/ml ceftazidime (LB-CAZ). The selections were performed in the absence of isopropyl-1-thio-β-D-galactopyranoside so that the β-lactamase gene was expressed at the low, constitutive level of the tac promoter (20). As a control, the library was also spread on LB-CMP plates. The plates were incubated at 37°C overnight. The colonies were counted, and the P99 gene was amplified from 20 clones by colony PCR. The clones were sequenced directly from the PCR product by cycle DNA sequencing. The sequences were determined using an ABI 377 automated DNA sequencing instrument.
Minimum Inhibitory Concentration (MIC) Measurements-The MIC for ceftazidime was determined for each of the clones that were selected for DNA sequencing. The clones were inoculated into 5 ml of LB containing 12.5 μg/ml chloramphenicol. The culture was grown at 37°C overnight and diluted to an A 600 of 0.3. A total of 100 μl of each diluted culture was spread on an LB agar plate. The plates were allowed to dry, and then an E-test strip embedded with ceftazidime was applied. The MIC was read after overnight incubation at 37°C. An E. coli strain containing the wild-type P99 gene was used as a control for all MIC measurements.
Expression and Purification of P99 Enzymes-The wild-type P99 enzyme as well as mutant derivatives were expressed and purified for the determination of kinetic parameters. The pYY12 plasmid described above was transformed into E. coli RB791 cells for large scale expression (20). A 1-liter culture was grown at 37°C until the A 600 reached 0.5 and then induced with 0.1 mM isopropyl-1-thio-β-D-galactopyranoside. The culture was then incubated overnight at 25°C with shaking. The cells were collected by centrifugation and resuspended in 20 ml of sucrose buffer. The cell debris was removed by centrifugation, and the supernatant was dialyzed against 2 × 2 liters of 25 mM MES (pH 6.2). The protein lysate was fractionated on an SP-Sepharose column and eluted with a 0.5 M NaCl gradient. The enzyme was further purified using a G-75 gel filtration column in 25 mM phosphate buffer (pH 7.0). The purity of the final preparation was higher than 90% based on SDS-polyacrylamide gel electrophoresis.
Determination of Enzyme Kinetic Parameters-Enzyme kinetics measurements were carried out using a Beckman DU-40 UV spectrophotometer as described previously (26). A reaction mixture containing the antibiotic substrate and 1 mg/ml bovine serum albumin in 50 mM phosphate buffer, pH 7.0, was incubated at 30°C for 5 min in a total volume of 0.5 ml. A total of 5 μl of a β-lactamase stock solution was added, and the initial reaction velocity was measured by the change in UV absorbance. For ceftazidime, the initial reaction velocity was calculated from the first 5 min of the reaction because ceftazidime is a poor substrate. Cephalosporin C hydrolysis was monitored at 280 nm, whereas ceftazidime hydrolysis was measured at 260 nm. The changes in the extinction coefficients of cephalosporin C and ceftazidime are 2390 and 8660 M −1 cm −1 , respectively. The k cat and K m values were calculated from the initial velocity data using a non-linear regression fit.
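To make the data flow concrete, the sketch below converts an absorbance trace to an initial velocity via the Beer-Lambert relation (v = |ΔA/Δt| / (Δε·l)) and then fits the Michaelis-Menten equation by non-linear regression; the absorbance slope and the substrate/rate series are invented placeholders, not the measured values from this study.

```python
# Initial velocity from an absorbance trace, then a Michaelis-Menten fit.
# All numerical inputs here are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

d_eps = 8660.0          # Δε for ceftazidime hydrolysis at 260 nm (M^-1 cm^-1)
slope = -0.0026         # hypothetical ΔA/Δt in absorbance units per second
path  = 1.0             # cuvette path length (cm)
v0 = abs(slope) / (d_eps * path)   # initial velocity in M/s
print(f"v0 = {v0:.3e} M/s")

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s_conc = np.array([25e-6, 50e-6, 100e-6, 250e-6, 500e-6, 1e-3])      # substrate (M)
v_obs  = np.array([0.8e-7, 1.4e-7, 2.2e-7, 3.3e-7, 4.0e-7, 4.5e-7])  # rates (M/s)
(vmax, km), _ = curve_fit(michaelis_menten, s_conc, v_obs, p0=(5e-7, 1e-4))
print(f"Vmax = {vmax:.2e} M/s, Km = {km:.2e} M")
```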
Alignment of Class C β-Lactamases-The β-lactamase sequences used for the alignment to generate the data for Fig. 5 were chosen such that none of the sequences had 90% or greater identity to any other sequence in the alignment. The sequences, the GenBank TM accession numbers, and, if available, the reference numbers are as follows: (39), Lysobacter lactamgenus (S54103), and Acinetobacter baumannii (CAB77444) (40). The alignment was generated using the pile-up program of the Wisconsin Package version 10.0, Genetics Computer Group (GCG), Madison, WI.
Systematic Randomization of Amino Acids in the Active Site of Class C β-Lactamase P99-As described above, derepression of class C β-lactamase synthesis is a widespread source of resistance of Gram-negative bacteria to extended spectrum cephalosporins such as cefotaxime and ceftazidime. Because of the significant role of P99 and highly related class C β-lactamases in antibiotic resistance, it is of interest to understand how the amino acid sequence of the enzyme determines its structure and activity. For this reason, we determined the amino acid sequence requirements for the hydrolysis of ceftazidime for a set of 21 residues that encompass the active site and substrate-binding pocket of the P99 β-lactamase (Fig. 1). Although the 21 residues under study are all in the vicinity of the active site, not all of these residues are likely to contribute equally to the structure and function of the enzyme. Some residue positions are likely to be essential, and therefore substitutions at these positions will result in a non-functional enzyme. However, other residue positions may be less important, and thus substitutions at these positions will be more freely tolerated. The location of essential residues identifies those positions in the amino acid sequence that are the most important determinants of P99 β-lactamase structure and activity.
The tolerance of each residue to amino acid substitutions was determined using saturation mutagenesis followed by a functional selection for active mutants (24,25). The strategy consists of randomizing the DNA sequence of a single codon to create a random library containing all possible amino acid substitutions for the position randomized. The active site region of the enzyme was randomized in a set of 21 random libraries (Fig. 1). Each of the 21 random libraries was used to transform E. coli, and functional mutants were selected by spreading the transformed cells on agar plates containing 0.5 μg/ml ceftazidime. This is the maximal concentration on which E. coli containing the wild-type P99 gene on the plasmid used to construct the random libraries can grow on agar plates after transformation. Thus, phenotypically wild-type mutants were selected from each of the libraries. It should be noted that this concentration is not the same as the minimal inhibitory concentration (MIC) for ceftazidime of an E. coli strain containing the wild-type P99 gene, which is 3 μg/ml as determined using E-test strips (Table I; see under "Experimental Procedures"). The difference is most likely due to an inoculum effect whereby fewer plasmid-containing cells are spread on agar plates after transformation than in the E-test strip experiment. The DNA sequence of at least 10 functional mutants selected from each library was determined (Fig. 2). In addition, to confirm that the selected mutants exhibited resistance levels similar to wild type, the minimal inhibitory concentration was determined for each non-wild-type mutant that was sequenced (Table I). The ceftazidime MIC values of the mutants ranged from approximately 6-fold lower (0.5 μg/ml) to 21-fold higher (64 μg/ml) than the MIC of E. coli containing wild-type P99. Furthermore, the average MIC for all of the mutants was found to be approximately 2-fold greater (6.75 μg/ml) than wild type. Therefore, the selection for ceftazidime resistance on agar plates is stringent enough to identify mutants with phenotypically wild-type activity.
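The fold differences quoted above follow directly from the MIC values; a trivial check, using only the numbers stated in the text, is sketched below.

```python
# Fold differences between mutant MICs and the wild-type MIC (all values in ug/ml).
wt_mic = 3.0                                  # wild-type P99 in E. coli (E-test)
lowest, highest, average = 0.5, 64.0, 6.75    # values quoted in the text

print(f"lowest mutant MIC:  {wt_mic / lowest:.1f}-fold below wild type")   # ~6-fold
print(f"highest mutant MIC: {highest / wt_mic:.1f}-fold above wild type")  # ~21-fold
print(f"average mutant MIC: {average / wt_mic:.2f}-fold above wild type")  # ~2-fold
```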
In addition to the clones selected for ceftazidime resistance, several colonies were sequenced from the naive library that had not been selected on a β-lactam antibiotic. This control was performed to ensure that the appropriate codons were mutagenized in the starting libraries. At least eight clones from each library were sequenced, and the results are presented in Fig. 2. Although eight clones is too few to rigorously prove that the libraries are completely random, it is enough to demonstrate that the correct codon was mutagenized and that a diverse collection of mutants is represented in each starting library. The naive library sequences are also informative when compared with the sequences from the clones selected for ceftazidime resistance. For example, sequencing of the Glu 272 naive library revealed a diverse collection of sequences (Fig. 2). After selection for ceftazidime resistance, however, only glutamate codons were present among the sequenced clones. This strongly suggests that glutamate is the only amino acid capable of providing wild-type levels of function at position 272. Based on this result, Glu 272 is interpreted to be essential for the structure and function of the enzyme. Note, however, that this result does not indicate whether Glu 272 is essential for substrate binding and catalysis. It is possible, for instance, that the glutamate residue is essential for the structural integrity of the enzyme.
In contrast to residue Glu 272 , a diverse set of amino acids was found at position Leu 293 both before and after the selection for ceftazidime resistance. This result suggests Leu 293 is not critical for the structure and function of the enzyme. Note that the Arg 349 library appears from the sequencing of the unselected clones to be biased toward the wild-type arginine codon (Fig. 2). Nevertheless, useful information could still be obtained about this position due to the different spectrum of substitutions among the ceftazidime-selected clones. Despite the bias toward arginine in the starting library, the lack of lysine in the starting library combined with the multiple occurrences of lysine in the selected library indicates that a positive charge is important at position 349 (Fig. 2).
The sequencing results from the naive and ceftazidime-selected libraries were used to place the 21 mutagenized positions into four classes (Fig. 1). First, a position is considered essential if all of the sequences of clones selected for ceftazidime resistance are the wild-type amino acids. Second, the position is considered wild-type predominant if the majority of the selected clones have the wild-type sequence but some conservative substitutions are also found. Third, the position is classified as non-wild-type predominant if the sequences of the selected clones are strongly biased toward a non-wild-type amino acid. Finally, a position is considered unimportant if there is not a strong bias toward any amino acid among the selected clones.

[Fig. 1 legend: Gly 317 is not shown in the figure because it is located immediately behind Thr 316 and cannot be seen from this view. Residues that tolerate substitutions but with the wild-type residue predominating among functional mutants are colored yellow. Residues that tolerate substitutions but with a non-wild-type residue predominating among functional mutants are colored blue. Residues that exhibit no strong preference for any amino acid type are colored green. The figure was made using the Molscript program (59).]
In total, nine positions were found to contain only the wild-type amino acid after selection and are essential for hydrolysis of ceftazidime by the P99 enzyme. These positions include residues Ser 64 , Lys 67 , Tyr 150 , Asn 152 , and Lys 315 , which are believed to be involved in catalysis based on previous mutagenesis and x-ray crystallographic studies (18, 41-44). In addition, residues Thr 316 , Gly 317 , and Ser 318 are classified as essential (Fig. 2). These residues reside on the β-strand that forms a wall of the active site pocket of the P99 enzyme (18) (Fig. 1). The Thr 316 side chain hydroxyl group may interact with the free carboxyl group of β-lactam antibiotics either directly or through a bridging water molecule (44,45). Previous mutagenesis results suggest that the proposed interaction is most important for binding and hydrolysis of cephalosporin antibiotics (45). The finding that Thr 316 is essential for hydrolysis of the cephalosporin ceftazidime is consistent with the previous result. It is unclear, however, why a serine residue does not function at this position.
The strict conservation of glycine among functional mutants at position 317 is explained by the fact that any other residue at this position would, for steric reasons, interfere with the binding of substrate (18). A similar result was obtained after randomization and functional selection of the analogous residue in the class A TEM-1 β-lactamase (25). The main chain amide nitrogen of Ser 318 forms the putative "oxyanion" hole of class C β-lactamases (16-18). However, it is unclear why there is a strict conservation of the serine side chain at position 318 among functional mutants. The hydroxyl group of the analogous residue (Thr 301 ) in the Streptomyces R61 carboxypeptidase has been observed to interact with the NH group of the side chain amide of cephalothin or the carboxyl group of the dihydrothiazine ring of cefotaxime (46). By analogy, the hydroxyl group of Ser 318 in the P99 enzyme may interact with a hydrogen-bonding group from the side chain of ceftazidime.
The Glu 272 position was also found to be essential for ceftazidime hydrolysis (Fig. 2). The Glu 272 side chain constitutes a wall of a channel at the back of the active site where it takes part in a hydrogen-bonding network with His 314 and Lys 315 (18). Previous mutagenesis and enzyme kinetics results suggest Glu 272 is not important for acylation but may contribute to the deacylation process (41). These results are consistent with such a role. However, the exact role of the channel and Glu 272 residue remains to be determined.
At four of the 21 positions randomized, the wild-type residue was the most prevalent among sequenced functional random mutants, but other residues were also observed. These positions include Leu 119 , Gln 120 , Thr 319 , and Arg 349 (Fig. 2). The Leu 119 and Gln 120 residues form a wall of the active site where they can participate in substrate binding. The structure of the E. coli AmpC enzyme in complex with a boronic acid inhibitor containing the side chain of cephalothin indicates a hydrogen bond between the side chain of Gln 120 and the carbonyl oxygen of the amide group of cephalothin (47). In contrast, in the structure with a boronic acid inhibitor containing the side chain of cloxacillin, the side chain of Gln 120 is rotated away from the amide (47). Therefore, position 120 is quite versatile for substrate binding. Position 120 is also of interest because of the high percentage of lysine and arginine substitutions among the functional mutants. This result suggests that lysine or arginine substitutions at position 120 may increase ceftazidime hydrolysis. Consistent with this hypothesis is the finding that an E. coli strain containing the P99 enzyme with the lysine substitution is somewhat more resistant to ceftazidime than that containing the wild-type P99 enzyme (Table I).
Residue Thr 319 lies on the opposite side of the active site pocket from Leu 119 and Gln 120 and is the first residue after the essential β3 strand (Fig. 3). Thr 319 forms part of a binding pocket that could directly interact with the side chain of ceftazidime and other cephalosporins (16). Consistent with this role, the threonine side chain is strongly conserved among functional mutants at position 319.
In class A β-lactamases, the Arg 244 residue is thought to assist in substrate binding and turnover by interacting with the carboxyl group on C-3 of penicillins or C-4 of cephalosporins (48). An exact counterpart does not exist in the P99 β-lactamase, but Asn 346 and Arg 349 are in an equivalent region of the binding site (18). It has been noted that Arg 349 is not properly oriented for interaction with substrate and that Asn 346 might be the better positional counterpart of Arg 244 (18). The functional selection results indicate that a hydrogen-bonding group is not required at position 346, and thus Asn 346 is not functionally equivalent to Arg 244 of the class A enzymes.

[Fig. 2 legend: Summary of sequencing results from P99 β-lactamase random library selections. A, the set of amino acids identified from naive random libraries. These clones were randomly chosen from the library without selecting for β-lactam resistance. The different amino acids identified by DNA sequencing are listed below the wild-type sequence, which is shown in bold. The superscript number indicates the number of occurrences of an amino acid type among the sequenced clones. B, the set of amino acids identified among clones selected for resistance to 0.5 μg/ml ceftazidime.]

To confirm that a residue that does not have hydrogen-bonding potential can efficiently function at position 346, P99 enzymes containing either the Asn 346 → Ala or the Asn 346 → Ile substitution were purified to homogeneity, and kinetic parameters were determined for cephalosporin C and ceftazidime hydrolysis (Tables II and III). Cephalosporin C has been shown to be an excellent substrate for the wild-type P99 enzyme (12), and it is also an excellent substrate for the Asn 346 → Ala and Asn 346 → Ile enzymes. The catalytic efficiency of the Asn 346 → Ala enzyme for ceftazidime hydrolysis was also similar to that of the wild-type enzyme (Table III). The catalytic efficiency of the Asn 346 → Ile enzyme is likewise similar to wild type, but its k cat and K m values are approximately 8- and 6-fold higher, respectively, than those of the wild-type enzyme. Interestingly, the Asn 346 → Ile mutant also exhibits significantly higher levels of resistance to ceftazidime than wild type (Table I), suggesting that k cat strongly influences the MIC value. In addition, these results indicate that a side chain with hydrogen-bonding potential is not required at position 346. Therefore, residue 346 is not the functional counterpart of Arg 244 of the class A enzymes.
Ala 220 and Tyr 221 reside at the floor of the active site pocket within the Ω-loop region (residues 189-226) (Fig. 3) (18). At these positions, a non-wild-type residue predominated among the functional mutants selected for ceftazidime resistance (Fig. 2). Serine was the most common substitution at position 220, whereas alanine predominated at position 221. The fact that a non-wild-type residue predominates among functional mutants suggests the substitution results in increased ceftazidime hydrolysis. Consistent with this hypothesis, E. coli strains containing the Tyr 221 → Ala or Gly mutants exhibit significantly higher levels of resistance to ceftazidime than wild-type P99 (Table I). To confirm that substitutions at position 221 increase catalytic efficiency, the Tyr 221 → Gly enzyme was purified to homogeneity, and kinetic parameters were determined for cephalosporin C and ceftazidime hydrolysis (Tables II and III). Large increases in K m were observed for both cephalosporin C and ceftazidime. The k cat value, however, was decreased 8-fold for cephalosporin C but increased 800-fold for ceftazidime. The net result was a large decrease in k cat /K m for cephalosporin C but a large increase in catalytic efficiency for ceftazidime hydrolysis. Therefore, a non-wild-type residue was selected at position 221 because the wild-type amino acid is not the optimal residue for ceftazidime hydrolysis.
The remaining six residues, Arg 204 , Asp 217 , Ser 289 , Leu 293 , Ser 343 , and Asn 346 , did not exhibit a strong bias toward any specific amino acid among the mutants selected for ceftazidime resistance (Fig. 2). All of these residues are on the periphery of the active site pocket (18). The results suggest these residues are not critical for binding or hydrolysis of extended spectrum cephalosporins such as ceftazidime. It has been noted that the Asp 217 residue on the Ω-loop structure is the only possible counterpart to the deacylation residue Glu 166 of class A enzymes (18). Our results are consistent with previous mutagenesis studies indicating the residue does not play an important role in deacylation (49). In addition, the finding that Ser 289 can be substituted and retain function is consistent with a recent study demonstrating that the kinetic parameters of substituted enzymes are not greatly different from wild type for several cephalosporin substrates (50).
The Leu 293 position has not been mutagenized previously, but the results presented in Fig. 2 indicate it is not important for ceftazidime hydrolysis. To ensure that the mutants selected from the Leu 293 library do, in fact, function similarly to the wild-type enzyme, the Leu 293 → Cys enzyme was purified, and kinetic parameters for cephaloridine and ceftazidime hydrolysis were determined (Tables II and III). The kinetic parameters for hydrolysis of both substrates were similar to those obtained for the wild-type enzyme, indicating that the mutants selected for ceftazidime resistance from the Leu 293 library exhibit catalytic properties similar to those of the wild-type enzyme.
Sequence Conservation in Evolution Versus Sequence Conservation among Functional Mutants-Sequence alignments of members of a gene family indicate the evolutionary sequence conservation of an amino acid residue position. This information is very useful in determining whether a residue is critical for the structure and function of a protein. In the experiments presented above, an indication of sequence conservation is provided by randomization of a position followed by selection and sequencing of functional random mutants. Is the information gained from these approaches the same? To answer this question, a set of 18 class C β-lactamases was obtained from sequence data bases and used to generate a sequence alignment. Sequences with 90% or greater identity were excluded
from the alignment. A comparison of sequence conservation based on the gene family alignment versus the functional selection data is shown in Fig. 4. There is close agreement for many of the residues, including Ser 64 , Lys 67 , Tyr 150 , Asn 152 , Glu 272 , Lys 315 , Thr 316 , Gly 317 , and Ser 318 . Many of these residues are directly involved in catalysis, and thus it is not surprising that they are strongly conserved both in the gene family and in the functional selection experiments. At several residue positions, a wider spectrum of substitutions was observed for the ceftazidime functional selection than in the gene family. These residues include Leu 119 , Gln 120 , Arg 204 , Tyr 221 , Leu 293 , Thr 319 , and Arg 349 . There are two possible explanations for this observation. First, the ceftazidime selection may not be sufficiently stringent, enabling mutants with partial function to be selected. This would lead to a wider spectrum of observed substitutions but would not be indicative of the actual tolerance of the position to substitutions. The purification and characterization of the Leu 293 → Cys substitution was performed to address this possibility. The finding that this enzyme exhibits catalytic properties similar to wild-type P99 suggests the ceftazidime selection is stringent. The MIC data for all of the selected mutants (Table I) are also consistent with a high stringency for the ceftazidime selection. A more likely explanation is that the class C β-lactamase family members have been under a more diverse selective pressure than simple ceftazidime resistance. Bacteria containing enzymes of the class C family have likely been under selective pressure for resistance to a number of different cephalosporins. It is known that hydrolysis of cephaloridine and cephalosporin C by the P99 enzyme is a diffusion-controlled reaction (12). The fact that the enzyme has achieved catalytic perfection for these substrates suggests that these or similar substrates have provided the selection pressure and thereby directed the evolution of the P99 enzyme (12). In contrast, ceftazidime and other extended spectrum cephalosporins are relatively poor substrates for the P99 enzyme (51). Therefore, the active site of the P99 enzyme has been highly optimized for the hydrolysis of cephalosporin C but not ceftazidime. A highly optimized active site would be more sensitive to substitution, and consequently, a wide range of substitutions at a position may not be consistent with cephalosporin C hydrolysis. However, because the P99 active site is not optimized for ceftazidime hydrolysis, many substitutions may be allowed, and in fact, several may result in increased ceftazidime hydrolysis.
Ω-Loop Substitutions That Alter Class C β-Lactamase Substrate Specificity-The most striking difference between sequence conservation in the gene family versus the ceftazidime selection is at Tyr 221 , which is located in the active site Ω-loop structure (Fig. 3). Tyrosine is conserved at position 221 among all of the class C β-lactamases. However, tyrosine was never observed among 19 ceftazidime-resistant clones selected from the Tyr 221 library. Instead, alanine was the predominant amino acid among the functional mutants. MIC measurements indicated that the Tyr 221 → Ala and Tyr 221 → Gly mutants were significantly more resistant to ceftazidime than E. coli containing the wild-type P99 gene. The Tyr 221 → Gly enzyme exhibits strikingly different catalytic characteristics from the wild-type P99 enzyme (Table II). The values of k cat and K m with ceftazidime as a substrate are 800- and 60-fold higher, respectively, than those of the P99 enzyme. The enzyme therefore has a catalytic efficiency (k cat /K m ) for ceftazidime hydrolysis that is 18-fold higher than that of the wild-type enzyme. The k cat /K m value is an apparent second-order rate constant for the reaction of free enzyme and substrate to give enzyme and product (52). This value has been shown to correlate strongly with MIC values, and therefore, the higher k cat /K m value is consistent with the higher ceftazidime resistance of the mutant (53). The deacylation process is rate-limiting for hydrolysis of oxyimino β-lactams such as ceftazidime by class C β-lactamases (51,54). Hence, the large increase in k cat may reflect an increase in the rate of deacylation. The increase in k cat is balanced somewhat by the large increase in K m . Because of the mechanism of class C β-lactamases, however, an increase in K m does not necessarily indicate less efficient substrate binding. When deacylation is the rate-limiting step, an increased rate of deacylation will also increase the value of K m (54).
Extended spectrum cephalosporins such as ceftazidime are poor substrates for both class A and class C β-lactamases. The evolution of resistance to ceftazidime has occurred via amino acid substitutions in the class A TEM-1 and SHV-1 β-lactamases (55). A number of substitutions have been identified in enzymes from resistant clinical isolates, and these substitutions have been found to increase k cat and lower K m for ceftazidime (56). In contrast, only a single natural isolate with increased ceftazidime resistance due to changes in the P99 class C β-lactamase has been identified (57). This enzyme contains an insertion of three residues after position 207 in the Ω-loop of the P99 β-lactamase. Replacement of the 3-residue insertion with 1-4 alanine residues indicated that it is the insertion of amino acids, and not the identity of the amino acids, that is critical for changing the specificity of the enzyme (51). It is of interest that the insertion has a similar effect on the enzyme kinetic parameters as the Tyr 221 → Gly substitution; both k cat and K m values are greatly increased (51). This suggests that the Tyr 221 → Gly substitution and the 3-residue insertion mutant act via a similar mechanism. The structure of the insertion mutant of the P99 β-lactamase has been solved (58). The insertion leads to a wider opening to the substrate binding cavity and more flexibility in the Ω-loop. It has been suggested that this could facilitate hydrolysis of oxyimino β-lactams by making the acyl-enzyme intermediate more open to attack by water and thereby increasing the rate of deacylation (58). By analogy, the Tyr 221 → Gly substitution may act in a similar manner.

[Fig. 4 legend: Comparison of sequence variability among functional P99 β-lactamase mutants and 18 aligned class C β-lactamases. The residue number is indicated between the horizontal lines. Above the residue positions are the different amino acid residues that appear at these positions in the alignment of class C β-lactamases. The number of occurrences of each type is indicated. The class C sequences used are listed under "Experimental Procedures." Below the residue positions are the different amino acid residues that were identified among the functional random mutants.]
As stated above, only one natural mutant of P99 that leads to increased hydrolysis and thus resistance to extended spectrum cephalosporins has been identified (57). Based on the randomization and selection experiments, mutations that convert Tyr221 to Gly or Ala will lead to greatly increased ceftazidime resistance (Table I). However, these amino acid substitutions can occur only via 2-base pair changes in the coding sequence. Because this is expected to be a rare event, it may explain why mutations within the coding region of class C enzymes such as P99 leading to increased ceftazidime resistance are not commonly observed.
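The codon-level argument can be checked against the standard genetic code. The short sketch below (illustrative only, not part of the original study) confirms that every tyrosine codon differs from every alanine or glycine codon at a minimum of two nucleotide positions.

# Minimal sketch: minimum number of base substitutions needed to convert a
# tyrosine codon into an alanine or glycine codon (standard genetic code).
from itertools import product

CODONS = {
    "Tyr": ["TAT", "TAC"],
    "Ala": ["GCT", "GCC", "GCA", "GCG"],
    "Gly": ["GGT", "GGC", "GGA", "GGG"],
}

def hamming(a, b):
    """Number of positions at which two equal-length codons differ."""
    return sum(x != y for x, y in zip(a, b))

for target in ("Ala", "Gly"):
    dmin = min(hamming(t, c) for t, c in product(CODONS["Tyr"], CODONS[target]))
    print(f"Tyr -> {target}: at least {dmin} base substitutions per codon")
# Both conversions require at least 2 substitutions, consistent with the
# rarity of such double mutations arising in a single step.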
|
2019-03-22T16:12:41.245Z
|
2001-12-07T00:00:00.000
|
{
"year": 2001,
"sha1": "3851ff76bf10859d9b5b87212b64738bf3966c60",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/276/49/46568.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7c8fe3d32362b906cdf07e12d5a4c4399c42863b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
}
|
267274818
|
pes2o/s2orc
|
v3-fos-license
|
A mid-pandemic night's dream: Melatonin, from harbinger of anti-inflammation to mitochondrial savior in acute and long COVID-19 (Review)
Coronavirus disease 2019 (COVID-19), a systemic illness caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has triggered a worldwide pandemic with presentations ranging from asymptomatic to chronic, affecting practically every organ. Melatonin, an ancient antioxidant found in all living organisms, has been suggested as a safe and effective therapeutic option for the treatment of SARS-CoV-2 infection due to its good safety profile and broad-spectrum antiviral properties. Melatonin is essential in various metabolic pathways and governs physiological processes, such as the sleep-wake cycle and circadian rhythms. It exhibits oncostatic, anti-inflammatory, antioxidant and anti-aging properties, showing promise for use in the treatment of numerous disorders, including COVID-19. The preventive and therapeutic effects of melatonin have been widely explored in a number of conditions and are well-established in experimental ischemia/reperfusion investigations, particularly in coronary heart disease and stroke. Clinical research evaluating the use of melatonin in COVID-19 has shown various improved outcomes, including reduced hospitalization durations; however, the trials are small. Melatonin can alleviate mitochondrial dysfunction in COVID-19, improve immune cell function and provide antioxidant protection. However, its therapeutic potential remains underexplored due to funding limitations, and further investigations are therefore required.
Introduction
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the causative agent of the viral disease known as coronavirus disease 2019 (COVID-19). This illness was first identified in December 2019 in Wuhan, China, and has since spread throughout the globe, culminating in a pandemic (1)(2)(3). COVID-19 is a systemic disease that may present in a broad variety of clinical manifestations, ranging from patients who are asymptomatic to those who have significant respiratory symptoms and even conditions that are life-threatening (3)(4)(5). There are several underlying mechanisms and interactions with pre-existing conditions, such as obesity among others, that drive the pathogenesis of the disease, which includes the activation or dysregulation of localized (for example, vascular) and widespread inflammation, ultimately resulting in the failure of several organs and eventually, mortality (2,4,(6)(7)(8)(9)(10)(11)(12)(13)(14)(15)(16).
With the pandemic now past its acute phase, attention is shifting to post-acute sequelae of COVID-19 (PASC), often referred to as 'long COVID', and possible preventative and therapeutic approaches are warranted (17,18). PASC comprises a variety of symptoms and clinical manifestations, which may include persistent tiredness, respiratory symptoms (including dyspnea, cough and chest tightness), joint stiffness, impaired smell and headache, whereas respiratory, cardiovascular, neurological, cognitive, psychiatric and gastrointestinal manifestations continue to be the most common, and potentially gravest, presentations of PASC (17,(19)(20)(21). Recent evidence suggests that a number of these manifestations may be linked to an unfavorable impact of the disease on the mitochondrial function of various tissues and organs (18,22). Considering the numerous mechanisms and pathophysiological processes that spread from the deregulation of the immune system in acute COVID-19 and the potential mitochondrial basis of long COVID, an ideal and efficient therapeutic option could be a molecule which functionally behaves as a 'Swiss Army Knife', such as melatonin (23,24). Indeed, since SARS-CoV-2 was classified as a pandemic, numerous studies have proposed that the use of melatonin should be investigated as a treatment option that is both safe and likely to be effective with regard to treating the infection (17,(25)(26)(27). Its usage is justified not just by its superior safety profile, but also by its innumerable beneficial actions, as already reviewed extensively elsewhere (27)(28)(29)(30), and it has been demonstrated to possess broad-spectrum antiviral drug characteristics (31,32). Moreover, various potentially harmful and costly repurposed medicines, such as colchicine, glucocorticoids, remdesivir and several others, have been advocated for or utilized as therapeutic options (25,27,(33)(34)(35)(36)(37). Additionally, despite their importance, even the presently available vaccinations have major adverse effects on occasion (38,39). Furthermore, as the virus has evolved, the efficiency of the immunizations has been reduced, several strains have already been found, and more are expected to emerge, reducing the efficacy of vaccinations even further (40). All these factors underline the need for further therapeutic options despite the various preventive and already utilized medicinal options.
The present review provides a summary of the features of melatonin that support its use in the treatment and/or prevention of SARS-CoV-2 infection and its complications. It initially presents several actions of melatonin in health and disease, followed by the key pathophysiological mechanisms of COVID-19 and the potential mechanisms through which melatonin may interact with and mitigate them, with a focus on long COVID and the mitochondrial functions of melatonin.
Finally, the results of the available clinical trials examining the use of melatonin in individuals with COVID-19 are summarized, and future steps on further examining the use of melatonin are proposed.
Melatonin in health and disease
Melatonin is a ubiquitous molecule that can be found in all living organisms of the animal kingdom, with traces even found in higher plants, such as in fruits, seeds and leaves. The term 'melatonin' originates from the Greek words 'melas', which means black or dark, and 'tonos', which means color or tune. Melatonin is ultimately used to describe the hormone that is responsible for darkness (41)(42)(43)(44). It has been preserved over the course of evolution, perhaps for these and numerous other additional features, and it is regarded as an evolutionarily old antioxidant, as it has the ability to scavenge free radicals and stimulate antioxidant enzymes (44)(45)(46)(47). Melatonin is primarily synthesized and secreted (predominantly released at night) by the pineal gland via the process of hydroxylation of the essential amino acid tryptophan, whereby tryptophan hydroxylase is responsible for the formation of 5-hydroxytryptophan (42,43,45,(47)(48)(49). Serotonin, also known as 5-hydroxytryptamine, is the neurotransmitter that is produced as a result of this process. Serotonin is the immediate precursor of melatonin (42,43,45,47,48). Other organs, including the retina, kidneys, gastrointestinal system, skin and lymphocytes, produce a modest amount of melatonin (42,43,45,47,48). The role of melatonin in various biosynthetic metabolic pathways is evident, with different species having distinct biosynthetic pathways and genes that encode the enzymes involved in the process of its biosynthesis (42,43,45,47,48). Hydroxyindole-O-methyltransferase, an enzyme that is indirectly controlled by the photo-neural system, is responsible for regulating the production of melatonin (42,43,45,47,48). Melatonin is primarily synthesized at night and is bound to albumin and orosomucoid glycoprotein; by crossing the blood-brain barrier, it is able to reach all tissues in the body and regulate brain function (43,50,51). Melatonin production peaks at 3 months of age and decreases by 80% by the adult stage (43).
Melatonin is primarily considered to govern physiological processes, such as circadian rhythms in humans and the sleep-wake cycle, and it may be used as a natural sleep aid (43,45,(52)(53)(54). It is a pleiotropic hormone that regulates several biological processes, including the release of other hormones, apoptosis and immunological responses (32,49,55,56). The effects of melatonin are mediated in various cells either directly via the G-protein-coupled melatonin receptors type 1 and type 2 (membrane-dependent pathway), or indirectly (membrane-independent) via nuclear orphan receptors of the RAR-related orphan receptor α/Z receptor family, or through other pathways, as extensively reviewed elsewhere (57). The oncostatic, anti-inflammatory and antioxidant characteristics of melatonin indicate that it may have potential use in the treatment of a variety of disorders (32,43,58). Both the preventative and therapeutic benefits of melatonin have been the subject of substantial research in a variety of neurological conditions, including Alzheimer's disease, Parkinson's disease, Huntington's disease, amyotrophic lateral sclerosis, multiple sclerosis and epilepsy (47,(59)(60)(61)(62). In lipopolysaccharide-induced depression, melatonin has been shown to exert antidepressant effects, which are mediated via the regulation of autophagy (63). Additionally, it exhibits anti-aging properties and has the potential for use in the management and treatment of age-related disorders in human beings (55,64,65).
Melatonin has been widely investigated for its anti-proliferative and pro-apoptotic properties on cancer cells, revealing its oncostatic effects. Melatonin also reduces the loss of cells, which is a significant benefit (66,67). In both in vitro and in vivo studies, melatonin has been shown to inhibit the development of tumors through membrane-independent and membrane-dependent mechanisms. Melatonin has an effect on cancer during the initiation phase, such as through DNA repair, as well as in the development, progression and metastasis phases of the tumorigenesis process (66)(67)(68).
Melatonin has potent anti-angiogenic, anti-proliferative and ultimately anti-metastatic properties that may be used in the treatment of a wide range of malignancies, particularly those that have a high risk of cancer spreading to other parts of the body.Additionally, it exerts synergistic effects with conventional therapy, which increases the vulnerability of cancer cells to apoptosis (66)(67)(68).Melatonin significantly reduces the adverse effects of cardiotoxic drugs in patients with cancer and has been shown to have a beneficial effect on coagulopathy (49).Melatonin has been found to improve cardiac function and lower blood pressure in patients who have hypertension, according to clinical data from human studies and various lines of evidence from animal studies, which have been reviewed elsewhere (52,60,(69)(70)(71).Melatonin, a substance that neutralizes free radicals, has been utilized to mitigate the harmful effects of certain chemical compounds, such as methamphetamine (42,50,60,(72)(73)(74).The use of melatonin as a possible anti-viral drug for the treatment of viral illnesses, such as Ebola and COVID-19 has been suggested (27,31,75).As extensively reviewed elsewhere (31), melatonin exhibits a plethora of potential antiviral actions in various viral models (31,75), including the regulation of viral phase separation and epitranscriptomics in long COVID-19 (17).
Melatonin is also a key factor in the regulation of energy homeostasis, which includes the regulation of body weight, insulin sensitivity and glucose tolerance of the body (45,85).It regulates energy metabolism, affecting intake, flow and expenditure in the energy balance, which in turn may be critical for preventing a variety of dysmetabolic conditions, particularly obesity, which in turn can affect the outcome of patients with COVID-19 (11,(86)(87)(88)(89).In addition to this, it synchronizes the needs for energy metabolism with the daily and yearly cyclical environmental photoperiod by means of its chronobiotic and seasonal effects (45,85).In experimental ischemia/reperfusion research, particularly in cases of myocardial infarction and stroke, melatonin has been shown to successfully prevent oxidative damage and the pathophysiological repercussions of such damage are essential (43,82,90,91).Of utmost importance is to further present the free radical scavenging properties of melatonin, as these protect against mitochondrial DNA damage induced by reactive oxygen species (ROS) displaying another of its significant effects on mitochondrial homeostasis (24,92,93).In preclinical studies, the administration of melatonin has been shown to increase the activity of several antioxidant markers/enzymes, including glutathione peroxidase and superoxide dismutase 2 (SOD2).The latter was achieved by promoting the function of sirtuin 3, that deacetylates SOD2, essentially facilitating its activation (24,92,(94)(95)(96)(97). Whether melatonin is present in the mitochondria has been debatable (24,92); however, experimental evidence demonstrates up to 100-fold higher levels of melatonin within the mitochondria post-administration on mitochondrial membranes (98).It appears that the highest concentration of melatonin occurs in the mitochondria, where the highest amount of ROS and oxidative stress occur (99).High amounts of melatonin in the mitochondria may be due to oligopeptide transporters 1/2 or mitochondria generating their own melatonin, with research indicating the existence of such enzymes in brain mitochondria (92,94,(100)(101)(102).The effects of melatonin on mitochondria may be mediated via MT1/2 receptors, resulting in decreased ROS generation, higher antioxidant capabilities, and therefore, in less neural apoptosis, activating nuclear factor erythroid 2-related factor 2, as shown in preclinical models (24,92,103,104).Melatonin additionally prevents stress-induced cytochrome c release from mitochondrial outer membranes (100).Finally, melatonin appears to increase classes of oxidative phosphorylation (OXPHOS) proteins, thereby preventing damage (105).All these mitochondria-related features of melatonin are of key relevance, apart from the acute phase of COVID-19, which is strongly associated with oxidative stress, but also long COVID, which will be discussed in the following section.Based on novel data, melatonin is related to the mitochondrial dysfunction/downregulation of vital mitochondrial markers.The physiology of melatonin is summarized in the schematic diagram in Fig. 1.
Pathophysiology and long-term repercussions of COVID-19
Although individuals with COVID-19 often have modest symptoms, 20% develop substantial to severe illness that requires hospitalization (106). The most common manifestations involve respiratory system abnormalities; however, several other organs may also be affected (3,7,(10)(11)(12)33,34). The features of the host, viral dynamics and immune response are associated with the severity of the disease and, in general, severe COVID-19, as well as a higher mortality rate, are linked to an older age, a high body mass index, and comorbidities such as cardiovascular diseases, diabetes or cancer (3,8,10,11,87,107,108).
The pathophysiological symptoms of COVID-19 are partly mediated by the cell entrance of the virus, which is enhanced by the binding of the viral spike peptides to the angiotensin converting enzyme 2 (ACE2) receptors in diverse organs (2,7,8,109).In humans, ACE2 is expressed in numerous organ systems and tissues, including the lungs (e.g., the pneumocytes of alveolar sacs), hepatic, cardiac tissue, kidney, gastrointestinal endothelium, adipose tissue (AT) and vascular endothelium (3,49,110,111).This wide distribution likely explains the multisystem involvement of the infection, while also enhancing the magnitude of the illness in patients afflicted by SARS-CoV-2 (49).Interstitial pneumonia, the most prevalent lung involvement in patients with COVID-19, if left untreated, may lead to a hypoxic status, resulting in acute respiratory distress syndrome and/or systemic inflammatory response syndrome and fatal multiorgan failure (3,6,13,15,37,108,112,113).These sepsis-related consequences occur from a pathophysiological perspective, have the same underlying backgrounds, ignited by the cytokine storm and hyperinflammatory statuses with significant oxidative damage caused by the reaction of the host to SARS-CoV-2 (49,114).
It is possible that the widespread extrapulmonary damage observed in patients with COVID-19 may be attributed to the presence of ACE2 receptors on cells other than those lining the respiratory alveoli (113). Other organ involvement results in symptoms that are particular to the organ; for example, gastrointestinal involvement may cause symptoms such as nausea, vomiting, diarrhea and abdominal pain (113). Hepatic damage, as evidenced by increased levels of circulating liver enzymes, is also prevalent (3). There are several symptoms that may be associated with peripheral and central nervous system involvement, and these include headaches and dizziness, hyposmia or anosmia (indicative of encephalopathy), neuralgia and Guillain-Barré syndrome (115,116). Hospitalized patients are more likely to experience thromboembolic events, which have been established as an independent risk factor for a poor prognosis, as well as acute coronary events, cardiomyopathies, several types of arrhythmias and pericarditis (49,117). Infections caused by SARS-CoV-2 may also result in coagulopathies, thrombocytopenia being the most prevalent, which play a crucial role in the development of extrapulmonary complications (8,49). In critically ill patients, deep venous thrombosis and/or pulmonary embolism are frequent, with pulmonary embolism being more prevalent in patients in intensive care units (49). Inflammation, immunological responses, coagulation cascades and the dysregulation of the renin-angiotensin system may cause acute kidney damage in 25% of hospitalized patients (8,49,118). Finally, AT from individuals with obesity is hypothesized to exhibit higher amounts of ACE2, perhaps serving as a SARS-CoV-2 repository with postponed viral shedding, and may presumably contribute to long COVID (3).
Long COVID refers to patients who have experienced persistent impairments following infection with COVID-19, including various organs and tissues (18,(119)(120)(121)(122).A previous retrospective analysis of 193,113 participants found an elevated risk for respiratory impairment and pulmonary function impairment after 6 months in these patients (123).The most prevalent manifestation is impaired diffusion capacity for carbon monoxide (DLCO) (124).Survivors with a critical illness had a greater risk of DLCO impairment, lower residual volume and lower total lung capacity (124,125).Notably, the risk of developing long COVID appears to differ depending on the various strains.Studies have found a lower risk of complications, intensive care unit admission, ventilation requirement and mortality rate in omicron-infected individuals compared to those infected with other variants (126).Furthermore, as compared to the delta variant, the omicron variant has been shown to be associated with a lower likelihood of developing long COVID (127).
Mutations in antigenic sites are essential for antibody and immunological evasion, and chronic symptoms in patients with long COVID-19 may be partly due to a lessening of the antibody response to vaccination or to variant resistance (17,122,128,129). Of note, >100 persistent symptoms were recorded by participants at least 4 weeks after infection, according to a scoping analysis that included 50 trials (130). It is possible for the majority of 'long-haulers' to have a relapse as a result of either physical or mental stress, and cognitive impairment or memory issues are common regardless of age (18,131). The establishment of a viral reservoir in individuals with PASC may be a possible explanation for the improvement in clinical symptoms that occurred following the administration of the SARS-CoV-2 immunization (132). Reservoirs of viruses are cells or anatomical locations where the virus may persist and accumulate with better kinetic stability than the primary pool of viruses that are actively reproducing (17,133,134). There is increasing evidence of an association between the presence of viral RNA in probable SARS-CoV-2 reservoirs in extrapulmonary organs and tissues, and the continued manifestation of symptoms in PASC (17,18,133,134). Patients who have had COVID-19 for a long period of time often have reactivated viruses, which may cause mitochondrial fragmentation and disrupt energy metabolism (18,(135)(136)(137). In addition, there is evidence of oxidative stress, abnormal amounts of mitochondrial proteins and deficits in tetrahydrobiopterin (138,139).
In addition to the dysregulation of inflammatory responses, COVID-19 has been connected to mitochondrial function. Mitochondria play a critical role in the control of immune responses and cellular metabolism (22,(140)(141)(142)(143). The shape of the mitochondria is altered by infection, which results in a reduction in the number of OXPHOS proteins, a reduction in the number of mitochondrial inner membrane protein import systems, and an increase in the release of mitochondrial reactive oxygen species (144)(145)(146). The SARS-CoV-2 virus is capable of binding to a variety of host proteins, with mitochondrial proteins accounting for up to 16% of the total (22,(147)(148)(149). Human cells and tissues that have been infected display a decrease in the amount of proteins and transcripts of OXPHOS genes, an increase in glycolysis, a suppression of OXPHOS, an increase in mitochondrial ROS production and inflammation factors, and an increase in hypoxia inducible factor-1α (HIF-1α) and its target genes (22,144,(150)(151)(152)(153)(154)(155). A disruption in the process of mitochondrial protein synthesis may lead to an imbalance in the proportion of mitochondrial proteins that are coded by nuclear DNA and mitochondrial DNA, which has the potential to activate the integrated stress response and have a number of unfavorable repercussions (22). Recently, Guarnieri et al (22) demonstrated that once the viral titer peaks, this causes a systemic reaction from the host, which includes the regulation of mitochondrial gene transcription and glycolysis, ultimately resulting in an antiviral immune defense mechanism. Nevertheless, despite the fact that lung clearance and recovery of lung mitochondrial function were documented, mitochondrial function in the heart, kidney, liver and lymph nodes continues to be damaged, which may result in severe COVID-19 pathology (22).
Melatonin, which is well-known for its antioxidant and anti-inflammatory qualities, has the potential to assist in overcoming the cytokine storm that is associated with virus-related infections, such as SARS-CoV-2, and may also be able to prevent mitochondrial-related chronic consequences of the disease.The anti-inflammatory and antioxidant properties of melatonin may potentially be beneficial for the treatment of possibly chronic inflammation in patients with long COVID-19.These views are discussed in the following section.The effects of melatonin on the pathophysiological mechanisms of COVID-19 are summarized in the schematic diagram in Fig. 2.
Mechanisms through which melatonin can alleviate COVID-19
Melatonin supplementation has the potential to target and benefit the host by reducing the exaggeration of the innate immune system, which is essential for improving tolerance against the invasion of pathogens (156).There is a substantial association between the immunological response of the host, particularly the innate immune network, and the symptoms and the results of viral infections with the host (156,157).The overwhelming inflammatory response that is triggered by the cytokine storm is responsible for the majority of the detrimental effects caused by SARS-CoV-2 (36,114,156,157).Consequently, this excessive production of cytokines is harmful to organs and tissues, which ultimately results in oxidative damage to several organs (36,114,157,158).A considerable improvement in the outcomes of patients with SARS-CoV-2 infection may be achieved by downregulating the innate immune response and reducing the inflammatory reaction.This provides evidence for the use of this treatment method in the treatment of patients with severe COVID-19 (77,159).
Melatonin is a potent free radical scavenger and antioxidant that directly detoxifies a wide range of ROS and reactive nitrogen species (RNS). These ROS and RNS include hydroxyl radicals, the peroxynitrite anion, hydrogen peroxide, superoxide anion radicals and hypochlorous acid (25,27,50,93,160). Its electron-donating metabolites outperform traditional antioxidants, such as vitamins C and E, carotenoids and NADH, in reducing other oxidizing compounds (156,161). Additionally, melatonin has an advantageous cellular distribution due to its solubility in both water and lipids, and it may form hydrogen bonds with proteins and DNA to provide protection (60,161). It also upregulates the gene expression levels of several antioxidant enzymes, thus indirectly enhancing the cellular antioxidant capacity (161,162). By acting on mitochondrial metabolism, melatonin is also able to inhibit the production of ROS and RNS (60,156).
Melatonin is a potent anti-inflammatory chemical that functions by scavenging the peroxynitrite anion, which leads to the inhibition of non-specific inflammation, such as that induced by carrageenan or zymosan (79,163,164). Its anti-inflammatory mechanisms are diverse, including the suppression of the activity or downregulation of pro-inflammatory enzymes, such as cyclooxygenase-2, inducible nitric oxide synthase, eosinophilic peroxidase and matrix metalloproteinase 2 (MMP2), which are responsible for the generation of inflammatory mediators (156,(165)(166)(167). Furthermore, melatonin has the ability to inhibit the advancement of the NLR family pyrin domain containing 3 (NLRP3) inflammasome, whose activation results in the activation of caspase-1 and the maturation of IL-1β and IL-18, ultimately leading to pyroptosis, a damaging consequence of inflammation (168)(169)(170). Melatonin is able to effectively prevent the production of NLRP3 inflammasomes and reduce inflammation, both of which are connected to COVID-19. This effect is achieved by its interaction with signal transduction pathways (167)(168)(169). Melatonin has the ability to decrease the phosphorylation of IκBα, therefore reducing the translocation of NF-κB into the nucleus. This, in turn, helps to control the cytokine storm that occurs following infection with COVID-19 and may be associated with damaging inflammation (171)(172)(173)(174). Melatonin also stimulates autophagic capacity, which is often accompanied by a reduction in the formation of inflammasomes. This may speed up the process of tissue healing from inflammation (174,175).
Melatonin is a hormone that controls the immune system, reducing the excessive response of the innate immune system while fostering the development of adaptive immunity (156,176). Some examples of pathogen-associated molecular pattern receptors are Toll-like receptors (TLRs), Nod-like receptors (NLRs), AIM2-like receptors, cyclic GMP-AMP synthase (cGAS) and AIM2. These receptors are responsible for driving the innate immune system, which is the initial line of defense against the invasion of pathogens (156,177). Innate immune cells are able to eliminate infections with the assistance of these receptors, which are able to identify RNA, DNA, proteins and lipids that are associated with pathogens (156,177). However, their excessive responses often result in injury to the tissues. Melatonin is able to suppress the activation of TLR4, TLR9 and cGAS, which results in a reduction in the innate immune response and a reduction in the damage to tissue that is caused by infections, ischemia/reperfusion and other disturbances (156,(178)(179)(180).
Innate immune cells are directly affected by melatonin, principally via its negative regulatory functions (156,181). It does this by preventing ERK phosphorylation, which in turn prevents neutrophil migration and the tissue damage that is associated with it (182). The administration of melatonin lowers mast cell activation, TNF-α and IL-6 production, and IKK/NF-κB signal transduction in activated mast cells (155,(182)(183)(184)(185). Treatment with melatonin reverses the transformation from M2 anti-inflammatory macrophages to M1 pro-inflammatory subtypes, which assists in the elimination of SARS-CoV-2 and suppresses the dysfunctional hyper-inflammatory response that is mediated by M1 macrophages (156,186). When physiological circumstances are met, melatonin has the potential to boost innate immunity, thus maintaining its protective effects against the invasion of pathogens (31,187).
Melatonin may also have an effect on COVID-19 infection by preventing the virus from entering cells and replicating after first entry (17,25,156).There are three enzymes that are responsible for the entry of SARS-CoV-2 into cells: ACE2, transmembrane protease serine 2 and A disintegrin and metalloprotease 17 (188)(189)(190).It is possible that melatonin can target these molecules in order to delay the entry of the coronavirus into the cells (189).The progression of COVID-19 may be controlled by the circadian system, while the melatonin circadian rhythm may also be responsible for this regulation (155,(188)(189)(190).It is also possible that melatonin may influence ACE2 activity in an indirect manner by binding to calmodulin or MMP9 (191).Recent research has indicated that melatonin has the potential for use as a therapeutic agent on ACE2.It has been found that transgenic mice exhibit greater vulnerability to SARS-CoV-2 infection, as well as delayed clinical signs and an enhanced survival (192,193).In addition, melatonin has the potential to decrease the activation of CD147 during a SARS-CoV-2 infection via inhibiting the production of HIF-1Α (194).Research has demonstrated that melatonin may reduce the reproduction of some viruses, such as swine coronaviruses and Dengue virus, with the effectiveness of this effect being dose-dependent (195,196).Melatonin may suppress SARS-CoV-2 replication; however, to date, no animal research has shown this to be true (197).It is possible that melatonin inhibits viral replication by blocking growth factor signaling (27,198,199).Due to its uniqueness and lack of presence in host cells, the major protease (Mpro) of SARS-CoV-2 has emerged as a possible target for the development of replication inhibitors (156).According to the crystal structure of the SARS-CoV-2 Mpro and PF-07321332 complex, melatonin binds to the catalytic amino acid residues of C145 and H41 via pi-sulfur/conventional hydrogen bonds and carbon-hydrogen bonds.This suggests that melatonin works as an effective Mpro inhibitor (156,194,200,201).In the following section, the limited evidence of the beneficial effects of melatonin on patients with COVID-19 is discussed, building on these potential advantages derived from previous clinical or preclinical research.
Clinical evidence for COVID-19 and melatonin
Previous research on other viral diseases, together with the possible antiviral properties of melatonin, has led to its suggestion as a possible therapeutic agent for COVID-19 (17,49,202).Melatonin has been tested in clinical studies for the treatment of COVID-19.The results revealed that the drug improved sleep quality, reduced the duration of hospitalization and was useful as a preventative measure (155,180,202,203).However, the studies are restricted owing to inadequate financial assistance (melatonin is affordable and non-patentable) (156).
Only a small number of trials have studied the safety and effectiveness of melatonin and its therapeutic value in COVID-19, and they were only recently evaluated in a meta-analysis (202).The most notable findings were that patients using melatonin had a much higher clinical improvement rate than the control groups (202).Melatonin administration also resulted in a reduced death rate, reduced C-reactive protein (CRP) concentration, and length of hospital stay than the controls (202).The study concluded that melatonin had significant benefits on patients with COVID-19 when administered as adjuvant treatment, boosting clinical improvement and shortening recovery time owing to shorter hospital stays and mechanical ventilation durations (202).Other research included the following observations: The case group exhibited lower levels of IL-4 and IFN-γ in their plasma, as well as lower levels of signal transducer and activator of transcription (STAT)4, T-bet, STAT6 and GATA binding protein 3 expression in comparison to the control group (203).In their study, Alizadeh et al (204) discovered that the case group exhibited a reduction in CRP levels both before and after the ingestion of melatonin.On the other hand, the control group did not exhibit a significant reduction in CRP levels.A different case group exhibited an improvement in clinical signs and symptoms, such as cough, dyspnea and tiredness, while simultaneously exhibiting a decrease in CRP levels in comparison to the control group (205).When compared to the control group, the low dosage of melatonin resulted in a reduction in CRP levels, lung involvement, a shorter time to discharge from the hospital, and a shorter period after returning to baseline health (206).According to the findings of another study that examined the quality of sleep and other outcomes of patients with COVID-19, both oxygen saturation and sleep quality increased (207).Chavarría et al (208) demonstrated that melatonin supplementation in patients with moderate symptoms resulted in decreased levels of CRP, IL-6, procalcitonin and lipid peroxidation, and elevated nitrite levels.In addition, the levels of numerous pro-inflammatory indicators, such as IL-1β, TNF-α, malondialdehyde, nitric oxide, superoxide dismutase, ASC and CASP1, were found to be lower in persons who were administered melatonin in comparison to the group that served as the control (209).Finally, patients with COVID-19 and insomnia who received prolonged-release melatonin exhibited improvements in their sleep, a reduction in the number of episodes of delirium, a shorter length of hospitalization, a shorter stay in the sub-intensive care unit, and a shorter duration of therapy with non-invasive ventilation (210).The benefits associated with the use of melatonin in COVID-19 clinical studies are illustrated in Fig. 3.
Conclusions and future perspectives
COVID-19 remains a critical global health concern. The pathophysiology of acute COVID-19 is linked to the cytokine storm and oxidative stress, while research into long COVID has identified mitochondrial dysfunction among other mechanisms, all of which may be alleviated by providing melatonin (17). The treatment options that have been proposed include, in addition to enhancing the function of immune cells, the elimination of autoantibodies, immunosuppressants and antivirals, as well as agents that possess antioxidant properties, support mitochondria and promote the generation of mitochondrial energy (18,159). A number of these could be achieved by including the use of melatonin as an adjuvant therapeutic option. However, despite promising and positive outcomes based on a small number of clinical trials, its actions need to be investigated further, as an ample amount of the therapeutic potential of melatonin remains underexplored, also due to funding limitations (27,202). Further well-designed clinical studies are therefore warranted in order to validate these findings (202). Of utmost interest would be the design of trials with various time points, primarily examining the acute-phase anti-inflammatory properties and, over the longer term, the preventive potential against mitochondrial damage and long COVID pathology (17). Finally, the factors influencing the effects of melatonin, including dosage, also need to be thoroughly explored.
Figure 2 .
Figure 2. Summary of the pathophysiological processes related to acute and long COVID-19 and sites of potential action of melatonin (symbolized with *µ) based on its physiopathological properties.Please refer to relevant parts of the text for further details.ACE2, angiotensin converting enzyme 2; ARDS, acute respiratory distress syndrome; MOF, multiorgan failure; OXPHOS, oxidative phosphorylation; ROS, reactive oxygen species; SIRS, systemic inflammatory response syndrome.Parts of this image were derived from the free medical site http://smart.servier.com/(accessed on September 15, 2023) by Servier, licensed under a Creative Commons Attribution 3.0 Unported Licence.
Figure 3 .
Figure 3. Schematic illustration summarizing the beneficial outcomes of melatonin supplementation from clinical studies in humans.ASC, apoptosis-associated speck-like protein containing a caspase recruitment domain; CASP1, caspase-1; CRP, C-reactive protein; GATA, GATA binding protein 3; IFN-γ, interferon γ; IL, interleukin; STAT, signal transducer and activator of transcription; T-bet, T-box expressed in T-cell; TNF, tumor necrosis factor.Parts of this image were derived from the free medical site http://smart.servier.com/(accessed on September 15, 2023) by Servier, licensed under a Creative Commons Attribution 3.0 Unported Licence.
|
2024-01-28T16:11:09.371Z
|
2024-01-26T00:00:00.000
|
{
"year": 2024,
"sha1": "487df3561e45cbaae8ddfb5ecf5cd6b9d23d580a",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/ijmm.2024.5352/download",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4abd8f56b023a3852eecadcb9892cdf1bc4690b2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
236966677
|
pes2o/s2orc
|
v3-fos-license
|
Nurses Who Assume the Role of Advocate for Older Hospitalized Patients: A Qualitative Study
Introduction Patient advocacy, acting on behalf of patients' unmet needs, is fundamental to nursing, and the perception of the need for advocacy motivated this study. Nurses experience moral discomfort, which results from a divergent view regarding medical or caregivers' decisions about patients' clinical care, in which patients' involvement in making those decisions is either doubtful or absent. Objectives The aim of this study was to assess the need for advocacy and explore the perspectives of nurses engaged in the care of older patients. Methods A qualitative design was used, with purposive criterion sampling. The sample comprised 14 nurses of a general medicine ward. Focus groups were used as the data collection tool, followed by thematic analysis. Results Nurses demonstrated a high level of moral sensitivity to ethical problems in clinical practice and, on occasion, the courage to bring the problem to the attention of physicians or patients' families, or to help patients develop self-determination. However, it is difficult to advocate because of insufficient communication between professionals, insufficient knowledge of ethics, and the emotional burden advocacy places on nurses, which results in emotional resignation in the face of interprofessional teams' lack of consideration of nurses' opinions. Conclusion This research highlighted nurses' need for advocacy to promote patients' rights, wishes, and values. It is essential for nurses to be aware of their level of moral sensitivity and to develop strategies to regain the courage to engage in advocacy. Therefore, ethics education and interprofessional ethical leadership are desired, which can inspire healthcare professionals' work and allow the foundations of an ethical decision-making process to be laid through the active involvement of patients and their families.
Introduction
In clinical practice, nurses experience moral discomfort related to emotions, reflections, and dilemmas that arise with respect to older patients' care. This moral discomfort is attributable to nurses' perspectives, and leads to disagreements with medical or caregivers' choices about patients' clinical conditions, in which patients' involvement in such decisions is doubtful or absent. Nurses have reported this distress often, which suggests that they suffer when coping with ethical dilemmas related to older patients' end-of-life care. This discomfort is known as "moral distress", a term that describes the psychological, emotional and physiological suffering that nurses and other health professionals experience when they act in ways that are inconsistent with deeply held ethical values, principles or commitments (Corley, 2002; Deschenes et al., 2020; McCarthy & Gastmans, 2015; Woods, 2014). Such repeated diversity of views about what is appropriate for patients' care can lead to psychological disequilibrium and emotional exhaustion (Corley, 2002; Malloy et al., 2009; Oh & Gastmans, 2015). Caring for frail, older persons also involves being confronted with them at the end of their lives and asking questions about the appropriateness of intensive and aggressive care (Perin et al., 2018). This can cause moral dilemmas in care practice, when, for example, aggressive, futile treatments are adopted that prolong suffering without a clear clinical rationale (Eriksson et al., 2014; Haahr et al., 2020; Perin et al., 2018). Therefore, in the care nurses provide at the end of an older person's life, they are likely to encounter situations that cause them moral distress, with all of the consequences that entails, physically, psychologically, and professionally (Deschenes et al., 2020; Perin et al., 2018). Nurses are confronted increasingly in clinical practice with vulnerable patients who struggle to express their autonomy, which draws attention to the potential need to advocate for their expressed or unexpressed wishes. People with dementia and their carers often experience uncertainty in decision making, leading to difficulties when creating an advance care plan. It is therefore necessary for healthcare professionals to demonstrate empathy and provide an understanding of the decisions that may need to be made along the trajectory of dementia (Sellars et al., 2019). Mahlin (2010) pointed out that, although patients are not considered vulnerable automatically, it may be difficult for them to express their views and choices fully, given the combination of illness, hospitalization, and subjection to a potentially dangerous medical establishment. The understanding of the decisions that should be made is possible through "entering the patient's world", where the nurse will develop the nurse-patient relationship as a strategy to encourage patients to participate in self-care (Strandås & Bondas, 2018). Therefore, this highlights the need to provide patients with all of the resources and information necessary to be well-informed about their health condition so they can make their own decisions (Leitungsgruppe des NFP 67 Lebensende & Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung, 2017; Ufficio federale della sanità pubblica, 2020).
Given that future care will increasingly confront us with complex patients in old age (Eurostat, 2020; Ufficio federale di statistica, n.d.; World Health Organization, 2018), it is important to support nurses' capacity to serve as advocates and to understand their emotions and experiences while caring for patients. Indeed, as pointed out by Reed et al. (2018), advocacy action is motivated by the emotional responses of nurses to the end-of-life vulnerability people experience. Our research was designed to explore nurses' experiences as advocates and to highlight which strategies may be implemented to improve ethical reflection on clinical practice and nurses' ability to advocate.
Objectives
To assess the need for advocacy of nurses engaged in the care of older patients (65 and older) and their perspectives.
Research Design/Methods
The study followed a qualitative research design, using focus groups as the data collection tool in order to obtain an understanding of nurses' advocacy roles based on their perspectives and experiences. The data were analysed using thematic analysis. According to Braun and Clarke (2006), thematic analysis is a flexible and useful research tool that allows a rich and complex interpretation of data. Compared to other qualitative methods, a thematic approach allows researchers to explore in depth the meaning attributed to nurses' advocacy within a specific context.
Setting
A ward of general medicine with a majority population of older adults in a regional (cantonal) hospital in southern Switzerland.
Sampling
The participants were enrolled through purposive criterion sampling, based on the "participant's experience with the phenomena under study" (Moser & Korstjens, 2018). The sample comprised 14 nurses from the general medicine ward who "vary in characteristics and in their individual experiences" (Moser & Korstjens, 2018).
The principal investigator is the clinical nurse specialist of the ward involved in the study and the working relationship with the participants represents a value, because of the common objective of understanding the phenomena, in order to improve the quality of care of patients. The principal investigator, in their dual role as researcher and team member, has from the outset reflected on their own role at every stage of the research, enabling an objective and detached view of the data.
The clinical nurse specialist was the only researcher member of the team, while the other two researchers were external to the institution.
The study was presented to the hospital's nursing director. After consent was obtained, the study was presented to the head nurse of the ward involved, who also expressed interest. A face-to-face approach was then used to recruit participants, and no one refused to participate, highlighting the importance of the phenomena within this context. Only two nurses were unable to take part, for personal reasons.
Data Collection
Data were collected from June to August 2019 through focus groups that included structured, open-ended questions (Table 1).
"A focus group is a group of individuals selected and assembled by researchers to discuss and comment on, from personal experience, the topic that is the subject of the research" (Powell & Single, 1996). On developing focus group questions, a funnel design was used, with a view to a discussion that moves from broader to narrower topics (Morgan & Scannell, 1998, p. 53). The groups began by reading a hypothetical clinical case study to stimulate discussion on advocacy. The fourth of the key questions asked participants to write about a situation they had experienced, which generated 14 narratives on which a thematic analysis was conducted. Before the focus groups were conducted, a qualitative methodologist trained the principal investigator specifically to moderate focus groups. Participants were divided into three groups of four to five participants each, taking into account the personal characteristics of each one. The focus groups lasted approximately two hours and depended on how much they shared in the discussion on the topic. Field notes were made during and after the sessions, as "the collection of these first reflections can be valuable in guiding later stages of reflection and analysis" (Phillippi & Lauderdale, 2018).
Data Analysis
All focus groups were audio-recorded and transcribed verbatim. Braun and Clarke's (2006) six-step procedure was followed for the thematic analysis:
1. Familiarizing yourself with your data: two researchers analysed each transcript, which was read several times and analysed sentence by sentence.
2. Generating initial codes: two researchers identified the first codes separately, with the support of tables.
3. Searching for themes: the two researchers brought together the initial codes and began to name themes and subthemes.
4. Reviewing themes: the list of themes was reviewed in order to ensure internal consistency.
5. Defining and naming themes: themes and subthemes were defined and described. A third researcher with expertise in qualitative research checked the analysis and helped develop the themes to ensure reliability.
6. Producing the report: the final report was reviewed by three researchers.
The researchers discussed data saturation and concluded that it was reached after the third focus group, given that no further new themes emerged. A selection of the most relevant quotes for themes and subthemes was made to ensure confirmability.
Ethical Considerations
The research used audio recording, and participants were informed that all data would be anonymous and confidential. Informed consent was obtained. Moreover, the study was analysed by the National Ethics Committee, who declared that the research was conducted in accordance with national legislation (Req-2020-00093).
Results
Three focus groups were conducted with a total of 14 participants (Table 2).
At the beginning of the focus groups, the participants were presented with a hypothetical clinical case that required them to assume the role of advocate. During discussions on the hypothetical case, participants were more detached, yet once they began to talk about similar cases which they had themselves experienced, more emotions were elicited. The analysis yielded eighteen subthemes, grouped into seven major themes: i. Engaging in advocacy; ii. Living an ethical problem; iii. Living emotions; iv. Factors that facilitate advocacy; v. Factors that hinder advocacy; vi. Advocacy's effects; vii. Lack of advocacy's effects (Table 3).
Engaging in Advocacy
For nurses, engaging in advocacy means identifying situations of frailty/vulnerability of patients in which the patient's self-determination could be undermined. Nurses engage in advocacy when they raise ethical issues on an interprofessional level. This major theme groups together themes concerning patient-related aspects (frailty and self-determination), ethical aspects and inter-professionalism.
Frailty/Vulnerability. Nurses indicated that they need to engage in advocacy when they perceive patients' frailty and vulnerability, which is manifested primarily in suffering, fear of hospitalization, and the consequent desire for protection and defence.
"Seeing someone in pain! This is suffering! That in the sense that one can also approach death without suffering, if it does not manifest a discomfort, but if there is a manifest suffering, someone must take charge of the situation, because one of our tasks is to alleviate suffering as much as possible." (N3) ". . .then there are patients who are also scared, who are in the hospital, so they have little lucidity at that moment. . ." (N5) ". . .the patient is in a situation of weakness, because first of all, physicians are usually in a hurry, they intimidate patients, they make them talk little and usually they don't even dare to tell the doctor. . ." (N6) Participants stated that another aspect that determines a hospitalized patient's frailty is their lack of In some cases, weak social and family situations or families' failure to accept patients' clinical condition also determined patients' vulnerability.
"On the part of the family it could be a non-acceptance of the situation, and therefore it takes behind a very important work on the understanding of the situation and of what could be put in place, considering the idea of the patient himself. . ." (N4) Patient Self-Determination. The nurses believed that advocacy is necessary when patients' capacity for selfdetermination is not respected, when the diagnosis is kept from them at family members' request, and the right to informed consent is also lost. The participants considered it important to help patients develop self-determination, and indicated that the lack of time to reflect and make decisions for themselves is critical.
". . .a patient must be the first to be informed, even if the sons say no. . . she still has the power to say what to do with her own life. Two different paths would have been taken if the mother would have known about the suspected neoplasm. . ." (N7) "He must be asked, he must participate in his life, he must participate in his death. . ." (N8) ". . .the patient has to decide in a short time, but he needs to think about his situation, calmly." (N5) Raising Ethical Issues. The nurses stated that advocacy skills must be exercised when ethical issues arise in cases of potential excessive/futile therapy where patients' will is not considered, or when patients with dementia are unable to express their wishes with respect to therapeutic decisions.
"I do not judge whether the decision is right or wrong. . . but I must raise the issue. If I don't take this sentence back to a physician. . . if all this doesn't happen, I've done my job badly. . ." (N12) "In my opinion with geriatric patients it is a little more difficult if they have a cognitive disorder because they cannot express themselves. It's difficult to act on their behalf, to understand what they want and what they would like to do." (N6) Nursing and Team Collaboration. Finally, the participants indicated that advocacy is a skill that must be developed at the nursing and team level.
"Theoretically it should be a team effort, all together they should act for the good of the patient, so in collaboration with the physician. . ." (N5)
Living an Ethical Problem
When reflecting on situations that require advocacy, nurses refer to morality, injustice and ethical principles, considering the influence of culture on these aspects and the need to ensure patients' rights. This major theme includes themes concerning ethical aspects, preserving patient safety as well as cultural implications linked to the context in which the research was carried out.
Refer to Ethical Principles. In describing the ethical problem when advocacy is needed, the nurses referred to the ethical principles of justice, autonomy, beneficence, and non-maleficence.
"I believe that it is also a question of morality, of fighting against injustice, that is what bothers me the most.
"When I think of ethics, I think of the two principles of doing good and not harming, which seem the same but are not the same at all! In the sense of asking oneself the question: what is doing good? above all it is not harming. . . for some people it would seem that you are doing good, while for patients', for them it is harm. So sometimes it's more important not to harm people than to do them good. . ." (N3) Cultural Aspects' Influence. The focus group discussions drew attention to the fact that cultural aspects and beliefs may influence ethics, medical decisions, or families' choices and patients' wishes. This lays the foundation for ethical conflict attributable to diverse visions influenced by one's own value system.
"Ethics is a very delicate subject, it is as wide as a concept. . . influenced by religion, beliefs. . ." (N3) "The cultural aspect counts, but we must also be good not to use it as a preconception. . ." (N12) Ensuring Patients' Rights. According to the participants, patients have the right to information (informed consent) and consequently, to reflect and make decisions for themselves (self-determination). Therefore, it is important that nurses support patients' ability to selfadvocate.
"With an adult who is able to discern, the physician informs her, unless she asks them to talk to her sons because she does not want to know anything." (N10) "I would better explain to the patient that she has the right to assert her opinion. . ." (N6)
Living Emotions
In this major theme the only subject is that of experiences and emotions, aspects with important content that deserved to be highlighted and, because of their specificities, could not be integrated into other major themes.
Nurses Experience Different Emotions. It emerged that the emotions the nurses feel when faced with ethical problems are ones which create their need to advocate. Among the emotions they listed were powerlessness, frustration, anger, indecision, sadness, and discomfort. These emotions erode the nurses' state of wellbeing to such an extent that one participant defined it as nurses' "emotional frailty." They described the climax of this as emotional resignation, in that protracted frustration over time leads the nurse not to defend his/her patient because it will achieve nothing.
"Emotions. . . a very frail part of us, are our frailty." "The question is direct and she (patient) expresses that she doesn't want to suffer. When she asks the question, we are doing the opposite of what she wants, and that would make me feel uncomfortable at that moment." "It's like a losing battle. . . so you don't even get bitter blood anymore and throw in the towel." (N1)
Factors That Facilitate Advocacy
The nurses identified the consideration of their opinions by other healthcare professionals, as well as the development of a common team view on care situations, in which the support of superiors and the trusting nurse-patient/family relationship play an important role, as factors which facilitate advocacy.
Interprofessional Collaboration. On the one hand, interprofessional collaboration was illustrated best in the development of a team's common vision and superiors' support, and on the other in physicians' greater consideration of nurses' opinions.
". . . to confront each other, to understand that it is not just you who see it that way. So yes, it gives you more strength to support your opinion." (N5) "The support of superiors! When there's a head nurse, for example, who holds your side, who's convinced with you, it's seen as more important." (N3) "If there is a physician who collaborates with nurses, who also listens to our opinion, then we can achieve a common goal of defending the patient's rights." (N1) Trusting Nurse-Patient/Family Relationship. A trusting relationship established between nurses and patients was considered fundamental, and it is the nurses' role to listen to their patients' needs and wishes so they are able to act as their advocate. Nurses consider that family members' support is a great help in advocacy and when professionals alone are unable to advocate, they help family members do so.
"We are more present with a patient and often a physician does not see what we see. . ." (N8) "(suffering) Whether it is physical, psychological, precisely not knowing, when you see that the patient is suffering, he expresses it to you, because in the end, if he tells someone, more often he tells us!" (N5)
Factors That Hinder Advocacy
Participants state that the physician's divergent view and their own lack of involvement in the patient's decision-making process make them feel that they are losing their professional autonomy, leaving them feeling resigned over time when faced with these situations. The strategy they identify is clear and transparent communication and the opportunity to compare their opinions. Nurses also consider workload an obstacle to advocacy.
Lack of Interprofessional Collaboration. The participants identified ineffective interprofessional collaboration as a cause of divergent views between nurses and physicians in which a nurse perceives the loss of professional autonomy and lack of involvement in patients' decision-making.
"In my opinion we are (nurses and physicians) often on different tracks, physicians have one goal and we have another one and we cannot meet each other." (N6) ". . . our role has become more marginal in the interaction with the physician. . . so when there is something important to say, maybe you are blocked by this hierarchy and they (physicians) don't listen to you. . ." (N3) "I think we're back to just following orders." (N5) The Weight of Emotions. The feeling that the participants indicated was the most common obstacle in engaging in advocacy was emotional resignation, in that protracted frustration over time develops in countless situations in which nurses' opinions are not acknowledged and/or considered.
"But in my opinion, it is also about the frustration that you may have, that you carry with you from previous situations. . . you find yourself in front of a wall, you try once, you try twice, try three times, at the fourth you say enough, you don't even try because you know it is useless." (N2) Communication Among Different Professionals. The participants' reflections revealed that they often do not know the way to raise an ethical problem and communicate it so that it will be heard. Some believed that healthcare professionals should communicate more and transparently, and compare their different opinions so they can make clinical decisions that respect patients' wishes.
". . . it also depends so much on knowing how to argue" "Communicating is the best thing. . . being transparent at the moment, even if not nice and pleasant, but feeling better." (N8) ". . .we should be able to communicate more with each other, maybe within debriefings." (N9) Lack of Time. The nurses believed that their increased workload and duties leave them with insufficient time to dedicate to speaking with patients; therefore, with only a superficial knowledge of them, they cannot identify their wishes.
Advocacy's Effects
From data analysis, it emerged that advocacy's effects are related to being able to engage in advocacy and defend patients' rights and wishes, contributing to the well-being of nurses, patients and their families.
On Nurses. The participants stated that serving as an advocate makes them feel proud and satisfied to have been able to meet their patients' needs, giving them a feeling of serenity, wellbeing, peacefulness, and a reduced sense of frustration and helplessness. Furthermore, the focus group discussions indicated that even when they engage in advocacy without achieving their goal, they feel gratified for having advocated for patients' rights or wishes.
"It is a matter of pride that brings value to our actions." On Patients (From a Nurse's Perspective). The participants said that advocacy leaves patients and families satisfied, as patients feel appreciated, respected, and welcome to express their needs. Further, patients feel supported in developing their self-determination, all of which help strengthen the nurse-patient relationship.
". . . the situation is less tense. . . the situation of the patient and family is calm, more relaxed, they finally got what they wanted, and that is the greatest satisfaction. . ." (N3)
Effects of Lack of Advocacy
The results underline that a lack of advocacy has as much an effect on nurses and patients as its presence. In fact, participants point out that failure to engage in advocacy causes several negative emotions among nurses, with consequences in nurse-patient relationships.
On Nurses. Failure to assume the role of advocate triggered multiple emotions in nurses that may cause emotional resignation and loss of pleasure in their work. These included discouragement, anger, remorse, frustration, a sense of helplessness, lack of motivation, dissatisfaction, a sense of futility, guilt, disappointment, resentment, and annoyance.
"Disappointment, anger, resentment, sometimes you wish, maybe when you think back to the situation, you wish you'd said something in that situation . . . and you feel resentful." (N10) On Patients (From Nurses' Perspective). The nurses indicated that patients who are undefended feel frustrated, insecure, and sad because they feel misunderstood and disregarded, which makes them even more vulnerable. This leads patients to withdraw with a consequent and inevitable break in the nurse-patient relationship.
"They feel disregarded, what has been said has gone unheard." (N7) "It can make them sad inside, because they are poorly understood." (N8)
Narratives' Data Analysis
The narratives' data analysis revealed a single theme, "Applying advocacy" with 5 subthemes. Nurses' moral sensitivity emerged in all narratives, in which they identified situations that required advocacy, and in some cases, the moral courage that led them to defend their patients. The narratives also showed the way superiors' support through listening to nurses' opinions was crucial, while in some cases the development of a common team view of the problem was effective. When nurses failed to advocate because physicians did not listen to them, they felt angry, helpless, frustrated, sad, and distressed.
Discussion
Participants in this study repeatedly emphasised the importance of advocacy in professional activity and the definitions that emerged were similar to those of the International Council of Nurses, where advocacy is considered to be an integral nursing professional competence. As the nursing code of ethics states: "The nurse promotes an environment in which the human rights, values, customs, and spiritual beliefs of the individual, family and community are respected . . . and ensures that the individual receives accurate, sufficient, and timely information in a culturally appropriate manner on which to base consent for care and related treatment" (International Council of Nurses, 2012). An aspect considered important is that respect for the patient's perspective on self-determination helps lay the foundations for developing a relationship with the person. Arcadi and Ventimiglia (2017) stated that in the nurse/patient relationship, it is impossible to guarantee a positive outcome in the absence of a relationship based on trust. The participants believe that the relationship of trust is fostered by the time spent in contact with the patient. Indeed, proximity promotes professional intimacy, as a component of the therapeutic nurse-patient relationship which encourages closeness, self-disclosure, reciprocity and trust through emotional and/or physical forms (Antonytheva et al., 2021). Both MacDonald (2007), who examined the nature of the relationships between nurses and patients and their significant role in influencing the engagement in advocacy, and Foley et al. (2000), who considered the relationship of proximity, considered the time the nurse spends in contact with patients. The participants believed that the time spent interacting with patients is essential to become familiar with their needs and wishes, as well as their role as a reference during the hospital stay, as highlighted also by Koloroutis and Willems Cavalli (2008, p. 125). The nurses considered building relationships with both patients and their families a factor favourable to advocacy as it is the family who supports the nurse in advocating when their joint visions of the clinical proceedings overlap. The participants' experiences showed that in cases in which they failed to advocate, they helped and encouraged patients and families to develop this ability nonetheless, while respecting the principle of patients' autonomy. Indeed, the nurse-patient relationship is a "story of health enhancement" that requires active participation and commitment from both nurses and patients, that strengthens not only health but also the patient's own resources for health and well-being (Strandås & Bondas, 2018). The effects on nurses and patients of assuming the role of advocate, and respectively, not doing so, were analysed thoroughly from a nursing perspective. Being able to engage in advocacy generated a sense of pride, serenity, wellbeing, tranquillity, and satisfaction in the participants because they had been able to satisfy patients' needs. It was also found that engaging in advocacy even without achieving the goal leads to gratification for having defended the patients' rights or wishes. On the other hand, not engaging in advocacy causes them to experience depression, anger, remorse, frustration, a sense of helplessness, lack of motivation, dissatisfaction, sense of futility, guilt, disappointment, resentment, and annoyance that could lead to emotional resignation and the loss of pleasure in their work.
Such emotions eroded the nurses' wellbeing, which one participant described as "emotional frailty". This emotional state could lead to psychological disequilibrium and emotional exhaustion, as pointed out by Corley (2002) and Oh and Gastmans (2015). With respect to patients, the participants stated that engaging in advocacy leads to patient and family satisfaction, as patients feel justified, respected, appreciated, and supported in developing self-determination, as pointed out also by Baldwin (2003), all of which strengthen the nurse-patient relationship. Another important aspect is related to not being able to advocate, and patients who are undefended feel frustrated, insecure, and sad because they feel misunderstood and disregarded, which makes them even more vulnerable. This leads patients to shut down, with a consequent and inevitable breakdown in the nurse-patient relationship. The participants identified the characteristics of patients who need to be defended and the situations in which advocacy is required. These characteristics included their state of frailty/vulnerability, defined as manifest suffering attributable to hospitalization (Arcadi & Ventimiglia, 2017; Baldwin, 2003). This concept of vulnerability is broadened in the mind of the nurse as both the patients' and families' caregiver, as they may also present with socially frail conditions. The increased life expectancy and aging population confront us with situations in which caregivers themselves are older, or in which patients are even another older person's primary caregiver. These family social dynamics, characterized by significant frailty, prompt families to adopt a protective perspective toward their loved ones, such that they request that patients not be informed of their diagnosis to prevent additional emotional distress. In these situations, nurses feel they need to advocate for patients' rights to informed consent and help them develop the capacity for self-determination, while respecting their right to freedom of choice, as legislation dictates (Ufficio federale della sanità pubblica, 2020). In this study, participants also believed that having advance directives could decrease such situations, as decisions would be based on the wishes patients expressed when they were able to discern and during which they had the opportunity to reflect on their own state of health and future developments in their life. Current legislation, Article 377 of the Swiss Civil Code, confirms this aspect with respect to patients who are incapable of discernment (Il Consiglio federale, 2020). This assumption stems from the urgency the nurses highlighted, who argued that patients may not have time to reflect on the decision they need to make when diagnosed. In this respect, the nurses believed that if the population were aware of their serious implications and developed advance directives, it would represent an important cultural change. In this study, nurses consider that cultural aspects, such as religion and beliefs, influence patients', families', and healthcare professionals' views on a clinical situation. This makes sense in a national context that has four national languages, sees increased plurilingualism due to immigration from European and non-European countries and also reflects increased religious diversity (Ufficio federale di statistica, 2016). The participants consider cultural aspects important and believe decisions should be made according to patients' own value systems. Indeed, de Vries et al.
(2019) support this assumption, arguing that cultural aspects such as ethnicity, religiosity, and spirituality, as well as the level of health literacy, influence people's choices greatly when they draw up plans, which consist not only of advance directives on resuscitation, but also of wishes during the end-of-life phase. In this respect, according to Killackey et al. (2020), an Advanced Care Plan (ACP) should be implemented as a continuous process of exploring values and goals and should involve the input of patients, family members, and a variety of healthcare providers. However, in this study the origin of an ethical problem was linked to the divergent views among nurses, other healthcare professionals and patients' families, which lays the foundation for a reflection on the problem's multidisciplinary nature, which will require an interprofessional approach to be resolved successfully, as highlighted by Woods (2014) and Tracy and O'Grady (2019, p. 313). Nurses supported this approach strongly, and identified advocacy as the responsibility of the nursing and multidisciplinary team through active involvement of patients and families. Therefore, it is necessary for nurses, physicians, and families to share their perspectives by involving patients actively in decision-making, as indicated in PNR research's recommendation no. 3 on shared decision making (Leitungsgruppe des NFP 67 Lebensende & Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung, 2017, p. 51).
Participants believed that advocacy skills are required when ethical issues are raised, i.e., in cases of patients' unmet or disregarded needs, as pointed out also by Josse-Eklund et al. (2014), of possible excessive/futile therapy with respect to patients' expressed will, or in the case of patients with dementia who are unable to express their own will regarding therapeutic decisions. By raising ethical issues and bringing the wishes patients expressed to the caregivers' and families' attention, nurses play the role of mediator, as they serve as those who defend the rights and wishes patients expressed and thus, play an active role in their decision-making (Baldwin, 2003; Tiscar-Gonzalez et al., 2020).
This research studied the origin of the ethical problem the participants described as the diverging views between nurses and physicians, as highlighted also by Tracy and O'Grady (2019, p. 313), or between nurses and patients' families on the basis of decisions for clinical proceedings. These divergent perspectives and decisions cause nurses to experience varied emotions that determine their perception of the need to advocate for their patients. The participants identified the problem causing these emotions, which they described in terms of justice, disrespect for patients' autonomy in making decisions, and whether the care undertaken was necessary and good for the person assisted, or posed a risk of engaging in excessive/futile therapy; thus, the nurses tended to describe the problem from Beauchamp and Childress' theoretical perspective on ethical principles (Johnstone, 2016, p. 36). It has also emerged that it is important for nurses to be able to ensure patients' rights and respect their wishes, even if those may conflict with the nurses' value system.
Participants identified the nursing team's development of a common vision of the problem and superiors' active support in engaging in advocacy as factors favourable to advocacy; as suggested also by Abbasinia et al. (2020), superiors' and colleagues' support makes nurses feel more confident in defending patients' wishes before other healthcare professionals or families. As a result, nurses' need for interprofessional collaboration emerged, where a nurse's opinion is considered to a greater extent so they can play an active role in patients' decision-making process. Indeed, health professionals' collaboration is essential, because no single health professional can meet all patients' needs (Matziou et al., 2014). The failure to consider nurses' perspectives not only fails to promote interprofessional collaboration, but also causes nurses to perceive the loss of professional autonomy, to feel belittled continuously as professionals, and frustrated. The findings revealed that protracted frustration leads to emotional resignation, which causes nurses to lose their motivation to engage in advocacy. The participants had the opportunity to reflect on this obstacle, and identified their difficulty in communicating ethical issues that physicians or patients' families need to consider. Indeed, Eriksson et al. (2014) pointed out that the decision-making problem is based on communication barriers.
In this study, nurses perceive that the increased workload in the hospital prevents them from spending enough time with patients, which is perceived as an obstacle to advocacy, as found also by Dadzie et al. (2017). Recently it has been highlighted that communication and information sharing, care planning, discharge planning and decisions, and emotional and psychological care including spiritual support are among the categories of care which nurses miss; factors associated with missed care relate to staffing levels and/or labour resources and skill mix, material resources not being available, patient acuity, and teamwork/communication (Chaboyer et al., 2021). The perception of workload as an obstacle to advocacy is the basis for reflection and further research on why this happens, on the challenges faced in managing hospital activities and on the nursing skills that need to be developed, considering that according to Eriksson et al. (2014) the time limitation is also a reason for the lack of discussion of deeper ethical dilemmas, daily experienced thoughts, and evaluations.
The data collected in our research confirmed the presence of situations in which nurses are confronted with ethical problems related to patients' end-of-life decision making. This is supported by the findings of a national research programme that investigated end-of-life, and found that in one in four cases in Switzerland, the physician did not talk to patients about the end-of-life decision, although they were still capable of discernment. In half of these cases, physicians consulted the family or were aware of the person's wishes regarding the end of life (Leitungsgruppe des NFP 67 Lebensende & Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung, 2017, p. 21). Respecting a dying person's dignity means respecting his/her freedom and self-determination and protecting the lives of particularly vulnerable people (Leitungsgruppe des NFP 67 Lebensende & Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung, 2017, p. 11). Therefore, it is an element of dignity to allow patients capable of discernment to choose freely to determine their circumstances in the final phase of their lives. With regard to this aspect, the participants believe that it is necessary to act to defend patients' rights and wills through advocacy, even if they encounter difficulties in assuming this role.
In light of the connotation that the concept of advocacy takes on in the studied context, it is essential to develop a plan of interventions on the basis of the needs that emerged, which will allow the development of awareness, knowledge and advocacy skills. To reduce the emotional distress associated with living and coping with an ethical conflict, strategies like daily debriefings and consultations with a psychologist about the complexity of relationships with older patients and their families could be implemented (Choe et al., 2018; Dufrene & Young, 2014; Pileggi et al., 2014). According to Helmers et al. (2020) and Stolt et al. (2018), educational interventions are those used most in the development of ethical reflections in clinical practice, and these could address the needs that emerged from the participants of this study, regarding their knowledge about ethics and current legislation related to the care of older hospitalized patients. Furthermore, the nurses' narratives of their experiences allowed for a focussed reflection and analysis, and laid the basis for increasing awareness and knowledge on the topic, as supported by Foley et al. (2002) and Woods (2012).
Nurses consider family conferences to be privileged moments in which they can bring the wishes their patients have expressed to the physicians and the family's attention and, as outlined by Tiscar-Gonzalez et al. (2020), by means of a dialogue that highlights the responsibilities of each. Bianchi et al. (2019) pointed out that the achievement of interprofessional collaboration is highly desirable, as it is a resource not only for patients, but also for professionals, which can increase their professional satisfaction and skills attributable to exchanging views with other professionals. Further, the participants considered the hospital ethics committee's support crucial in the face of ethical dilemmas.
Strengths and Limitations
This research facilitated an awareness and deeper knowledge of the issue and highlighted nurses' need to engage in advocacy in a hospital context, in order to promote patients' satisfaction and safety with full respect of their values, wills, and rights. The study was conducted in a single ward of a regional hospital. Hence, future research that investigates the perspectives of nurses in more departments and hospitals in this context, and includes those of physicians, patients, and their families, would be interesting and useful. Further, as this was a qualitative study, the results can be generalised only to similar experiences in similar contexts. In order to ensure the validity and significance of the findings, the researcher adopted strategies of data triangulation. Furthermore, considering the saturation of data achieved and that the results of this research are consistent with previous literature in nursing (Abbasinia et al., 2020;Arcadi & Ventimiglia, 2017), the findings may be considered relevant also in other contexts.
Conclusions
A strong level of moral sensitivity on the part of nurses faced with ethical problems in the care of older patients approaching the end-of-life emerged from this study, as their narratives told of situations in which it was necessary and appropriate for them to intervene as patients' advocates. Sometimes moral courage was used to raise the ethical problem and bring it to other professionals' attention or make families aware of the wishes patients expressed. Therefore, it is essential for nurses to be aware of their personal level of moral sensitivity to allow them to develop a plan of measures to regain their moral courage to engage in advocacy. Thus, interprofessional ethical leadership that inspires and supports healthcare professionals' daily work and lays the foundations for ethical decision making through patients and families' active involvement is necessary. This consideration underlines the need for the development of educational and managerial strategies, introducing nurses to different moral theories and ethical decision-making procedures in a context supported by values, where nurse leaders can increase the involvement of nurses through a transformational leadership style, in order to support patient advocacy and improve the quality of care (Goethals et al., 2010; Johnstone, 2016, p. 121). After an in-depth reflection on these aspects, a replication of this study is recommended, including multiple acute care settings admitting older adults in several study centres, in order to obtain a deeper understanding of the concept and to identify the most effective strategies in promoting ethical decision making in healthcare.
Implications for Nursing/Clinical Practice
Understanding the importance of advocacy and being aware that it is a responsibility of the nursing profession is extremely helpful for nurses and nursing leaders. Some of the concrete ways of ensuring that nurses' advocacy for older adult patients is always supported and encouraged in health organizations are to sustain ethical leadership among nursing leaders and cultivate moral sensitivity. Educational strategies on ethics, assertive communication and promotion of ethical decision-making models are essential in the healthcare setting. Starting from this, nursing leaders should be careful to ensure that the conditions are in place to express advocacy in clinical care settings.
Strengthening integrated learning: Towards a new era for pluriliteracies and intercultural learning
Abstract The expansion of Content and Language Integrated Learning (CLIL) on a global scale has brought to the fore challenges of how alternative, more holistic approaches to learning might transform classrooms into language-rich transcultural environments. Integrated approaches can offer learners opportunities to engage in meaning-making and language progression through cognitively challenging and culturally-embedded sequenced activities, as reflected in the 4Cs Framework (Content, Cognition, Communication, and Culture). These emphasise classroom language as well as learners' needs to access the variety of language that helps them learn an additional language effectively, as represented in the Language Triptych. However, it is well documented that complex contextual variables make it difficult to realise CLIL's potential. Recent research by the Graz Group into how to better integrate the 4Cs' components has led to development of the Pluriliteracies Framework, in which conceptualization and communication come together and learners are encouraged to language (or articulate) their learning in their own words. This demands new ways of conceptualizing, planning, and sequencing activities that support learners in accessing new knowledge whilst developing existing and new language skills, and these ways must be shared and understood by teachers. The Pluriliteracies model is evolving, and there is a clear need for further work.
Context and Challenges
There have been great advances in a relatively short period of time in terms of creating a dynamic theoretical and practice-oriented foundation for the development of Content and Language Integrated Learning (CLIL) across Europe and increasingly on a global scale. Changing paradigms in educational contexts, often, but not exclusively, based on rapid technological advancement, are leading to unprecedented changes in how our education systems are evolving and how the complex processes involved in learning are being acted out in classrooms. Increasingly, moves towards ensuring our young people in formal schooling are skilled in knowledge construction and meaning-making in order to equip them for an uncertain future are debated and experimented at length. Fullan and Langworthy (2014) conceptualise a case made for 'new pedagogies' where: Teaching shifts from focusing on covering all required content to focusing on the learning process, developing students' ability to lead their own learning and to do things with their learning. (p. 17) It is against this dynamic backdrop where learning and learners are prioritised that approaches to more integrated language curriculum, where meaning-making connected to deepening content learning is also transparently connected to language progression, are rapidly increasing across the world. Content and Language Integrated Learning is one such approach where, according to Dalton-Puffer (2007): Curricula of so-called subjects (e.g. geography, history, business studies) constitute a reservoir of concepts, topics and meanings which can become the object of 'real communication' where natural use of the target language is possible. (p. 3) CLIL started to gain momentum in the 1990s within the European context as a move towards profiting from inherent multilingualism across nations and ensuring that those linguistically and culturally-rich environments for learning are fully utilised and exploited (Lasagabaster and Sierra, 2010). It is well documented (Marsh, 2002; Eurydice, 2006; Coyle and Beardsmore, 2007; see also the overview by Ruiz de Zarobe, 2013) that we are now entering a new era which brings together multiple aspects of learning within which language learning must also be situated into a more coherent whole. This demands 'new thinking' in terms of pedagogy and classroom practices if CLIL is to become genuinely embedded into the regular curriculum anywhere in the world (Meyer et al., 2015).
In 2010, Coyle, Hood, and Marsh published the Cambridge University Press book CLIL, which has the following definition: Content and language integrated learning is a dual-focussed educational approach in which an additional language is used for learning and teaching of both content and language. That is, in the teaching and learning process, there is a focus not only on content and not only on language. (p. 1) This definition, alongside many other similar ones, emphasises the need to integrate language learning with subject learning, with an emphasis on raising awareness of and developing the required skills to successfully learn and teach in these classes. It echoes the Council of Europe's ideal (in the Languages of Schooling, 2010; see Figure 1) of a more holistic view of the languages of schooling, connecting the using and learning of foreign languages, heritage languages, and second languages, illustrated by Orban's (2008, cited in Coyle, 2009) statement that 'we need to have two or more languages in order to know we have one …' As the founding principle of developing a CLIL approach lies in its flexibility to respond to specific contexts for learning, it soon became apparent that for CLIL to be effective it had to be context-embedded and content-driven yet with specifically-determined target language outcomes. Building on the premise that language is our greatest learning tool, CLIL seeks to connect learners to the realities of using different languages at different times for different purposes. This position led to experimentation with different models for CLIL. As Hugo Baetens-Beardsmore (1992) famously stated, "there is no universally applicable theory of bilingual education and no given model, no matter how successful, is for export" (p. 274).
As CLIL programmes flourished there was increasing flexibility of length of programmes, language(s) targeted, the age and linguistic proficiency of the learners as well as the subject matter and content. Increasingly, questions were being debated about the nature of CLIL; the list is extensive, but some prominent concerns follow:
• Is a programme more content-oriented than language-oriented or somewhere in between? What are the implications? Where does integration fit in? Cross-curricular themes rather than a defined discipline such as History?
• Where are the resources?
• As a teacher new to CLIL, how do I know how to plan, monitor and assess teaching and learning?
If the overarching goal of our teaching and learning is to envision future global citizens to communicate and learn effectively in more than one language, then it becomes increasingly clear that fundamental changes to classroom practices, based on a changing mind-set and understanding by teachers, are needed. I shall illustrate this point with a quotation I regularly use: Too much attention is directed towards finding the 'best method', even though fifty years of educational research has not been able to support such generalisations. Instead, we should ask which methods or combination of methods is best for which goals, which students and under which conditions. (Dahllof, 1991, p. 148) The conditions alluded to and the spaces which constitute them, however, are rapidly requiring those who work and learn in them to become increasingly plurilingual, defined in the Common European Framework of Reference for Languages (Council of Europe, 2000) as an individual's ability to 'use several languages to varying degrees and for distinct purposes' (p. 168) across several cultures. Garcia (2009) supports this in her reference to valuing plurilingualism because 'it extends mastery of two or more standard languages to include hybrid language practices' (p. 55).
The need to focus attention on developing learners' plurilingual and pluricultural competences leads Stigler and Hiebert (1999) to remind us: If you want to improve the quality of teaching, the most effective place to do so is in the context of a classroom lesson…. The challenge now becomes that of identifying the kinds of changes that will improve learning for all students… of sharing that knowledge with other teachers. (p. 131) It is the emphasis on changes to classroom practices, the underlying pedagogic principles used to guide learning and teaching and the shared ownership of a vision for those evolving practices that are our greatest challenges. Unravelling the principles underlying changes to pedagogic practices required for successful CLIL will now be considered.
Method
The 4Cs Conceptual Framework was developed in the 1990s by Coyle et al. (2010) working with a range of CLIL teachers in a range of contexts in order to provide a guide for emphasising the fundamental elements of CLIL (Coyle, 2002, 2007, 2010; Llinares et al., 2010). It was a means of enabling both language teachers and subject teachers to be supported in a basic understanding that CLIL was not about deciding which content or which language needed to be taught but involved a much deeper and complex conceptualisation of learning including cognitive demands and intercultural understanding. The visualisation of the 4Cs (see Figure 2) identifies key components of CLIL set within the context in which it is played out as: content, cognition, communication and culture.
Content refers to the subject or theme of the learning in any curriculum which ranges from subject disciplines such as Science, History and Geography to cross-disciplinary themes such as global citizenship, sustainability, or community development. It involves curricular knowledge and understanding.
However, content cannot be considered in isolation but as part of any learner's cognitive development and intercultural understanding. Cognition or cognitive development in this sense relates to the cognitive level of the learning, one of the clearest examples being the level of thinking that CLIL tasks demand in relation to the content. This can be illustrated by using Anderson and Krathwohl's (2001, pp. 67-8) revised version of Bloom's Taxonomy (1956) to plan how tasks which target the development of content understanding involve developing learners' higher thinking and problem solving skills. Rooted in social constructivist principles of learning, deep learning involves social settings where learners are enabled to articulate their learning before internalising their own interpretation of these concepts on an individual basis. These processes are fundamental to meaning-making, a case of How do I know what I know 'til I hear what I say? Planning for higher-order thinking and deep learning has not traditionally been in the repertoire of language teachers who have drawn extensively on Second Language Acquisition theories for language learning for decades. Whilst subject teachers may be familiar with concept formation and problem-solving, the way in which these link to language is less likely to be part of planning. The dilemma is exacerbated by the challenge that for many CLIL learners their linguistic level in the CLIL vehicular language is likely to be lower than their cognitive 'learning' level. Yet as a core principle in CLIL classrooms, the cognitive level at which learners operate in L1 cannot be compromised.
This leads us on to Communication, since it is language that cements meaning-making and understanding (cognition) of the subject matter (content knowledge) with the language used to learn, to communicate and to externalise and internalise understanding. Communication is the language that is used to construct knowledge, used for meta-cognitive and communicative purposes as well as reflective intervention (Bruner, 1982) on learning. Perhaps it is helpful to emphasise the difference between language using and language learning since both are required in the CLIL classroom. Language teachers are familiar with language learning often based on grammatical progression and communicative development. However, I would argue that in general neither language teachers nor subject teachers are familiar with the need to consider the role of language using for learning; that is, when the language is both the medium and the message. Grammatical chronology does not provide the wealth of language required for CLIL learners to access the discourse integral to learning Science or History when it is needed. Disciplines have their own discourse patterns (academic literacies), which are specific to that discipline, as well as a requirement that meta-cognition or learning how to learn also relies on linguistic functions not in the usual experiences of the school-based language lesson. Teachers, therefore, are faced with the need to reconceptualise practices if in CLIL settings language is considered both a learning tool and a communication tool.
The Language Triptych (Coyle et al., 2010), as shown in Figure 3, goes some way to drawing attention to this dilemma by bringing together 'content-obligatory', 'content-compatible', and 'content-enriching' language into a visual which focuses attention on identifying the language needed for learning as follows:
• language of learning: content-obligatory language; that is, the key phrases, expressions, lexis, and content-specific language.
• language for learning: content-compatible language, which focuses on all the language required for enabling learning to happen in class; for example, task-specific language (such as that required to work in a group).
• language through learning: content-enriching language, which is the language linked to deeper conceptual understanding on an individual level (that learners need to articulate in order to reiterate their own learning).
The fourth 'C' connects cultural and intercultural understanding to learning in contexts where more than one language is being used. 'Culture' is a complex phenomenon open to wide interpretation (Eagleton, 2000).
Cultural patterns, customs, and ways of life are expressed in language: culture-specific world views are reflected in language…. (L)anguage and culture interact so that world views among cultures differ and that language used to express that world view may be relative and specific to that view. (Brown, 1980, p. 138) Moreover, building on previous arguments, developing plurilingual competence in learners will also involve raising pluricultural awareness in order to enable individuals to work, learn and communicate successfully. In the European Centre for Modern Languages (ECML) publication Plurilingual and Pluricultural Awareness in Language Teacher Education: A Training Kit, edited by Bernaus et al. (2007), these competences lie at the core of twenty-first century learning: Plurilingual and pluricultural competence is not achieved by overlapping or juxtaposing different competences; rather it constitutes a global and complex competence of which the speaker can avail himself or herself in situations characterised by plurality. (p. 17) However, in CLIL contexts there is not only a sense of broader societal cultures that are inextricably connected to language use, but in addition the academic culture associated with individual subjects or disciplines. Hence the focus is also on the role of culture in learning. Within the paradigm of socio-cultural theory, culture underpins both language and cognition since it is through 'languaging' or 'putting into our own words' individual thinking that learners develop conceptual understanding. This in turn is embedded in the cultural context of learning and the ways in which particular disciplines use language. In other words, language is part of an individual's 'linguistic DNA' that is context-related and culturally mediated.
Hence, the 4Cs Framework provides a means of guiding the foundations for learning, which conceptually go beyond a simplistic emphasis on the language and content of learning, and draws upon the need to develop greater intercultural awareness and academic reading and writing skills as learners progress.
A Paradigm Shift for Integrated Learning
Over the years, whilst clearer guiding principles have emerged relating to CLIL classroom practices substantiated by a variety of research studies, both supporting CLIL and raising concerns, an increasing awareness of the need to understand better the nature of integrated learning brings into question the 'how' (see for example Coyle, 2011; Dalton-Puffer, 2007). Increasingly, questions are raised about the effectiveness of CLIL and the quality of the pupils' classroom learning. The shifting sands of the learning agenda, from knowledge transmission to meaning-making whilst using more than one language, are increasingly being brought under the microscope. Moreover, it can be argued that whilst the 4Cs Framework guides the what of CLIL it does not provide the how (of integration) (see for example Llinares et al., 2010; Meyer et al., 2015). The following questions are raised:
• Content: What is content knowledge? Who owns it? How is it shared? What are the differences between meaning-making and knowledge transfer?
• Cognition: Can progression in meaning-making using cognitive, social, and linguistic resources be separated from content and language use?
• Communication: How can we support language learning and using in a CLIL context where language mediates and structures learning in culturally determined ways?
• Culture: How can using a broader societal and academic subject lens that puts cultural and intercultural references at the core be made more explicit and supported further?
Fundamental questions such as these require a paradigm shift, one where interconnectedness is at the core and where there is a shared understanding of integrated learning. The Graz Group (2014), a team of CLIL researchers funded by the European Centre for Modern Languages (Council of Europe), is currently tasked with reframing integrated approaches through a new dynamic model (Meyer et al., 2015). This brings together two crucial processes: learner progression in knowledge construction and meaning-making, while language using and development make these happen.
However, changing the pedagogic focus brings into question debates that have dominated the CLIL agenda for several decades. These essentially are to do with the role and nature of language in integrated learning such as redefining the place of grammar, the conflict between a focus on meaning and a focus on form, and the role of language error correction. Mohan and Beckett (2003) take a hard line: We are not aware of any evidence or explicit and detailed claims that the correction of errors of grammatical form is a sufficient [. . .] However, Van Lier (1996) usefully suggests that: We should not let ourselves be trapped inside a dichotomy between focus on form and a focus on meaning but rather a focus on language… in practice it becomes impossible to separate out form and function neatly in the interactional work that is being carried out. (p. 203) Yet if concept development and knowledge construction are at the core of CLIL, these require different kinds of language that do not depend only on grammatical knowledge and understanding which underpin much of the tradition of language learning. Moreover, the type of language required involves an awareness and understanding of the academic discourses that drive them. This is also referred to as academic literacies. Yet in subject or content learning, the development of academic literacy skills is not usually made transparent in more subject-oriented classes, especially in the foreign or second language. It would seem, therefore, that conceptual progression and the language used to enable that to happen are rooted in neither the traditions of language learning nor subject learning and hence are rarely explicitly taught at any level or context. Vollmer's (2008) work into the development of academic language with both L1 and L2 learners in CLIL settings corroborates this view.
Both groups of learners show considerable deficits in their academic language use…. the specific competences in handling the language dimensions adequately and in expressing their thoughts and findings appropriately or functionally according to the genre(s) demanded are equally low, they show a serious lack of command over a sensitivity for the requirements of academic language use, both in L2 and in L1. (p. 272) The need to shift the pedagogic paradigm in which CLIL is situated emerges as a priority, as Wolfe and Alexander (2008) summarise: "Argumentation and dialogue are not alternative patterns of communication; they are principled approaches to pedagogy" (p. 15).
An Evolving Pluriliteracies Approach
According to Bonnet (2012), a deeper understanding of how effective integration of content learning and language learning can be conceptualised is starting to emerge. For example, the Graz Group (see previous reference) has chosen an alternative lens through which to explore integrated learning. With a particular focus on the development of academic literacies to support progression in conceptual understanding, academic discourse is used as a filter for cultural and intercultural learning that draws on literacy development. Literacy, in this sense, can be defined as 'control of "secondary discourses"' (Gee, 1989, p. 542) and across languages is the ability to 'think about and analyse texts critically, master sophisticated language and convey appropriate content and recognise how meanings are made within a wide range of texts ... and discourse communities' (Crane, 2002, p. 67).
However, when literacy development transcends languages then a pluriliteracies approach begins to take shape: A pluriliteracies approach focuses on developing literacies for purposeful and appropriate meaning-making in subject disciplines/thematic studies across languages and cultures. It is predicated on the principle that the primary evidence of learning is language (Mohan), which in turn mediates and structures knowledge in culturally determined ways. (The Graz Group, 2014) By putting pluriliteracies at the heart of our approach to learning, there is a focus not only on enabling and empowering the learner to purposefully communicate across languages and cultures (academic as well as social) but also on promoting the essential role of language in shaping students' thinking and learning. From this perspective, integration consists of two inter-related continua: conceptual development and language development (see Figure 4). Conceptual development draws on Polias (2007, p. 46) and Veel (1997), where four major activity domains are identified to demonstrate progression in learner knowledge construction. For example, if learners are working on scientific concepts, the four progressive domains are as follows: doing science (procedure, procedure recount); organising science (descriptive and taxonomic report); explaining science (sequential, causal, theoretical, factorial, and consequential and exploration); arguing/challenging science (exposition, discussion). Each of these progressive domains is built on the principle that progression will demand not only increasing cognitive demands but also linguistic and culturally embedded language in order to move along the communicating continuum.
Drawing on the work of Halliday (Halliday and Matthiessen, 2004), a Systemic Functional Linguistic framework helps to identify the kind of interpersonal language needed to articulate learning in different academic settings, such as Science or History; that is, the use of language to 'understand and express attitudes towards the academic content' (Llinares, Morton, & Whittaker, 2012, p. 220). Progressing along the conceptual continuum, as illustrated in Figure 4, involves language which connects to developing the language or genre referred to above in the four domains, the mode which learners are required to use (for example, speaking, writing, and image), the style required (for example, formal/informal) and the purpose. From this perspective, the communication continuum provides a language model for how form (language) and meaning (content) are inter-related complex resources rather than seeing language progression as a transition from errors to correct form.
This theoretical model (see Figure 4) focuses on the spaces that are created at the intersection of the two continua. These spaces progress from novice or beginner to intermediate and expert.
The pluriliteracies model therefore is built on the following tenet: If the ability to successfully navigate multimodal representations of knowledge is indeed fundamental to the process of meaning-making and knowledge construction and thus to the acquisition of subject-specific literacies required to progress along the knowledge pathway. (Meyer et al., 2015) This approach challenges the dominant language learning model based on grammatical chronology and instead takes an alternative pathway for identifying the kind of language which learners will need in their CLIL context. However, this does not mean that grammar has no role to play, but that grammar is no longer the filter through which language is selected for learning. This model is evolving and being developed and experimented with by teachers and their learners in diverse contexts. Interestingly, findings so far indicate that approaches involving literacies impact on the learners' first as well as additional languages, thus reinforcing the principle that CLIL teaching is 'good teaching' impacting across the curriculum. The challenge for us all as CLIL teachers, teacher educators, researchers and learners is to develop together pedagogic approaches that integrate content and language in ways that lead to independent successful learners able to be pluriliterate citizens in tomorrow's world. A quotation from Fullan and Langworthy (2014) opened this article and similarly will draw it to a reflective and challenging conclusion. This article suggests that CLIL has a genuine contribution to make.
Our schools and our pedagogies need to inspire and to ensure that all students are capable of independent learning and purposeful action in the world, and have not only the foundation but also the practical experiences and technical skills to create valuable futures for themselves and their societies. (p. 78)
EFFECTS OF WEED SPECIES COMPETITION ON THE GROWTH OF YOUNG COFFEE PLANTS
This work aimed to evaluate the effects of competition from seven weed species on the growth of coffee plants grown in a greenhouse. Thirty days after transplanting the coffee seedlings into pots containing 12 L of substrate and with a soil surface area of 6.5 dm², the weed species were transplanted and/or sown in these pots at six densities (0, 1, 2, 3, 4, and 5 plants per pot). The coexistence periods, from transplanting or emergence of the weeds until plant harvest at weed flowering, were 77 days for Bidens pilosa, 180 days for Commelina diffusa, 82 days for Leonurus sibiricus, 68 days for Nicandra physaloides, 148 days for Richardia brasiliensis, and 133 days for Sida rhombifolia. Plant height, stem diameter, number of leaves, and shoot dry mass of the coffee plants were evaluated. The effects of competition from N. physaloides and S. rhombifolia on coffee plants were the smallest compared with those caused by the other weed species, since only slight decreases in all evaluated traits were observed in the coffee plants. The other weed species caused severe reductions in coffee plant growth, especially as their density increased. The degree of interference varied with weed species and density.
INTRODUCTION
Competition (which represents the negative effect of plant interaction) is the most studied type of interference between plants (Radosevich et al., 1996). Competition is a biological interaction between at least two plants for limited resources (mainly light, water and nutrients) (McNaughton & Wolf, 1973). Resource limitations can be caused by unavailability, poor supply, or proximity to neighbouring plants, which ultimately can aggravate an already insufficient resource or create a deficiency where ample resource was available for a single individual (Radosevich et al., 1996). In fact, competition among weeds and crops affects both types of plants; nevertheless, weeds almost always have a deleterious effect on crops (Pitelli, 1985).
In addition to these concepts, knowing the factors that affect the degree of weed competition, and quantifying them, is very important in interference studies. Such information allows growers to intervene and alter the competition balance, helping the crop to obtain the resources it needs (Blanco & Oliveira, 1978). The degree of competition can be measured as the percentage of economic crop yield reduction induced by the weeds (Pitelli, 1985). The weedy (or critical competition) period and weed density are among the most important of the several factors affecting competition degree. The former addresses the time period in the crop life cycle in which weed competition occurs and during which weeds should be controlled to prevent yield losses (Blanco & Oliveira, 1978; Pitelli, 1985). The latter, representing the number of plants per unit area, is also important in competition studies because of the relationship among crop yield, number of individuals, and resources available in a particular area (Blanco, 1972; Radosevich et al., 1996; Radosevich, 1987).
Coffee plantations, especially Coffea arabica L., are the most important crop in Brazil because of their high economic value and employment generating capacity (Embrapa, 2004). In addition, Brazil ranks first in world coffee production and export. A cultivated area of 2.337 million hectares with 5.387 billion coffee plants and a production of 2.148 million tons have been estimated for 2004/2005 (Conab, 2004). Coffee is a perennial crop grown in rows and may remain productive for up to 30 years. As a result of weed competition, coffee yield and quality are seriously decreased, and weed control is one of the major cultural operations, entailing high cost. Different crop yield losses due to weed competition have been observed, such as 77% (Blanco et al., 1982), 55% (Oliveira et al., 1979), 65% (Eshetu, 2001), 52% (Pereira & Jones, 1954), 28% (Merino et al., 1996) and 24% (Moraima et al., 2000). In addition to yield losses, several other harmful effects of weed competition on this crop are discussed elsewhere (Friessleben et al., 1991; Toledo et al., 1996; Njoroge, 1994; Ronchi et al., 2001; Silva & Ronchi, 2003), including weeds serving as alternative hosts of the coffee strain of Xylella fastidiosa, which causes coffee leaf scorch (Leite Júnior & Nunes, 2003; Lopes et al., 2003), and weeds possessing a greater nutrient competitive potential than the coffee plants (Gallo et al., 1958; Ronchi et al., 2003).
Critical periods of weed competition in coffee plantations have been determined under different coffee production conditions and locations. In Brazil, such a period was shown to last from October to March (Oliveira et al., 1979; Blanco et al., 1982) and, in Venezuela, from May to September (Moraima et al., 2000). In both locations, and also in India (Pereira & Jones, 1954), such critical periods comprise the rainy season, coinciding with crop fructification. On the other hand, a critical period of weed competition occurring during the dry season (from November to April) was reported in Cuba (Friessleben et al., 1991) and El Salvador (Merino et al., 1996). Although substantial data are available on the critical period of weed competition for this crop (at the reproductive stage), little is known about weed density, which is another important factor affecting competition degree or intensity. Moreover, just after transplanting in the field, young coffee plants seem to be highly sensitive to weed competition, since weed control in the coffee rows is an agronomical practice usually employed by growers (Ronchi et al., 2001; Silva & Ronchi, 2004). Nevertheless, the effects of weed competition on young coffee plants have been scarcely studied (Dias et al., 2004).
Several methods have been developed to study competition among different species of plants, each constituting a bioassay in that the response of one species is used to describe the interference of the other. The additive method is perhaps the most common approach used to study weed-crop relationships (Radosevich, 1987; Cousens, 1991; Radosevich et al., 1996). In the additive method, two (or more) plant species (the crop and the weed) are grown together. The density of one species, e.g., the crop, is almost always kept constant, while the density of the other is varied. The species whose density is not changed acts as a comparative indicator of the aggressiveness and competitiveness of the other species. The objective of this study was to use the additive method to determine the competition effects of several weed species on the growth of coffee plants. We hypothesized that the degree of weed competition against young coffee plants depends on weed species and density.
General
The experiment was conducted in a greenhouse in Viçosa (20º45'S, 42º55'W; 650 m asl), south-eastern Brazil. Plants of Coffea arabica L. cv. Red Catuaí, with five leaf pairs, were transplanted into 12 L pots filled with a mixture of soil and organic matter (3.5:1, v/v). The soil was a yellowish Red Podzolic, 51% clay, pH 4.9, with an organic matter content of 2.95%, and was fertilized with 1.0 kg m⁻³ of P₂O₅ and 3.6 kg m⁻³ of dolomitic limestone. Fifteen and 60 days after transplanting, 3.0 g N were applied to each pot. Plants were irrigated daily with an automatic sprinkler system to maintain soil moisture at pot capacity and to prevent competition for water.
Treatments and data collection
Six weed species (Table 1) commonly found in Brazilian coffee plantations (particularly Bidens pilosa, Commelina decumbens and Leonurus sibiricus; Ronchi et al., 2001) were grown separately in pots, each containing one coffee plant. Each weed species was established at six densities (0, 1, 2, 3, 4 and 5 plants per pot, i.e., six treatments), with four replicates. Pots were distributed in the experimental area in a completely randomised pattern. Each experimental plot consisted of one pot, in which the soil surface area was 0.065 m². Thus, the weed density range established in the pots corresponds to field densities of approximately zero to 75 plants per square meter. Thirty days after the coffee plants were transplanted, weed seeds (except for the seedlings of Commelina diffusa, which were obtained from a stem segment) were sown by hand directly into the pots, and the densities were established by thinning after weed emergence. The weedy periods (Table 1) for each species comprised the period between weed emergence (or transplanting, for C. diffusa) and the pre-flowering or flowering stage, when the experiments were discontinued. This stage was chosen because nutrient absorption and accumulation (hence, competition) reach their maximum levels when plants are about to enter their reproductive phase (Singh & Singh, 1938). At that time, coffee plant height, stem diameter (5 cm above ground) and leaf number were determined. Both weed and coffee plant shoots were harvested above soil level and oven-dried for 72 h at 70 ºC to determine shoot dry matter.
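For reference, the conversion from per-pot densities to the equivalent field densities quoted above follows directly from the pot surface area; the short sketch below (Python, not part of the original study) reproduces the arithmetic.

```python
# Convert per-pot weed densities to equivalent field densities.
POT_AREA_M2 = 0.065  # soil surface area of each pot, as stated in the text

def field_density(plants_per_pot: int, pot_area_m2: float = POT_AREA_M2) -> float:
    """Equivalent number of plants per square meter of field."""
    return plants_per_pot / pot_area_m2

for n in range(6):  # experimental densities of 0 to 5 plants per pot
    print(f"{n} plants/pot ~ {field_density(n):.0f} plants/m2")
# 5 plants per pot ~ 77 plants/m2, consistent with the range of roughly
# 0-75 plants per square meter mentioned above.
```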
Statistical analyses
Data fitness for analysis of variance was checked by graphical analysis of the residuals, including the Hartley test for error homogeneity (Neter et al., 1990). Coffee plant height, stem diameter, leaf number and shoot dry matter of both coffee plants and weeds were submitted to ANOVA and then to regression analysis. Significant models were fitted to the data using weed density as the independent variable. Although linear models (Y = β₀ − β₁X) were used, negative exponential models (Y = β₀·e^(−β₁X)) were preferentially tested, since they best represent the decrease in plant yield or growth with increasing weed density (Cousens et al., 1984; Aldrich, 1987; Radosevich, 1987). Correlations (Pearson's parametric method) of coffee plant height, stem diameter, leaf number and shoot dry matter with the shoot dry matter of the several weed species, at the density of one plant per pot, were tested using the F-test at P < 0.01. All the statistical analyses were performed using the SAEG System (SAEG, 1997).
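As an illustration of the model-fitting step described above, the sketch below uses Python (NumPy/SciPy) rather than the SAEG System employed by the authors, and the response values are placeholders rather than data from the paper; it fits both the linear and the negative exponential model to a growth variable measured at the six weed densities and computes a Pearson correlation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

# Weed densities used in the experiment (plants per pot).
density = np.array([0, 1, 2, 3, 4, 5], dtype=float)

# Hypothetical mean coffee shoot dry matter (g) at each density;
# placeholder values only, not data from the paper.
coffee_dm = np.array([30.0, 25.5, 21.8, 18.9, 16.4, 14.2])

def linear(x, b0, b1):
    """Linear model: Y = b0 - b1*X."""
    return b0 - b1 * x

def neg_exp(x, b0, b1):
    """Negative exponential model: Y = b0 * exp(-b1*X)."""
    return b0 * np.exp(-b1 * x)

(lin_b0, lin_b1), _ = curve_fit(linear, density, coffee_dm)
(exp_b0, exp_b1), _ = curve_fit(neg_exp, density, coffee_dm, p0=(coffee_dm[0], 0.1))

print(f"linear fit:      Y = {lin_b0:.2f} - {lin_b1:.2f}*X")
print(f"exponential fit: Y = {exp_b0:.2f}*exp(-{exp_b1:.3f}*X)")

# Pearson correlation between coffee and weed shoot dry matter
# (the paper computed this at the one-plant-per-pot density across
# replicates; the paired values below are hypothetical).
weed_dm = np.array([4.1, 3.6, 5.0, 4.4])
coffee_dm_rep = np.array([26.2, 27.0, 24.8, 25.9])
r, p = pearsonr(coffee_dm_rep, weed_dm)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```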
Coffee stem diameter
No significant effect (P>0.05) was observed from increasing densities of Brachiaria decumbens, Commelina diffusa, Nicandra physaloides and Sida rhombifolia on the stem diameter of coffee plants, grown in the same pot with coffee plants for 98, 180, 68 and 133 days, respectively (Table 2). In contrast, as Bidens pilosa and Richardia brasiliensis densities increased, coffee stem diameter decreased. Such a decrease was also observed for Leonurus sibiricus competition, but the crop stem diameter decrease was exponential (Table 2). Among these weed species, B. pilosa decreased coffee stem diameter the most. From the regression equation shown in Table 2, it was possible to estimate a stem diameter reduction of 29% after a weedy period of 77 days, with five plants of B. pilosa per pot, as compared to the weed-free crop treatment. Large differences between the weed species were found for coffee plant diameter (and also plant height, leaf number and dry matter) when weed species density was zero (data not shown). This occurred because the growth period duration differed between the weed species. Four replicates of zero weed density were established for each species.
Coffee plant height and leaf number
Only B. pilosa and C. diffusa caused a decrease in coffee plant height, showing a linear effect (P<0.01) as weed densities increased (Table 2). Among all the evaluated weeds, only N. physaloides and S. rhombifolia did not significantly (P>0.05) promote a decrease in leaf number (Table 2). The strongest reduction in leaf number was caused by C. diffusa competition: based on the equation in Table 2 for the relationship between coffee leaf number and C. diffusa density, a reduction of 88% was estimated when comparing coffee plants without competition to those competing with five plants of C. diffusa.
Shoot dry matter
Besides the significant (P<0.01) negative exponential effect of C. diffusa density on coffee shoot dry matter (Table 2), leaf abscission contributed to the low values observed for that characteristic. N. physaloides and S. rhombifolia were the species whose increasing density in the pots did not significantly (P>0.05) decrease coffee shoot dry matter (Table 2). Nevertheless, shoot dry mass of those weeds increased exponentially (P<0.01) as their densities increased, up to the maximum density of five plants per pot (Table 3). On the other hand, there was a significant effect of L. sibiricus and R. brasiliensis densities on the reduction of coffee plant dry matter (Table 2), though the dry matter of these weeds did not increase with density (Table 3). Overall, these results explain why no significant (P>0.05) linear correlation was found between the shoot dry mass of the coffee plants and that of N. physaloides, S. rhombifolia, L. sibiricus and R. brasiliensis (Table 4).
Shoot dry matter of the coffee plants was exponentially reduced (P<0.01) with increasing B. pilosa density (Table 2), with the opposite occurring for the shoot dry matter of the latter (Table 3). This result produced a significant (P<0.01) negative linear correlation (r = -0.70) between coffee and B. pilosa shoot dry matter (Table 4). Taking into account the equations in Table 2 and the densities of zero and five plants per pot, coffee shoot dry mass reductions due to weed presence could be estimated. These values were 46, 61, 64 and 72% due to competition with L. sibiricus (during 82 days), R. brasiliensis (148 days), B. pilosa (77 days) and C. diffusa (180 days), respectively.
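The percentage reductions above follow from evaluating the fitted equations at densities of zero and five plants per pot. A minimal worked example (Python; the coefficients are hypothetical placeholders, since the Table 2 equations are not reproduced here) shows the calculation for both model forms.

```python
import math

def percent_reduction(y_weed_free: float, y_at_density: float) -> float:
    """Percent reduction relative to the weed-free (density 0) estimate."""
    return 100.0 * (y_weed_free - y_at_density) / y_weed_free

# Linear model Y = b0 - b1*X, evaluated at X = 0 and X = 5.
b0, b1 = 30.0, 2.5  # hypothetical coefficients
print(f"{percent_reduction(b0, b0 - b1 * 5):.1f}%")

# Negative exponential model Y = b0*exp(-b1*X): b0 cancels out, so the
# reduction at density X depends only on b1.
b1_exp = 0.25  # hypothetical coefficient
print(f"{100.0 * (1 - math.exp(-b1_exp * 5)):.1f}%")
```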
Unexpectedly, the shoot dry mass of the coffee plants as a function of B. decumbens density fitted a statistical model different from those observed for the other species. When fitted to a quadratic model, shoot dry mass initially decreased as weed density increased, reached a minimum value (16.7 g) at an estimated B. decumbens density of 3.44 plants per pot, and then rose again (Table 2). This was probably because B. decumbens shoot dry matter slightly decreased at densities greater than 3.8 plants per pot (Table 3).
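The 3.44 plants-per-pot estimate is simply the vertex of the fitted quadratic; the short sketch below (Python, with hypothetical coefficients chosen only to reproduce the reported minimum, since the actual equation is in Table 2) shows how such a value is obtained.

```python
# For a quadratic fit Y = b0 - b1*X + b2*X**2 (with b2 > 0), the minimum
# lies where dY/dX = 0, i.e. at X = b1 / (2*b2).
def quadratic_minimum(b0: float, b1: float, b2: float) -> tuple[float, float]:
    x_min = b1 / (2.0 * b2)
    y_min = b0 - b1 * x_min + b2 * x_min ** 2
    return x_min, y_min

# Hypothetical coefficients, chosen only so the output matches the
# reported minimum of about 16.7 g near 3.44 plants per pot.
x_min, y_min = quadratic_minimum(b0=28.0, b1=6.57, b2=0.955)
print(f"minimum of {y_min:.1f} g at {x_min:.2f} plants per pot")
```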
DISCUSSION
The additive method herein applied to study weed-crop relationships simulates a situation in which the crop is exposed to several levels of weed infestation. This method relates the weed infestation levels to crop yield reduction and can assist coffee growers in deciding whether weed control is economical (Radosevich, 1987). Moreover, the method allows the evaluation of the competitive potential of different weed species, indicating the most aggressive ones at a specific density in a given crop.
The harmful effects of weed competition on coffee plant growth, or the degree of weed competition, varied greatly depending on both the weed species and their density. N. physaloides and S. rhombifolia had little or no effect on the growth of coffee plants, even at the highest density employed in these experiments. On the other hand, B. pilosa, C. diffusa, L. sibiricus and R. brasiliensis markedly reduced the growth of coffee plants, as indicated by stem diameter, plant height, leaf number and shoot dry matter. Moreover, this reduction was linear or exponential with increasing weed density. Indeed, C. diffusa (and C. benghalensis), followed by B. pilosa, are weeds widely dispersed in Brazilian coffee fields (Blanco et al., 1982; Ronchi et al., 2001). In Cuba, Friessleben et al. (1991) reported that weed competition imposed on two- or three-year-old field-grown Coffea arabica, at the critical period of weed competition (during crop fructification), significantly reduced stem and crown diameter, plant height, number and length of plagiotropic branches, node formation on primary branches and coffee yield. Moreover, stem diameter (which was reduced by 22%) was found to be the best indicator of weed competition for coffee plants not older than three years (Friessleben et al., 1991). Oliveira et al. (2002) reported that competition from Commelina spp. led to a reduction in leaf number, plant height and stem diameter of C. arabica, after this weed had been grown at several densities for 150 days following coffee transplanting into pots.
Adverse weed effects on coffee growth were probably brought about through competition mainly for essential nutrients (Gallo et al., 1958; Njoroge, 1994) and light (Blanco et al., 1982; Castro & Garcia, 1996), since soil moisture was almost constantly available. In this study, B. decumbens, B. pilosa and L. sibiricus probably imposed the strongest competition for light, since they quickly developed simultaneously in leaf area and height (showing a dense canopy), factors that allow weeds to be better light competitors (Walker et al., 1988).
Despite the direct effect of competition (for light and nutrients) on coffee leaf number, other factors could have contributed to the strong reduction in leaf number of coffee plants under C. diffusa interference, such as attack by Cercospora coffeicola, whose symptoms were evident on coffee leaves at the end of its weedy period. According to Zambolim et al. (1997), water and nutritional stresses predispose coffee plants to severe attack by that pathogen, leading to other symptoms, such as leaf shedding. Therefore, one may suppose that young coffee plants under C. diffusa competition were more sensitive to C. coffeicola attack; hence, this was an indirect negative effect of weed interference on coffee plants. In addition, the long weedy period (six months) of C. diffusa could have favoured the reduction in coffee leaf number. It is likely that the crop had been under either direct or indirect effects of weed competition for too long.
Dry matter accumulation of individual weed plants was found to decrease with increasing weed density, so that the final dry matter production per pot was about the same at low or high weed densities. According to Radosevich et al. (1996), this phenomenon occurs because the amount of growth by individual plants decreases in a plastic manner as density increases: at low density, total yield per unit area is determined by fewer, larger plants, while at high density it is determined by many small ones. In this experiment, pot size probably contributed to nutrient competition because of root growth constraint within a small soil volume. Moreover, taking into account that interference among neighbouring plants occurs after a specific weed density has been reached (Aldrich, 1987), in addition to crop-weed competition, intraspecific competition among individuals of the same weed species had certainly also occurred, mainly at the higher densities.
Among the weeds studied, B. pilosa was the only species that caused a constant decrease in all the coffee plant characteristics evaluated as density increased. Moreover, B. pilosa was the only weed whose shoot dry matter correlated significantly and negatively (P<0.01) with the coffee plant parameters, including stem diameter, leaf number, plant height and shoot dry matter (Table 4). Therefore, B. pilosa stood out as possessing the highest competitive potential against coffee plants, probably due to its high nutrient uptake ability (Ronchi et al., 2003). Thus, even at low densities within the crop row, during the early growth phases (after transplanting), B. pilosa may cause an initial crop growth reduction, delaying crop establishment and the time taken by the plants to reach maturity, and probably also reducing their bearing capacity.
Under the present experimental conditions, controlling weeds within the crop rows is highly recommended, especially if they occur at high densities in coffee fields, in order to prevent weed competition, probably for nutrients (Ronchi et al., 2003) and light, and hence crop growth reduction. In this study, the effect of weed competition may have been overestimated due to pot size. Although, under field conditions, soil volume restriction to root growth is probably much lower than that observed in the pots, the occurrence of common biotic (pathogen and insect attack) and abiotic (water deficit) stresses might aggravate weed competition against coffee plants. Moreover, weed densities in young field coffee plantations are usually much higher than those studied here, which could lead to a degree of competition as high as that reported here. Although weeds possess some important agronomic characteristics (e.g., recycling nutrients from the soil profile), they should not be allowed to thrive near coffee plants because they reduce coffee growth. Further research on weed competition against young coffee plants under field conditions is of major importance to improve coffee crop management.
Table 1 - Weed species and weedy periods of coexistence with the coffee plants
Table 3 - Regression models fitted to statistically significant weed shoot dry matter data, using weed density (X) as the independent variable. *, ** represent significance of the F-test at P < 0.05 and P < 0.01, respectively.
Table 4 - Simple linear correlations between stem diameter (STD), plant height (PLH), leaf number (LEN) and shoot dry matter (SDM) of coffee plants and the shoot dry matter of several weed species, at the density of only one plant per pot
A TYPOLOGY OF “INFRASTRUCTURE OF THE MIDDLE” IN UNIVERSITY FOOD PROCUREMENT IN ENGLAND AND CANADA: ELABORATING THE “TO” IN “FARM TO CAFETERIA”
ABSTRACT. This article introduces a new term – "infrastructure of the middle" – and explains how it helps understand how sustainability transition will happen in the food system. The evidence comes from 67 interviews with leaders of university food procurement initiatives in England and Canada. As founder and former president of the civil society organization which played a central role in the Canadian example, I bring a perspective informed by praxis, both as a practitioner and as a scholar applying Sustainability Transition Theory. I adapted the term infrastructure of the middle from Kirschenmann et al.'s concept of "agriculture of the middle", which describes the midsize farms and ranches most at risk in a globalized food system. Infrastructure of the middle refers to the resources and networks that create a critical mass, enabling mid-size sustainable food producers to meet the needs of foodservice clients, especially public sector institutions.
INTRODUCING "INFRASTRUCTURE OF THE MIDDLE"
This article introduces a new term -infrastructure of the middle -and explains how it helps understand how sustainability transition will happen in the food system. The evidence comes from more than 60 interviews with leaders of university food procurement initiatives in England and Canada. As founder and former president of Local Food Plus, the civil society organization which played a central role in the Canadian example, I bring a perspective informed by praxis, both as a practitioner and as a doctoral candidate writing about an application of Sustainability Transition Theory (STT).
I adapted the term "infrastructure of the middle" from Kirschenmann et al.'s concept of "agriculture of the middle", which describes the mid-size farms and ranches most at risk in a globalized food system. These farms and ranches "operate in the space between the vertically-integrated commodity markets and direct markets" (KIRSCHENMANN et al., 2008, p. 3). They are big enough to meet the quality needs of large-volumes purchasers, but not so big that they are locked into commodity production for the global industrial food system (Idem).
In this article, I use the term "infrastructure of the middle" to emphasize the essential role of infrastructure in connecting midsize farmers to regional public institutionsan opportunity for large-volume sales. Usually, such institutions rely on global distribution and foodservice corporations, which typically exclude mid-size farmers and processors. Infrastructure of the middle refers to the resources, facilities and networks that create a critical mass, enabling alternative food producers to meet the needs of high volume, high profile foodservice clients, especially public sector institutions. Like mid-size farmers, infrastructure of the middle is disappearing (CONSTANCE et al., 2014;NOLAN, 2010;WALKOM, 2008WALKOM, , 2013, and needs to be strengthened if sustainable local food is to become the norm. Infrastructure is commonly defined as "the basic physical and organizational structures and facilities (e.g. buildings, roads, power supplies) needed for the operation of a society or enterprise" 1 . With food systems, this usually refers to roads, warehouses, processing and distribution facilities. Infrastructure of the middle, by contrast, is an expansive term that also includes "soft" infrastructure. In effect, infrastructure of the middle encompasses the moving parts of a socio-technical system needed for food system transformation.
This article will present a typology for infrastructure of the middle, and place it in the context of STT. I extend the range of STT to public sector food procurement and argue that public sector procurement -specifically at universities -is a key tool for sustainability transition. The STT framework used in this article is a modified version of the Multi-Level Perspective (MLP), an approach to sustainability transition elaborated by GEELS (2002, 2004, 2005, 2007, 2010, 2011). I have modified the MLP with a "social practices approach", which puts greater emphasis on agency (RAUSCHMAYER; BAULER; SCHÄPKE, 2015; WALKER, 2007, 2010). I will first explain why universities are critical to sustainability transition in food, then present the typology, and illustrate how the typology can be applied to successes of university food procurement in England and Canada.
THE UNIVERSITY AS A SITE OF SUSTAINABILITY TRANSITION
Scholars have noted a recent flourishing of alternative food projects, networks, businesses and movements which promote more sustainable local food systems (ACKERMAN-LEIST, 2013; BLAY-PALMER et al., 2013; FEAGAN, 2008; DUPUIS, 2011; HINRICHS, 2003; MURDOCH, 2006; MOUNT, 2011). However, alternative food channels and food represent a tiny percentage of food sales 2 (AGRICULTURE AND AGRI-FOOD CANADA, [n.d.]; ELITZAK, [n.d.]). University procurement is pivotal at this juncture because it presents an opportunity for "scaling up" the volume of sustainable local food across the food system (BARLETT, 2011; FRIEDMANN, 2007; MURDOCH, 2006; MORLEY, 2014; SONNINO, 2008; ROBERTS; ARCHIBALD; COLSON, 2014), and "scaling out" new procurement models that make scaling up viable. Creative public procurement to advance sustainable local food systems is overwhelmingly based in the education field (MORGAN; SONNINO, 2007, 2008). Besides providing a rich site for development of food system transition theory, publicly funded universities are common to both England and Canada. Universities differ from other public sector institutions in that they have neither a monopoly over a service nor a captive population (as is the case in prisons, hospitals or elementary schools). Thus, universities are subject to popular and client pressure in ways few public institutions are. Universities must respond to a client group -students -who increasingly demand values beyond price (including fair labour practices, environmental stewardship and animal welfare, among others) in food procurement and university policy generally (GRIGG; PUCHALSKI; WELLS, 2003; MGONIGLE, 2006; PARK; REYNOLDS, 2012; RAYNOLDS, 2002; ROBERTS; ARCHIBALD; COLSON, 2014).
Universities are also uniquely place-specific and place-dependent. Frequently named after the city in which they are located, universities are often connected with the communities surrounding them in numerous ways (SHAW; ALLISON, 1999). Increasingly, universities are understood as "anchor institutions", which have been identified as "among a region's biggest employers and purchasers of goods and services" (DRAGICEVIC, 2015, p. 5). Such institutions have economic power that can be converted into "anchor missions", defined as "the deliberate and strategic use of resources to benefit communities" (Idem). With the decline of manufacturing in Europe and North America, such institutions play a pivotal role in local economies. In terms of food procurement, they can provide significant and stable markets for food businesses, showcase new options to the public, and open "more sustainable spaces of possibility" (MARSDEN; FRANKLIN, 2013).
THE MULTI-LEVEL PERSPECTIVE
The Multi-Level Perspective has its roots in sociological work on technological change, and focuses on the interplay of socio-technical systems, social groups in society who maintain these systems, and regimes or rules that guide these social groups (GEELS; KEMP, 2007). The MLP identifies three components in the process of transition or socio-technical "regime shift" -niches, regimes and landscapes. The central point of the MLP is that the interplay of these three components, at different levels and in different phases, leads to socio-technical system change.
According to the MLP, niches are protected spaces where innovations can be nurtured. Theoretically, when managed strategically, innovative niches may rise to challenge a regime (GEELS, 2002). Regimes are defined as the critical level, setting out "the specific rules of the game" (SPAARGAREN; LOEBER; OOSTERVEER, 2012). The landscape is the broader context -social, technical and environmental -that can influence the relationships between niches and regimes. The landscape level represents the material context of society (how cities, roads, energy infrastructure, etc. are configured), as well as a mix of additional factors such as climate change, wars, oil prices, water availability, and cultural values (GEELS, 2002). Geels calls the MLP a "process theory", in that the analyst "needs to trace unfolding processes and study event sequences, timing, and conjunctures" (GEELS, 2011, p. 35).
An essential concept underlying STT is that transitions require intervention to break the momentum of old patterns or "path dependence" and "sunk investments" (GEELS, 2010). Agency -in the form of people who develop and use policies and programs that construct sustainability initiatives -is essential. Transitions are structural changes that lead to new power relations, new players and new technologies.
TOWARDS A TYPOLOGY OF "INFRASTRUCTURE OF THE MIDDLE"
The concept of infrastructure of the middle is anticipated by Renting et al. in their 2003 exploration of "short food supply chains" (SFSC) in rural development (RENTING; BANKS, 2003). SFSCs, they write, serve to "resocialize and respatialize food, thereby allowing consumers to make new value judgements about the relative desirability of foods based on their own knowledge, experience, or perceived imagery" (RENTING; BANKS, 2003, p. 398). They argue that the word "short" is relevant in three ways. SFSCs "'short-circuit' the long anonymous supply chain" of the industrial food system; they create transparency which can provide information about quality and values (environmentally sustainable practices, humane treatment of animals, and fair labour practices, for example); and they shorten relations between where food is produced and where it is consumed, and thereby personalize the responsibility of producers and consumers (RENTING; BANKS, 2003).
SFSCs arose from "the active construction of networks by various actors in the agrifood chain, such as farmers, food processors, wholesalers, retailers, and consumers" (RENTING; BANKS, 2003, p. 399). With this phrase, Renting et al. anticipate the human agency and social construction, both of which are key to the expanded notion of infrastructure of the middle presented in this article.
The concept of infrastructure of the middle addresses a deep-rooted problem in both the scholarly literature and the public discourse about sustainable local food systems. Both discourses understate the central roles of human agency and infrastructure in the transition to sustainable local food systems. Public discourse can be summarized by the titles given to typical programs featuring sustainable local food: "farm to school", "farm to cafeteria", "farm to fork" and "field to table", for example. In this discourse, an entire and complex set of tasks within the food system is covered by the one little word "to". While many of the early alternative food projects did feature direct producer to customer relationships 3 , foodservice on any significant scale requires the inclusion of many intermediaries. Yet the notion of direct relationships imbues the mindsets of both practitioners and scholars. As a result, a discussion of infrastructure is absent from scholarly articles (IKERD, 2011; YOUNGBERG; DEMUTH, 2013).
Many discussions of infrastructure in recent scholarship highlight the central role of hubs (BLAY-PALMER et al., 2013; CLEVELAND et al., 2014; HORST et al., 2011; LEBLANC et al., 2014; LERMAN; FEENSTRA; VISHER, 2012; MORGAN; ROGOFF, 2014; STROINK; NELSON, 2013). I emphasize that food hubs are best understood as one part of the infrastructure necessary for a sustainable local food system, and that they must be supported and allied with other actors with relevant capacities. Each of the elements in my typology of infrastructure of the middle refers to an actor with particular capacities. I suggest that the emphasis should be on the universe of relationships, rather than on the hub.
This article attempts to establish the centrality of infrastructure of the middle and identify its key elements. Each of these elements is a "disruptive innovation" within the existing regime, in that each presents "a different package of attributes valued only in emerging markets remote from, and unimportant to, the mainstream" (CHRISTENSEN, 2003, p. 6). In effect, infrastructure of the middle refers to a new "nexus of practice" for food system transformation (SHOVE; WALKER, 2007). This typology establishes the elements present in successful sustainable local food initiatives at the institutional level.
Based on my experience and analysis, I identify ten actors with distinctive capacities which comprise infrastructure of the middle capable of food system transformation.
1. Anchor institutions. Anchor institutions, defined as "large public or nonprofit institutions rooted in a specific place, such as hospitals, universities or municipal governments" (DRAGICEVIC, 2015, p. 5), are essential because they use the clout of their purchasing power to create long-term stable markets that attract mid-size farmers and processors. In addition, anchor institutions are respected players in society, and lend credibility to initiatives to scale up sustainable local food systems, thereby propelling these initiatives from the margins towards the mainstream.
2. Civil Society Organizations. 4 Civil society organizations (CSOs) are prime movers. This is a major shift because the food sector is generally considered the purview of the private sector. However, evidence suggests that much work related to the development of sustainable local food systems has been initiated by civil society organizations (BLAY-PALMER et al., 2013; CAMPBELL; MACRAE, 2013; FRIEDMANN, 2007; MORLEY, 2014; ORME et al., 2011). Government has not invested significantly in infrastructure for sustainable local food. The heavy lifting traditionally performed by government has been performed by CSOs. CSOs are essential connectors, facilitators and strategists (BLAY-PALMER et al., 2013; FRIDMAN; LENTERS, 2013). They also can develop the range of scarce professional skill sets around food procurement and sustainability that are not always easy to find in the public sector (MORGAN; MORLEY, 2014).
3. Tools to measure progress towards sustainability. Scaling up means selling to people with whom there is no direct relationship, frequently through a third party aggregator or distributor. Tools, often in the form of certification schemes, offer a way to identify values and best practices beyond personal relationships, as well as protecting producers from greenwashing and dilution of their values proposition. Standards and certification schemes establish guidelines that create opportunities for dialogue, learning, and continuous improvement among practitioners. They are a way to measure progress. These tools must be flexible, science-based, affordable, and relatively easy to explain, implement and modify.
4. Individual champions. Although alternative food networks have been developing since the 1990s (GOODMAN; WATTS, 1997; MURDOCH; MORGAN, 1999), my practitioner experience, as well as independent scholarship (MORGAN; MORLEY, 2014), indicates that the food movement is at a stage where individual champions play an indispensable role in establishing and maintaining the relationships necessary for sustainable local food initiatives. Champions are the ones who break down silos within an institution to make a new approach to food procurement possible. In a university setting, for example, they can initiate conversations among foodservice, waste management, student recruitment and fundraising -parts of the institution that rarely talk to one another -to discuss how sustainable local food procurement can be leveraged to benefit them all. In addition to being committed to sustainability principles, champions must hold a position of some authority, and possess a range of social skills. They must also be collaborative, solutions-oriented, pragmatic and models of competency.
5. Self-catered/Self-operated foodservice or domestic foodservice contractors. (The term "self-catered" is more common in the UK, while "self-operated" or "self-op" is more common in North America.) In a mature system, infrastructure of the middle would feature self-operated foodservice units or mid-size domestic foodservice contractors. Currently, global foodservice contractors are the norm. However, their business model, based on volume purchases of standardized low-cost food from anywhere, is incompatible with sustainable local food systems. This is because sustainability involves inserting other values into purchase criteria, and local food inherently restricts placeless volume purchases. Global foodservice corporations have rules and regulations that discriminate against midsize producers. Minimum volume requirements or minimum insurance requirements, for example, can exclude mid-size farmers. Self-catered/self-operated foodservice is more open to mid-size producers and offers greater flexibility. Reclaiming foodservice also begins to displace the path dependent thinking which assumes that food is an ancillary, rather than an essential, service of the institution.
6. Innovative private sector companies. Infrastructure of the middle is rich in B2B (business to business) relationships, which have been identified as fundamental to the growth of local economies (SHUMAN, 2015), much as they are to conventional economies. They include processors, distributors, aggregators, and other food businesses. Many are innovators, interested in reconfiguring resources, not just mobilizing them (MARSDEN, 2010; SMITH, 2005). Unlike global corporations, these "new food-economy SMEs" (BLAY-PALMER; DONALD, 2006) are regionally-based and independent. They must be collaborative, open to exploring new approaches, and interested in differentiating themselves in the marketplace.
7. Public policy and public education capacity. In pioneering scenarios, this role may be played by a CSO or an anchor institution. But in a mature system, the function of public policy development, public education, and the promotion of food literacy is performed by an actor with dedicated capacity, such as a food policy council. This is essential because it contests the hegemonic activities of global food companies, which include lobbying and public campaigns (the campaign to prevent labelling of foods containing genetically-modified organisms is one example). Finding space in a food system increasingly monopolized by global corporations (CONSTANCE et al., 2014; ETC GROUP, 2013) requires infrastructure of the middle to make the case for a sustainable local food system, and for public policy that evens the playing field. This includes policies and legislation that support "multiscalar and multidimensional strategies for regional development" (BLAY-PALMER; DONALD, 2006, p. 394), such as sustainable local procurement. Food literacy which includes sustainability is a key component of food system transformation because an engaged and educated consumer is more likely to choose products that foster sustainable local food systems.
8. Marketing and promotion. Few businesses of the middle have the capacity to do significant marketing and promotion, yet they are in competition with an industry that spent $4.6 billion in 2012 on fast food advertising alone. Indeed, McDonald's advertising spend was 2.7 times that for fruit, vegetables, bottled water and milk combined (HARRIS et al., 2013). Marketing and promotion capacity is essential to motivate and justify alternative procurement initiatives. It can encourage the involvement of new actors, create transparency, and move towards normalizing the products and values of sustainable local food systems, thereby establishing the purchase of sustainable local food as an everyday habit.
9. Connection to community and environment. Infrastructure of the middle puts the culture back in agriculture, while challenging "agribusiness" at the level of its fundamental presumption -that food is essentially a private sector activity that belongs in the private sphere, removed from public interest issues such as sustainability. Externalizing the costs of agribusiness onto society and the environment flows easily from this presumption. By contrast, the underlying assumption of sustainable local food systems is that food is a public policy issue. Infrastructure of the middle has the potential to respond to the demand for foods that reflect such public goods as identity, heritage, environment, and so on.
10. Food hubs. Blay-Palmer et al. argue that food hubs are "vehicles for sustainable transformation of the dominant food system". They define food hubs as "networks and intersections of grassroots, community-based organizations and individuals that work together to build increasingly socially just, economically robust and ecologically sound food systems that connect farmers with consumers, as directly as possible" (BLAY-PALMER et al., 2013, p. 524). Hubs are spaces of aggregation, transformation and collaboration. They offer opportunities to pool resources to provide hard infrastructure such as warehouses, loading docks, processing facilities and meeting spaces. But they can also be part of soft infrastructure, in that they are spaces for relationship-building, and clearing houses for innovation and information-sharing. Hubs are essential to the development of infrastructure of the middle because they can provide both hard and soft infrastructure that few infrastructure of the middle businesses can bear alone.
TWO EXAMPLES OF INFRASTRUCTURE OF THE MIDDLE IN ACTION 5
The next section will illustrate the typology of infrastructure of the middle using data collected in the UK and Canada between 2013 and 2015. It will examine two specific approaches to increasing procurement of sustainable local food in universities -both developed by CSOs -the Food For Life Catering Mark developed by the Soil Association in England and Certified Local Sustainable certification developed by Local Food Plus in Canada.
An Introduction to the Soil Association and the Food For Life Catering Mark
The Soil Association, which describes itself as "the UK's leading membership charity campaigning for healthy, humane and sustainable food, farming and land use", developed and manages the Food For Life Catering Mark. The Catering Mark was designed to support the work of the Food For Life Partnership, a program designed to transform food culture in British schools through tastier, healthier and more sustainable meals, combined with an emphasis on food literacy, growing and cooking. The Catering Mark provides third party certification to foster increasingly sustainable and healthy food. It offers a ladder for improvement, with bronze, silver and gold awards to encourage progress. By moving through the three levels, foodservice operators demonstrate an increased commitment to four principles: 1. food freshly prepared on-site; 2. ingredients sourced sustainably and ethically when possible; 3. ingredients sourced locally when possible; and 4. healthy eating made easy. More than 1.2 million certified meals are served each day.
An Introduction to Local Food Plus and the Certified Local Sustainable Standards
Local Food Plus (LFP) certification encourages farmers to move toward more sustainable practices. The launch of the University of Toronto-LFP partnership in 2006 represented the first time that a Canadian university made a formal commitment to purchase sustainable local food. Participating cafeterias agreed to purchase 10% of the dollar value of their food in the first year from Certified Local Sustainable farmers and processors, with a 5% increase each year going forward.
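As a rough illustration of how that purchasing commitment scales over time (a sketch only; year-by-year figures beyond the stated rule are extrapolations, not targets reported in the article):

```python
# Target share of food purchases (by dollar value) from Certified Local
# Sustainable suppliers: 10% in year one, rising by five percentage
# points each subsequent year, per the rule described above.
def target_share(year: int, base: float = 10.0, step: float = 5.0) -> float:
    return base + step * (year - 1)

for year in range(1, 6):
    print(f"Year {year}: {target_share(year):.0f}%")
# Year 1: 10%, Year 2: 15%, Year 3: 20%, Year 4: 25%, Year 5: 30%
```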
LFP standards are based on five guiding principles -1. Employ sustainable production systems to reduce or eliminate synthetic pesticides and fertilizers, and conserve soil and water; 2. Provide healthy and humane care for livestock; 3. Provide safe and fair working conditions for on-farm labour; 4. Protect and enhance on-farm biodiversity and wildlife habitat; and 5. Reduce on-farm energy consumption. LFP certification is unique in its effort to combine local with sustainable practices. Farmers must achieve a score of 75% or better to be entitled to call their operation "Certified Local Sustainable" and use the LFP certification seal.
APPLYING THE TYPOLOGY OF INFRASTRUCTURE OF THE MIDDLE 6
Both programs shift responsibility for sustainability transition in the food system away from reliance on individual consumer purchases. For the universities involved, certification helped them set goals, and keep abreast of sustainability trends. For the farmers, processors and distributors, certification encouraged them to adopt more sustainable practices to gain and hold university contracts. For producers already Certified Organic, the programs opened significant and stable markets.
In both the UK and Canada, all ten dimensions of the typology of infrastructure of the middle were present.
1. Anchor institutions. Universities in both countries qualify as anchor institutions. The English case studies are Nottingham Trent University and the University of the Arts London (UAL). Nottingham Trent is a university of about 27,000 students in the Midlands city of Nottingham with a self-catered food service. UAL is a multi-campus university of about 26,000 students in downtown London. The Canadian case study is the University of Toronto, one of the largest universities in North America, with about 85,000 students over three campuses. At the time of this research, it had both self-operated units and cafeterias operated by Aramark, a global foodservice company.
2. Civil Society Organizations. There were entrepreneurial CSOs in place actively promoting institutional procurement of sustainable local food.
3. Tools. Both CSOs had sophisticated certification tools to measure progress towards more sustainable local food.
4. Champions. Both the UK and Canadian case studies feature champions in many key roles: university administrators, heads of sustainability and foodservice, and chefs, for example. Partnering food suppliers also benefitted from in-house champions.
5. Self-catered foodservice or a domestic provider. In both countries, the facilities that achieved the best results were self-catered/self-operated units or domestic caterers, rather than foodservice provided by transnational corporations.
6. Innovative private sector companies. All three universities worked closely with innovative private sector companies, including farmers, processors and distributors. Several of these organizations saw their university sales as part of a strategy to differentiate themselves in the market.
7. Public policy and public education capacity. In England, the Soil Association has a public education function to present emerging research and policies that enhance sustainability. This was also part of LFP's mandate in Canada.
8. Marketing and promotion. In both England and Canada, there was significant promotion at the universities themselves, as well as by the CSOs through signage, mainstream and social media, trade show booths, participation in food celebrations and fairs, and public speaking. The Soil Association also holds an annual Catering Mark Awards dinner to recognize champions who have contributed to the success of the mark.
9. Connection to community and environment. Sustainability requirements were important and prominent features of both certifications. Public policy goals were explicitly recognized in both countries.
10. Food hubs. The universities themselves acted as physical hubs, receiving and preparing food, and bringing together various actors in new ways. The CSOs acted as virtual hubs (Campbell; Macrae, 2013), forming critical relationships, providing tools, expertise and support.
SUSTAINABILITY TRANSITION THEORY AND INFRASTRUCTURE OF THE MIDDLE
Kirschenmann et al.'s insight expressed in the concept of "agriculture of the middle", while powerful, flows from the productionist paradigm of mid-20th century industrial agriculture -a paradigm that puts primacy on agricultural production, rather than on the supports and services necessary for a community-based food system. Infrastructure of the middle gives prominence to the vast middle ground -the metabolic, geographic, sociological, and indeed physical rift (WITTMAN, 2009) -separating farmer from eater and eater from farmer. The concept of infrastructure of the middle, which includes social as well as physical infrastructure, can begin to heal this separation by re-embedding the economy into society. Moreover, there is a growing realization that both the economic and social spheres must be embedded in the environmental sphere, the life support system of the planet.
The concept of infrastructure of the middle acquires its theoretical significance from the MLP's identification of the centrality of the niche-regime interaction, and the socio-technical systems required for transition. However, the MLP does not adequately capture the level of contestation involved in establishing niches and challenging the regime. A more appropriate term than niche might be "beachhead" or "toehold" to reflect the more tenuous nature of the niche's challenge to the existing regime around food procurement. The MLP also underemphasizes the complexity of the landscape, which includes factors such as government subsidies, regional and national regulations and legislation, tax law and international trade agreements, not to mention the unpredictable impact of climate chaos and changing weather patterns. As well, the MLP does not adequately recognize the importance of individual champions to allow the toehold to become established in the first place, and protect and nurture it within the foodservice regime.
The typology presented here attempts to deepen the conceptualization of the MLP in particular, and STT in general, by challenging their implied narrative -that transition arises from incremental niche expansions within a regime. By contrast, the narrative made explicit by infrastructure of the middle indicates that the transition to sustainability requires confrontation because it inherently challenges the privilege and path dependency of the mainstream foodservice regime. As such, sustainability itself represents a disruptive innovation in foodservice.
CONCLUSION
The shift to sustainable local food procurement requires new approaches to university food procurement, as well as a critical analysis of the dominant role of transnational corporations in university and public sector foodservice. Three global foodservice corporations -Sodexo, Aramark and Compass -and one global distributor, Sysco, have risen to prominence since the 1980s, during what food system analyst Philip McMichael describes as "the third food regime" (MCMICHAEL, 2013). This third regime is characterized by the "unprecedented market power and profits of monopoly agrifood corporations, globalized animal protein chains, growing links between food and fuel economies, a 'supermarket revolution', liberalized global trade in food, increasingly concentrated land ownership, [and] a shrinking natural resources base" (HOLT GIMÉNEZ; SHATTUCK, 2011, p. 111; cf. MCMICHAEL, 2013).
One of the stated goals of Renting et al.'s work on SFSCs is to assess whether the growth of SFSCs constitutes a countermovement with the potential to challenge industrial agriculture, or a series of short-term local initiatives (RENTING; BANKS, 2003). Using the language of the MLP, this article argues that when SFSCs are conceptualized as infrastructure of the middle, and linked with public institutions such as universities, niches or toeholds can be created that begin to give mid-size farmers the critical mass they need to contest a commodity-based food system and challenge the existing global agro-industrial regime. However, the process is much more disruptive and confrontational than the MLP suggests. As Blay-Palmer and Donald note, "large firms are reformulating the rules of the game for small suppliers, transforming traditional supply chains, making it more difficult for smaller players to maintain their presence in the market or for new players to enter it" (BLAY-PALMER; DONALD, 2006). This article argues that the missing link in scaling up and out sustainable local food systems is not the inability of farmers to produce food, but the weakness of the infrastructure of the middle -the connective tissue. As Senge notes, "transforming systems is ultimately about transforming relationships among people who shape those systems" (SENGE; HAMILTON; KANIA, 2015, p. 6) and involves embodying an ancient understanding of leadership; the Indo-European root of "to lead", leith, literally means to step across a threshold -and to let go of whatever might limit stepping forward (SENGE; HAMILTON; KANIA, 2015, p. 2). The concept of infrastructure of the middle is crucial because it embeds public sector food procurement in communities, nature, and economies. As such, it has the potential to be the midwife of an emerging sustainable local food system.
Notulae to the Italian alien vascular flora: 8
In this contribution, new data concerning the distribution of vascular flora alien to Italy are presented. It includes new records, confirmations, exclusions, and status changes for Italy or for Italian administrative regions of taxa in the genera Bunias, Calocedrus, Calycanthus, Celosia, Clerodendrum, Convolvulus, Crassula, Cyclamen, Datura, Dicliptera, Eragrostis, Erigeron, Gamochaeta, Gazania, Impatiens, Kolkwitzia, Leucaena, Ludwigia, Medicago, Muscari, Nigella, Oenothera, Opuntia, Paulownia, Petroselinum, Phyllostachys, Physalis, Pseudosasa, Quercus, Reynoutria, Roldana, Saccharum, Sedum, Semiarundinaria, Senecio, Sisyrinchium, Solanum, Sporobolus, Tulipa, Vachellia, Verbena, and Youngia. Nomenclatural and distribution updates published elsewhere are provided as Suppl. material 1.
Keywords
Alien species, floristic data, Italy
How to contribute
The text for the new records should be submitted electronically to Chiara Nepi (chiara.nepi@unifi.it). The corresponding specimen along with its scan or photograph has to be sent to FI Herbarium: Museo di Storia Naturale (Botanica), Sistema Museale di Ateneo, Via G. La Pira 4, 50121 Firenze (Italy). Those texts concerning nomenclatural novelties (typifications only for accepted names), status changes, exclusions, and confirmations should be submitted electronically to: Gabriele Galasso (gabriele.galasso@comune.milano.it). Each text should be within 2,000 characters (spaces included).
In Italy, this species was already cultivated in botanical gardens in the late 18th century (e.g., in Pavia, see Anonymous 1785) and was first recorded as a casual alien in 1897 (Penzig 1897; Béguinot and Mazza 1916). Bunias orientalis now occurs as a casual alien in most of the northern regions, with the exceptions of Piemonte and Friuli Venezia Giulia, where it is considered naturalized (Galasso et al. 2018a). In Emilia-Romagna, it is known as casual in the province of Ferrara (Piccoli et al. 2014). On July 2nd, 2019, a large population was discovered in the locality Casino of the former municipality of Nibbiano (now Alta Val Tidone), province of Piacenza (WGS84: 44.945631N, 9.332003E). Here, fruiting individuals form a thick stand of 3,350 m², with a 75% cover, on waste land colonized by Artemisia vulgaris L., Cirsium arvense (L.) Scop., Elymus repens (L.) Gould subsp. repens, and Sambucus ebulus L. More than 1,200 rosettes were counted across a mowed wheat field of 31,000 m² in locality Casa Castellina (WGS84: 44.946498N, 9.330900E) and further individuals were observed along the nearby roadsides. This species, similarly to other European countries (see e.g., Clement and Foster 1994), was likely introduced as a grain impurity. The pronounced tendency to invasiveness in these localities needs to be monitored.
Calocedrus decurrens has already been recorded as casual in Lombardia, Umbria, and Sardegna (Galasso et al. 2018a). Some young individuals, originated by seeds from nearby cultivated plants, were found in Firenze, at the Cascine Park.
Calycanthus floridus is an ornamental species native to southeastern North America and introduced in Italy in 1788 (Maniero 2015). In Italy, it is known as casual alien only in Toscana (Galasso et al. 2018a). Some young individuals of the species have developed as epiphytes on the trunk of a young Phoenix canariensis H.Wildpret, settling among the remains of fibrous tissue present among the stumps of the leaf rachids. The plants developed from seeds produced by a shrub cultivated in a flowerbed at a short distance. The area is located in a rather sheltered position due to the presence of groups of Pinus halepensis Mill. subsp. halepensis and alignments of buildings that limit insolation and reduce the influence of the eastern sea winds, creating a cooler microclimate.
F. Scafidi, G. Domina M. Mugnai, A. Misuri, G. Ferretti (FI). - Casual alien species new for the flora of Toscana. This species was already recorded in Italy as naturalized, mostly in northern Italy. Some young individuals were found at the railway station of Panicaglia, probably originating from adult fruiting plants of a neighboring garden. Given the ephemeral condition of the occurrence site, we consider this species as casual for Toscana.
Clerodendrum trichotomum
M. Convolvulus sabatius was first recorded from Puglia near Giovinazzo (Bianco 1969) and then collected in Salento (Marchiori et al. 1993), Bari and Monopoli (Perrino et al. 2013). These collections were all attributed to C. sabatius subsp. sabatius. Our gatherings, from Giovinazzo and Lecce, show long spreading hairs on stems, leaves and calyx and are, therefore, attributed to C. sabatius subsp. mauritanicus, according to Carine and Robba (2010) and Wood et al. (2015). Consequently, we consider C. sabatius subsp. sabatius as recorded from Puglia by mistake (Bartolucci et al. 2019). Crassula muscosa is native to southern Africa and is widely cultivated as ornamental. In Italy, it is known as casual alien in Toscana, Campania, and Sicilia, while it is considered naturalized in Liguria, Calabria, and Sardegna (Galasso et al. 2018a). Some individuals of this species grow as epiphytes on the trunk of a Phoenix canariensis H.Wildpret inside the city. The plants have developed among the residues of fibrous tissue between the remains of the cut leaf rachids, in a partially shaded position. Individuals may have arisen via vegetative propagation from fragments of plants grown for ornamental purposes in nearby buildings.
N. Olivieri Cyclamen persicum is a widely cultivated plant, whose native range extends from Algeria to the eastern Mediterranean. It is reported in Italy as a casual alien for Lombardia (Banfi and Galasso 2010), Sardegna (Lazzeri et al. 2015), and Lazio (Nicolella 2018). Well-developed specimens were first recorded in 2000 in Viale G. Odino in the centre of Genova. Recently, other specimens have been found at three different sites, both in the city centre (Via Fieschi, WGS84: 44.403397N, 8.935548E, 36 m) and in more peripheral sites (Via V. Bocciardo, WGS84: 44.404441N, 8.993866E, 168 m; and Via Tortona). All grow in the cracks of sidewalks, without any other species nearby. One of them was in bloom when recorded (April 2019).
Cyclamen persicum
A. (Verloove 2008). It is reported as casual alien in almost all regions of northern and central Italy (Lombardia, Veneto, Trentino-Alto Adige, Friuli Venezia Giulia, Liguria, Emilia-Romagna, Umbria, Lazio, Abruzzo, Campania, Puglia, and Calabria), as naturalized alien for Toscana and Sicilia, and as invasive alien for Sardegna (Galasso et al. 2018a). In Marche, a single individual was observed with abundant flowers and fruits in a stony bank along the Metauro River, far from gardens and urban centres. This species has long been confused with the related D. inoxia Mill., less common in Italy, which differs from D. wrightii for the type of indument (Verloove 2008). Dicliptera squarrosa is an ornamental plant native to South America, which presents several forms, separated mostly geographically and hardly forming discrete units (Wasshausen and Wood 2004). This species is currently widely available for sale worldwide and is largely used also in Italy. We found one flowering individual clearly escaped from cultivation close to the Querceta railway station. According to some authors (J. Wood, pers. commun.), the forms cultivated in Europe should be referred to Dicliptera suberecta (André) Bremek., currently considered as a synonym of D. squarrosa (Zuloaga et al. 2008) (Martini and Scholz 1998). Until now, it was reported in Italy as naturalized alien in northern regions (Piemonte, Liguria, Lombardia, Veneto, Trentino-Alto Adige, Friuli Venezia Giulia, Emilia-Romagna) and Calabria, and as casual alien for Valle d'Aosta, Lazio, Campania, and Puglia (Galasso et al. 2018a). A large number of individuals were detected by S. Montanari (pers. commun.). - Naturalized alien species new for the flora of Marche.
Erigeron karvinskianus is an American perennial species native to Mexico and Guatemala which occurs all over western Europe, probably escaped from floriculture. To date, it is present in almost all the Italian territory, with the exception of Valle d'Aosta, Molise, Basilicata, and Sardegna (Galasso et al. 2018a). In all the recorded localities, this species was also observed near road edges and in unmanaged flowerbeds, mainly colonizing the gaps in walls, where it seems to be more competitive than other species, such as Cymbalaria muralis (G. Stinca). - Status change from naturalized to invasive alien for the flora of Campania.
Erigeron karvinskianus was reported as naturalized for Campania by Galasso et al. (2018a). However, we found this alien plant, in dense and extensive populations, mostly on walls of limestone and tuff blocks at several sites in the Sorrento peninsula. In these environments, it easily spreads by abundant seed production and competes strongly with endemic species, such as Campanula fragilis Cirillo subsp. fragilis. Therefore, this species should be considered invasive in Campania.
A. Gamochaeta pensylvanica is native to North America. In Italy, its first record by Moraldo and La Valva (1989) for Campania was erroneously attributed by these authors to G. purpurea (L.) Cabrera (Soldano 2000), and the species was then recorded in the same region by Stinca et al. (2016, 2018). The origin of the introduction of this species in Italy is uncertain. Probably, G. pensylvanica arrived in Italy through the importation of potting soil used in plant nurseries. Currently, according to Galasso et al. (2018a), G. pensylvanica is a naturalized alien species in Campania, Piemonte, Lombardia, Emilia-Romagna, and Sicilia, whereas it is casual in Toscana, Lazio, and Puglia. In Calabria, this species was observed for the first time in 2008 in locality Catona (Reggio Calabria). Gazania linearis has its native range in South Africa and Lesotho. Since it has been cultivated as an ornamental plant since the 19th century, it has become an invasive plant in several regions of the world (Hassler 2019). In Italy, according to Galasso et al. (2018a), this species is a casual alien to Toscana, Molise, and Puglia, whereas it is doubtfully recorded for Sardegna. Impatiens parviflora is native to central and eastern Asia and represents one of the most widespread aliens in central Europe, being the only alien plant widespread in European forests (Godefroid and Koedam 2010; Hejda 2012). In Italy, this species is reported as naturalized in Friuli Venezia Giulia, Emilia-Romagna, Liguria, Toscana, and Lazio, and as invasive in Valle d'Aosta, Piemonte, Lombardia, Trentino-Alto Adige, and Veneto (Galasso et al. 2018a). During a field survey conducted in the Tuscan Apennines, we noticed a large population of this species. The plants are particularly dense, totally covering the herbaceous layer in shady sites and showing a preference for dry, acidic and nutrient-poor soil conditions, as also highlighted by Godefroid and Koedam (2010). Accordingly, we retain the status of invasive species as more appropriate for I. parviflora in Toscana.
F. Roma-Marzio, M. D'Antraccoli, L. This subspecies is native to central America and southern Mexico, and it was introduced in many countries for several purposes, sometimes becoming invasive (Hughes 1998a, 1998b). In Italy, it has been reported as naturalized in Sicilia (Raimondo and Domina 2007; Pignatti et al. 2017; Galasso et al. 2018a). In Sardegna, it has been observed since 2006 in the industrial area of Sestu, where some plants are growing not far from the cultivated parental plants. Some saplings and young trees have also been observed in the surroundings of Monserrato, in fallow land and roadsides close to Via C. Cabras.
A Ludwigia hexapetala is a herbaceous perennial plant native to central and South America; its habitat includes lakeshores, ponds, ditches, and streams. The large tolerance of this species to variations of hydrological and climatic conditions, as well as its strong ability to colonize both beaches and swamps, make it a noxious invader of aquatic ecosystems in North America and in Europe, where it is reported (as included in L. grandiflora (Michx.) Greuter & Burdet) in the list of invasive alien species of Union concern (Regulation (EU) n. 1143/2014). It was recorded for Italy by Galasso (2007), based on specimens collected in Lombardia and Veneto and, later, as invasive for Emilia-Romagna (Alessandrini et al. 2017). This species is already established around the coasts of Bracciano Lake, where large populations with hundreds of plants regularly develop flowers and fruits. Nowadays, it occurs with dense populations on about 2 km of the coast near Vigna di Valle, together with other aliens, such as Amorpha fruticosa L., Datura wrightii Regel, Eclipta prostrata (L.) L., Oenothera glazioviana Micheli, Physalis peruviana L., Salvia hispanica L. (see also Galasso et al. 2018b, 2018c, 2019). Moreover, it is widespread near Trevignano Romano (Roma), loc. Pantane, where it was wrongly reported as L. peploides (Kunth) P.H.Raven subsp. montevidensis (Spreng.) P.H.Raven (Azzella and Iberite 2010). Some individuals can be observed on the east coast of the lake (Lungolago di Polline).
S. Buono, M.M. Nigella sativa grows in many countries of the temperate regions, where it is cultivated for its aromatic seeds (Zohary 1983). In Italy, it was already cultivated in Ancient Rome (Arrigoni and Viegi 2011), and it is currently reported as a casual alien in Sardegna, extinct in Piemonte, and not recently recorded for Friuli Venezia Giulia and Toscana (Galasso et al. 2018a). In the latter region, Arcangeli (1882) Oenothera speciosa is a showy perennial alien introduced as ornamental, native to prairies in the United States of America (Missouri and Nebraska) and northern Mexico (Wager et al. 2007; Keener et al. 2019). In Italy, this species is reported as casual alien for Lombardia, Veneto, Toscana, and as naturalized for Emilia-Romagna (Galasso et al. 2018a). For Marche, the occurrence of an Oenothera with pink flowers near Senigallia was reported by G. Mazzufferi (pers. commun.). The same data was later verified and recorded by Montanari and Marconi (2010), but no precise locality information was provided. In Rosciano, several specimens have been observed for some years along roadsides and uncultivated areas, where they are slowly spreading.
L. Opuntia scheeri is a species native to Mexico, often cultivated as an ornamental plant. It was recorded for the first time in Italy in 1994 (Guiggi 2008) and currently occurs in several regions of northern Italy (Piemonte, Lombardia, Trentino-Alto Adige, Veneto, Emilia-Romagna: Galasso et al. 2018a). Both the records reported here refer to individuals growing close to inhabited areas and derived most likely from cultivated plants. Paulownia tomentosa is an ornamental plant native to China and introduced to Europe. It is usually cultivated in parks and gardens, but it is also used for timber production thanks to its fast growth and high-quality wood. The size of plantations in Italy has been increasing rapidly since 1989 (Mezzalira and Colonna 2002). This species occasionally escapes cultivation and becomes invasive, growing rapidly in disturbed areas. It is considered as invasive in the USA, and a potentially invasive species in Europe and South America, where it has been introduced (CABI 2019). We observed an abundant population at the Fontebuona railway station, close to a large cultivated plant. The population consists of numerous individuals of various ages, deriving from both seeds and root suckers. Recently (May 8th, 2019), this species was detected in another site, on the right bank of the Arno River in loc. Riscaggio (Reggello, Firenze, WGS84: 43.7249776N, 11.4662411E).
Petroselinum crispum (Mill.) Fuss (Apiaceae)
+ (NAT) ITALIA (SAR). Status change from casual to naturalized alien for the flora of Italy (Sardegna). In Italy, Petroselinum crispum is reported for most of the regions (Galasso et al. 2018a). Although an agronomic study on populations naturalized in Trentino-Alto Adige was published recently (Fusani et al. 2016), it is considered as casual alien at the national level. We detected numerous plants inhabiting steep and shady calcarenitic cliffs at Capo Sant'Elia (Cagliari, Sardegna). This population displays a well-structured partition in age classes, with seedlings, juveniles, and fruiting individuals that suggest the establishment of a naturalized population. Interestingly, the presence in this area of the phyto-toponym "su perdusemini", clearly referring to parsley, and used at least from the 18th century to name a tower probably built during the 16th century, suggests that naturalized populations may have been present in this area for a long time. However, P. crispum was not previously recorded in the accurate flora of Capo Sant'Elia compiled by Martinoli (1950). In this context, it must be pointed out that the origin of this widely cultivated plant has not yet been identified with certainty, though it possibly originates in the eastern or central Mediterranean region (Agyare et al. 2017; Pignatti et al. 2018). It is noteworthy that Linnaeus (1753) stated its wild habitat to be Sardegna, close to springs. Phyllostachys viridiglaucescens in Valle d'Aosta was recorded for two localities (Mainetti and Banfi 2018). Surveys in 2018 [Champdepraz (Aosta), abandoned terraces about 300 m from the hamlet of Chef-Lieu (WGS84: 45.68873546N, 7.65795915E), abandoned terraces, ca. 540 m, 7 October 2018, A. Mainetti, S. Ravetto Enri, V. Mezzasalma (FI); Arnad (Aosta), scrub alongside the SS26 road on the border with the municipality of Hône (WGS84: 45.624517N, 7.736778E), riparian scrub, ca. 350 m, 7 October 2018, A. Mainetti, S. Ravetto Enri, V. Mezzasalma (FI)] revealed short oblique internodes at the base of the culms for both localities. This is a distinctive feature of P. aurea Carrière ex Rivière & C.Rivière (Tison and de Foucault 2014), a species already reported from Valle d'Aosta (Galasso et al. 2018a). Furthermore, the identity of this plant was confirmed by a DNA fingerprinting (RAPD) analysis performed by FEM2-Environment Company (spin-off of the University of Milano-Bicocca) within the BambApp Project (BambApp 2019) (Dipartimento di Scienze Agrarie, Forestali e Alimentari, Università di Torino), using samples from a private botanical collection (T. Froese: Cravanzana, Cuneo, Italy) verified by us as reference base. Consequently, P. viridiglaucescens should be excluded from the flora of Valle d'Aosta.
A. Physalis angulata is a tropical American species that is occasionally cultivated for its edible fruits (Hawkes 1972). It is reported in Italy only in Lombardia, Veneto, and Lazio (Galasso et al. 2018a). In Italy, Pseudosasa japonica was reported for all northern regions with the exception of Liguria and Valle d'Aosta (Galasso et al. 2018a). Single branches per node and palm-frond-like leaves clearly permitted the identification of the species (Li et al. 2006; Tison and de Foucault 2014). In addition, the identity was confirmed by a DNA fingerprinting (RAPD) analysis conducted by FEM2-Environment Company (spin-off of the University of Milano-Bicocca) within the BambApp Project (BambApp 2019), using samples from a private botanical collection (T. Froese: Cravanzana, Cuneo, Italy) verified by us as reference base. The recorded population originated from agamic propagation of nearby cultivated plants.
The red oak is an American taxon, which was imported into Europe starting from the 17th century (Magni Diaz 2004), and into Italy from 1803 (Maniero 2015). In Sardegna, it was introduced in reforestations and for ornamental purposes (Veri and Bruno 1974; Arrigoni 2006). In recent years, numerous trees and saplings were found on the eastern side of the Gennargentu Massif (Monte Idolo), all growing close to reforestations with red oak and other alien trees.
G Reynoutria bohemica is of hybrid origin between the alien species R. japonica Houtt. and R. sachalinensis (F.Schmidt) Nakai, and it was recognized and described only at the end of the last century in the Czech Republic (Chrtek and Chrtková 1983). Like other congener species, R. bohemica colonizes ruderal environments, roadsides and waterways, and forms dense stands that shade and crowd out all other plants, thereby reducing the biodiversity of invaded plant communities and damaging habitats beyond repair (Padula et al. 2008). In Italy, it has been reported, so far, for Valle d'Aosta, Piemonte, Lombardia, Veneto, and Toscana as invasive alien, for Friuli Venezia Giulia and Emilia-Romagna as naturalized alien, and for Trentino-Alto Adige and Liguria as casual alien (Galasso et al. 2018a). In the Urbino site, which represents the first record for Marche, a large number of individuals has been monitored for several years, and a considerable increase of the population was observed. For this reason, containment measures should be taken.
Roldana petasitis is native to central America (Jeffrey 1986). According to Galasso et al. (2018a), this species is naturalized in Liguria, while in Lazio, Puglia, and Basilicata it is considered as a casual alien. Although Fiori (1927) reported this taxon as growing wild in Sicilia, Giardina et al. (2007) excluded it from this region. A few individuals of different age were found in Librizzi, growing along the roadside with other nitrophilous species typical of urban areas. The population, monitored since 2013, is particularly resilient, despite the continuous cuts made during ordinary maintenance of public flowerbeds. In Sicilia, this species occurs also in Siracusa, at Latomia dei Cappuccini, in a limestone quarry (R. Genovese, pers. commun.). - Naturalized alien species confirmed for the flora of Puglia. For Italy, Saccharum biflorum was known, until now, only in Sicilia and Sardegna, whereas it was not, until recently, recorded in Puglia (Galasso et al. 2018a). A population was found also in Puglia, between a road and an abandoned field, covering a surface of about 20 m². Due to its extension and to the number of flowering stems, we can consider this species as naturalized in this locality. Sedum palmeri, commonly cultivated as an ornamental pot plant, has been recorded from many northern Italian regions, except Piemonte (Galasso et al. 2018a). Some individuals were discovered growing within the cracks of a sidewalk. This species may be more widespread across the region, especially in urban areas.
N.M.G. Ardenghi, S. Mossini + (CAS) TOS: Figline e Incisa Valdarno (Firenze), loc. C. Torrione (WGS84: 43.6586857N, 11.4246546E), inside a cypress wood, 310 m, 24 February 2019, L. Pinzani (FI). - Casual alien species new for the flora of Toscana. In Italy, Sedum palmeri is recorded from Lombardia, Veneto, Friuli Venezia Giulia, Emilia-Romagna, Liguria, Lazio, Campania, and Sardegna (Galasso et al. 2018a). Various groups of individuals grow within a cypress wood. The main one is represented by more than 100 individuals. Semiarundinaria fastuosa is a bamboo native to Japan (south-western Honshu). The recorded population originated from agamic propagation from a private garden and colonized a nearby canal bank. Several branches per node, partially deciduous culm sheaths and minute auricles allowed us to identify this species (Li et al. 2006; Tison and de Foucault 2014). The identification was confirmed by a DNA fingerprinting (RAPD) analysis performed by FEM2-Environment Company (spin-off of the University of Milano-Bicocca) within the BambApp Project (BambApp 2019), using samples from a private botanical collection (T. Froese: Cravanzana, Cuneo, Italy) verified by us as reference base.
Senecio angulatus is a succulent climbing plant native to South Africa, introduced for ornamental purposes in southern Europe, Macaronesia, northern Africa, California, Chile, Australia, and New Zealand. Currently, it is naturalized in Albania (Barina et al. 2011), Croatia (Milović et al. 2010), the Iberian peninsula (Romero Buján 2007; Pyke 2008), and Chile (Ugarte et al. 2011) and is considered one of the most invasive species in the western Mediterranean area (Brundu et al. 1999), Mediterranean France (Brunel and Tison 2005), Australia (Ross and Walsh 2003; Randall 2007), and New Zealand (Bergin 2006). This species was introduced in Italy in 1875 (Maniero 2015). It is known as a casual alien in Lazio and Calabria, while it is naturalized in Puglia, Campania, Basilicata, and Sicilia, and invasive in Liguria, Toscana, and Sardegna (Galasso et al. 2018a). In San Vito Chietino, this species grows on a brick retaining wall, located below the site of the Adriatic State Road, in a sunny and sheltered position, close to the Adriatic Sea. Here the plant is established along with Arundo plinii Turra, Ficus carica L., and Rubus ulmifolius Schott.
N. Olivieri Senecio inaequidens is native to South Africa. It was recorded in Europe for the first time in the mid-twentieth century and observed in Italy in 1947 (Carrara Pantano and Tosco 1959; Anzalone 1976). It was reported as present throughout central and northern Italy and has been rapidly expanding since the beginning of the 1980s (Pignatti 1982). Now it is widespread in all Italian regions and often considered invasive (Galasso et al. 2018a). Our recent field investigations revealed the presence of this species in all Tuscan provinces, confirming many previous observations and adding several new occurrences (Peruzzi et al. 2019). Consequently, this species is abundant and well distributed in anthropized sites of Toscana, where it is spreading notwithstanding the control actions often undertaken. Moreover, this species has been observed in some natural sites. Accordingly, we regard the status of invasive alien as the most appropriate.
A (Nicolella and Ardenghi 2013). In Italy, this species has been reported as casual alien in Lazio (Nicolella and Ardenghi 2013; Galasso et al. 2018a). In Sardegna, it has been observed since 2015 in the town of Olbia, where it grows in the Fausto Noce community park and neighboring areas, above all in lawns but also in flowerbeds and along paths. It probably arrived there thanks to seeds dispersed in lawns.
Solanum bonariense L. (Solanaceae)
+ (CAS) LIG: Genova (Genova), along Via Apparizione, in the pedestrian stretch (WGS84: 44.40443N, 8.98889E), roadside, 42 m, 20 April 2019, A. Di Turi, C. Aristarchi (FI, GE, GDOR). - Casual alien species new for the flora of Liguria. Solanum bonariense is a perennial shrub native to Uruguay, northern Argentina, and southern Brazil, where it is widespread in pastures. Introduced in Europe as an ornamental, it is nowadays recorded in Italy as a casual species for Lombardia, Lazio, Campania, and as naturalized for Toscana and Sicilia (Galasso et al. 2018a). A well-developed specimen, growing together with Parietaria judaica L., has been recorded in a pedestrian street of Genova among houses surrounded by orchards and gardens. Solanum laciniatum is a species native to New Zealand and Australia (south-eastern Australia, Victoria, and Tasmania) (Simon 1981). This species belongs to Solanum subg. Archaesolanum Bitter ex Marzell, composed of eight species occurring only in the SW-Pacific region (Poczai et al. 2011). In the Euro+Med area, S. laciniatum is recorded in Morocco, France, Spain, Israel, and Tunisia (Valdés 2012), whereas in Italy it is doubtfully occurring based on a record for Puglia (Beccarisi et al. 2015; Galasso et al. 2018a). This species is similar to S. aviculare G.Forst., which mainly differs from S. laciniatum in the shape of the petals (notched in S. laciniatum and acute in S. aviculare) and in the colour of mature fruits (orange-yellow in S. laciniatum and orange-red to scarlet in S. aviculare). About six big tufts, probably originated from cultivated plants at a nearby hotel, were counted mixed with native species typical of the Mediterranean scrub. Furthermore, plants have been present in the same area since 2006, as highlighted by some photos published on the Portal to the Flora of Italy (http://dryades.units.it/floritaly/index.php?procedure=taxon_page&tipo=all&id=11471). Tulipa clusiana is native to Syria and Persia, in the Middle East (Banfi and Galasso 2010), and is recorded as a casual alien in several central-northern Italian regions, and as naturalized in Piemonte, Lombardia, and Marche (Galasso et al. 2018a). In Veneto, there was only one confirmed report, by Busnardo (2000), in Bassano del Grappa (Vicenza). For the Verona province, there is only a historical sample collected by Goiran (1897, 1900, VER) and a recent indication of occasional presence in Custoza (F. Prosser, pers. commun.). In the locality reported here, the population consists of thousands of seedlings, which grow both within a thermophilic grove formed by different species, such as Dioscorea communis (L.) Caddick & Wilkin, Fraxinus ornus L. subsp. ornus, Ligustrum vulgare L., Quercus pubescens Willd. subsp. pubescens, Robinia pseudoacacia L., Rubus ulmifolius Schott, and Sambucus nigra L., and inside olive groves. This species was found in two small woods about 250 meters apart, and more on two other adjacent banks. Other localities have been found on the slopes of Monte Tenda, just above the medieval castle of Soave (WGS84: 45.44145545N, 11.24924856E, 95 m), more than 2 km away from the above-mentioned sites. The native range of Vachellia farnesiana is considered to be the New World (New 1984), and in particular North America (Gilman and Watson 1993). However, its exact origin is nowadays debated (Luken and Thieret 1996; Roskov 2006). In Europe, it occurs in France, Italy, and Spain (Roskov 2006). Currently, according to Galasso et al.
(2018a), it is a casual alien in Sicilia and Sardegna. In this new Calabrian locality, we observed several seedlings near the mature plants. This is the first record for peninsular Italy.
Verbena bonariensis L. (Verbenaceae)
+ (CAS) FVG: Gorizia (Gorizia), Borgo Castello, on the castle walls just after Porta Leopoldina (WGS84: 45.942638N, 13.628783E), on sandstone walls, 100 m, 25 April 2019, F. Roma-Marzio, P. Liguori (FI, Herb. F. Roma-Marzio). - Casual alien species new for the flora of Friuli Venezia Giulia. Verbena bonariensis is native to South America (southern Brazil, Uruguay, Paraguay, northern Argentina) and has been introduced in many countries of Africa, Asia, Australia, and Europe and in the USA (Munir 2002; Nesom 2010). In Italy, it is reported as naturalized alien in Liguria and as casual in Lombardia, Trentino-Alto Adige, Emilia-Romagna, Toscana, Umbria, and Lazio (Galasso et al. 2018a). About five plants were found on the ancient walls, probably as a result of escaped cultivated plants. Specimens were identified using the key reported by Nesom (2010). According to Shi and Kilian (2011), the Sicilian populations of Youngia japonica belong to the autonymic subspecies, probably native to China and naturalized in warm areas of all continents (Galasso et al. 2016). The single Italian record of this species, in Genova (Liguria), is very recent (Galasso et al. 2016). We found approximately 30 individuals growing inside sidewalk cracks and in shady micro-soil located at the base of the walls. In the same area, the herbaceous vegetation consists mainly of several ruderal species linked to anthropic environments. Y. japonica has been observed as alien also in north-eastern Sicilia (A. Crisafulli and R.M. Picone, pers. commun.), namely in Messina along urban roads (Via F. Bisazza), in the flowerbeds and lawns of the Comando Arma dei Carabinieri (near Villa Mazzini), and in Milazzo (Messina) at C.da Scaccia in an uncultivated wet habitat.
- VDA. - Alien species to be excluded from the flora of Valle d'Aosta.
) TOS. - Status change from naturalized to invasive alien for the flora of Toscana.
Eragrostis mexicana (Hornem.) Link subsp. virescens (J.Presl) S.D.Koch & Sánchez Vega (Poaceae)
Nevertheless, further studies are needed to solve this issue and we prefer to provisionally maintain this record under D. squarrosa.
Hofmann Erigeron karvinskianus DC. (Asteraceae)
in an uncultivated grassy field, and the abundance of specimens suggests a naturalization of the species, which can be confirmed by monitoring the site. L. Gubellini, N.
already indicated its occurrence in Casentino as doubtful. No recent information about cultivation of this species in Puglia is available.
Phyllostachys viridis was previously reported in Italy only for Lombardia. Its identity was confirmed by a DNA fingerprinting (RAPD) analysis performed by FEM2-Environment Company (spin-off of the University of Milano-Bicocca) within the BambApp Project (BambApp 2019), using samples from a private botanical collection (T. Froese: Cravanzana, Cuneo, Italy) verified by us as reference base. These populations originated from agamic propagation of nearby cultivated plants.
Effects of temperature on drying kinetics and biochemical composition of Caulerpa lentillifera
Caulerpa lentillifera (phylum: Chlorophyta) is a seaweed that is widely consumed by local communities across Asia. The effects of different oven-drying temperatures on the drying kinetics and phytochemical constituents of C. lentillifera were studied. This kinetic study used three different thin-layer models, i.e., the Lewis, Henderson and Pabis, and Logarithmic models. Among these, the Logarithmic model was the most suitable for describing the seaweed drying behaviour (R² > 0.9923, RMSE < 0.0004). Drying C. lentillifera at 50°C resulted in a higher extraction yield (1.00%) with significant total phenolic and total flavonoid contents of 1.027 mg GAE/g and 59.655 mg RU/g, respectively.
Caulerpa lentillifera (phylum: Chlorophyta), also known as green caviar or sea grapes, is widely consumed fresh or in salad form by local communities in parts of Asia, particularly in coastal regions (Gan et al., 2011; Zawawi et al., 2015). The seaweed is green in colour and grass-like, with a delicate and succulent texture (Amata et al., 2018). The demand for C. lentillifera has increased over the years due to its high nutritive value and wide applications as a cosmetic and animal feed. Nonetheless, the short storage period of raw C. lentillifera causes a supply problem in regions far from the coast. Hence, the seaweed is normally preserved through dehydration to prolong the storage period and resolve the transportation issue, as fresh C. lentillifera contains high humidity (up to 80%) ( al., 2011).
Drying has proven to be the most effective method for preserving seaweed and avoiding progressive degradation and biochemical changes (Nurjanah et al., 2016). Various food drying techniques have been developed, including sun-, oven-, vacuum-, and microwave-drying (Zhang et al., 2010). The main objective of drying is to inhibit microbial activity on food through the removal of water (dehydration), thereby slowing down chemical degradation of the food (Gupta et al., 2011). A common industrial practice for preserving seaweed is drying at 50-80°C (Djaeni and Sari, 2015). Notwithstanding, the drying process consumes high energy and induces physical and chemical changes (texture, colour, and chemical composition) (Guine and Barroca, 2012), ultimately affecting the consistency, colour, scent, nutrition, and phytochemistry of the seaweed (Chan et al., 1997). In this regard, it is important to determine the optimum drying temperature for C. lentillifera to minimise energy consumption and biochemical alteration in the seaweed during the drying process (Stramarkou et al., 2017). Herein, this study aimed to identify the optimum drying temperature using several mathematical models. The crude extraction yield, total phenolic content, and total flavonoid content of the dried seaweed were also investigated.
Drying experiment
Caulerpa lentillifera seaweed was collected from the Blue Lagoon, Port Dickson, Negeri Sembilan (GPS coordinate: 2°24'55.9"N 101°51'15.7"E). The sample was washed with running water to remove sand particles and other residues. Then, 100 g of fresh seaweed was placed in a tray (35×50 cm) and dried at various temperatures between 40 and 80°C (Awang et al., 2021), using an oven (Model 30-750, Memmert, Germany) equipped with a temperature controller and a suction fan. The weight change was measured at 10-min intervals using an electronic balance (Pioner, Shimadzu, Japan). The drying, cooling, and weighing processes were repeated until a constant weight was obtained. The moisture content of the seaweed was measured using a Halogen Moisture Analyzer (Model HB43, Mettler Toledo, USA). All experiments were conducted in triplicate.
Extraction
The dried seaweed was ground into powder using a blender (Model EBM-9182, Elba, Malaysia). Approximately 10 g of dried seaweed was extracted with 200 mL of ethanol (70%) at 60°C for 1 hr (Awang et al., 2017). The extract was filtered using a vacuum pump to remove the solid residues, followed by concentrating the crude extract using a rotary evaporator and then drying in an oven at 40°C for further analysis.
Determination of drying kinetics
The experimental data obtained from the drying experiments were fitted to the three thin-layer models listed in Table 1 (the Lewis, Henderson and Pabis, and Logarithmic models) to describe the drying kinetics. All experimental data were expressed as the dimensionless moisture ratio (MR) given in Equation (1), MR = (M − Me)/(Mo − Me), where M is the moisture content, Mo is the initial moisture content, and Me is the equilibrium moisture content. The drying process was assumed to be controlled by the external resistance between the samples and the surrounding drying air in the oven. The goodness-of-fit of the selected models was evaluated using the coefficient of determination (R²) and the root-mean-square error (RMSE) according to Equations (2) and (3), respectively.
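To make the fitting procedure concrete, the sketch below shows one way measured moisture-ratio data could be fitted to the three thin-layer models and scored with R² and RMSE. It is a minimal illustration in Python using SciPy, not the authors' code: the standard model forms (Lewis: MR = exp(−kt); Henderson and Pabis: MR = a·exp(−kt); Logarithmic: MR = a·exp(−kt) + c) are assumed to match Table 1, and the example data are invented for demonstration.

```python
# Minimal sketch (not the authors' code): fitting thin-layer drying models
# to moisture-ratio data and scoring them with R^2 and RMSE.
import numpy as np
from scipy.optimize import curve_fit

# Standard thin-layer model forms (assumed to match Table 1).
def lewis(t, k):
    return np.exp(-k * t)

def henderson_pabis(t, a, k):
    return a * np.exp(-k * t)

def logarithmic(t, a, k, c):
    return a * np.exp(-k * t) + c

def moisture_ratio(M, Mo, Me):
    # Equation (1): dimensionless moisture ratio.
    return (M - Me) / (Mo - Me)

def goodness_of_fit(mr_obs, mr_pred):
    residuals = mr_obs - mr_pred
    rmse = np.sqrt(np.mean(residuals ** 2))      # root-mean-square error, Eq. (3)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((mr_obs - mr_obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                   # coefficient of determination, Eq. (2)
    return r2, rmse

# Hypothetical example data: drying time (min) and moisture ratio at one temperature.
t = np.array([0, 10, 20, 30, 40, 60, 80, 100], dtype=float)
mr = np.array([1.00, 0.78, 0.61, 0.48, 0.38, 0.24, 0.15, 0.10])

for name, model, p0 in [("Lewis", lewis, (0.02,)),
                        ("Henderson and Pabis", henderson_pabis, (1.0, 0.02)),
                        ("Logarithmic", logarithmic, (1.0, 0.02, 0.0))]:
    params, _ = curve_fit(model, t, mr, p0=p0, maxfev=10000)
    r2, rmse = goodness_of_fit(mr, model(t, *params))
    print(f"{name}: params={np.round(params, 4)}, R2={r2:.4f}, RMSE={rmse:.4f}")
```

In this scheme the best model is simply the one with the highest R² and lowest RMSE across all drying temperatures, which is how the Logarithmic model would be singled out.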
Evaluation of total phenolic content and total flavonoid content
The colourimetric assay of total phenolic content (TPC) was performed using Folin-Ciocalteu reagent according to the method described by Silva et al. (2007) and Ramaiya et al. (2019), with some modifications. Briefly, 1 mL of crude extract (1 mg/mL) was mixed with 2.5 mL of 10% Folin-Ciocalteu reagent, followed by the addition of 2 mL of 7% sodium carbonate after 3 mins. The mixture was incubated in the dark for 90 mins at room temperature, and the absorbance of the samples was measured at 725 nm using a UV-Vis spectrophotometer (Shimadzu UV-1800, Kyoto, Japan). Gallic acid was used as a standard to construct a calibration curve. The results are expressed as milligrams of gallic acid equivalent (mg GAE) per gram of extract (g extract). The total flavonoid content (TFC) of the samples was estimated according to the protocol outlined by Al-Matani et al. (2015). Briefly, 1 mL of the sample was added to 0.3 mL of 5% sodium nitrite. After 5 mins, 0.3 mL of 10% aluminium chloride was added, followed by 2 mL of 5% sodium hydroxide. The absorbance of the samples was measured at 510 nm using a spectrophotometer (Shimadzu UV-1800, Kyoto, Japan), and rutin was used as a standard for the construction of the calibration curve. The results were expressed as milligrams of rutin equivalent (mg RU) per gram of extract (g extract). Both TPC and TFC results were calculated using Equation (4).
where C = concentration of gallic acid/RU from the standard curve, V = volume of sample (mL), and m = mass of sample (g); that is, Equation (4) gives the content as (C × V)/m.
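As a worked illustration of Equation (4), the short function below converts an absorbance read against a linear standard curve into mg equivalents per gram of extract. The calibration coefficients and sample numbers are hypothetical placeholders, not values from this study.

```python
def content_per_gram(absorbance, slope, intercept, volume_ml, mass_g):
    """Equation (4): content = (C * V) / m, with C read off a linear standard curve."""
    c = (absorbance - intercept) / slope   # concentration from the calibration curve
    return (c * volume_ml) / mass_g

# Hypothetical example: gallic acid calibration A = 0.005*C + 0.02, 1 mL sample, 0.01 g extract.
tpc = content_per_gram(absorbance=0.45, slope=0.005, intercept=0.02, volume_ml=1.0, mass_g=0.01)
print(f"TPC ≈ {tpc:.2f} mg GAE/g extract")
```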
Results and discussion
The initial moisture content of the fresh seaweed was 81% (w/w). The time required to reach the equilibrium moisture content during drying of the seaweed was 210, 150, 100, 80, and 60 mins at 40, 50, 60, 70, and 80°C, respectively (Figure 1). This indicates that the drying time at 40°C was the longest, probably because the heat applied at that temperature was not sufficient for vaporisation to take place. The negative correlation between drying temperature and drying time agrees with many previous studies on pumpkin slices (Akpinar, 2006; Doymaz, 2007), Asian white radish (Lee and Kim, 2009), and Melastoma malabathricum (Awang et al., 2021). Table 2 shows the experimental results of the drying process at different temperatures after analysis with the three thin-layer models. It is evident that the experimental data (Figure 1) satisfactorily matched the models, based on the high coefficients of determination (R² > 0.95, RMSE < 0.007).
The quality of the dried seaweed was also evaluated based on the extraction yield, TPC, and TFC. Figure 2 shows a significant increase in the crude extraction yield from 40 to 80°C. TPC and TFC, in contrast, decreased significantly beyond 50°C (Figure 3). According to Badmus et al. (2019), drying at an elevated temperature reduces the TPC, TFC, and antioxidant activity of five species of brown seaweeds (Fucus spiralis, Laminaria digitata, Fucus serratus, Halidrys siliquosa, Pelvetia canaliculata). Similar to terrestrial plants, the extract of Clinacanthus nutans (a herbal medicinal plant) exhibited the highest TPC and antioxidant content at 55°C, while the bioactive compounds in the extract were degraded at high temperature (Baharuddin et al., 2018). Previous studies conducted by Altemimi et al. (2016) showed that peach extract heated above 50°C exhibited a significantly lower TPC. Similarly, the TPC of Schizophyllum commune only increased when heated from 30 to 42.5°C, followed by a decrease in TPC from 42.5 to 55°C (Yim et al., 2013). Nonetheless, these observations are not consistent across all species. For example, the study conducted by Badmus et al. (2019) showed that the TFC of the dried brown seaweed Halidrys siliquosa did not differ from that of the fresh material, similar to Laminaria digitata in terms of its antioxidant activity. Bener et al. (2013) found that an increase in temperature did not improve the extraction and isolation rate of bioactive compounds but caused them to degrade instead. During seaweed extraction, the decrease in TPC after the drying process could be due to the binding of polyphenols with other compounds (e.g., protein) or changes in the chemical structure of polyphenols that result in poor extraction using conventional methods (Mrad et al., 2012). A study by Ling et al. (2015) reported that the oven-dried (40°C) seaweed species Kappaphycus alvarezii had higher total phenolic, flavonoid, anthocyanin, and carotenoid contents, as well as stronger scavenging and reducing abilities, than material dried at 80°C. On the other hand, drying seaweeds at a low temperature (20°C) caused the TPC and TFC to drop by 49 and 51%, respectively (Gupta et al., 2011). These variations in phytochemical content and antioxidant activity as a result of different drying temperatures suggest that the drying temperature significantly impacts either the content or the extractability of potentially antioxidant components (Cruces et al., 2016). Although we determined that drying C. lentillifera at 50°C gives an optimum drying time and may preserve the TPC and TFC, this may not apply to all seaweed species due to variation in the cell walls and physiology of seaweeds (Cox et al., 2012).
Conclusion
From the results presented here, the choice of drying temperature can significantly influence the yield and the presence of phytochemical compounds in C. lentillifera. Higher drying temperatures increase the extract yield; however, adverse effects on the phytochemical content were also observed due to degradation at higher drying temperatures. The Logarithmic model was the best-fitting model for describing the drying process of C. lentillifera under the stipulated conditions. The optimal drying temperature of C. lentillifera can serve as a guide for the food industry in improving the quality of seaweed-based products.
Conflict of interest
The authors declare no conflict of interest.
Distinguishing Lymphomatous and Cancerous Lymph Nodes in 18F-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography by Radiomics Analysis
Background. The National Comprehensive Cancer Network guidelines recommend excisional biopsies for the diagnosis of lymphomas. However, resection biopsies in all patients who are suspected of having malignant lymph nodes may cause unnecessary injury and increase medical costs. We investigated the usefulness of 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG-PET/CT)-based radiomics analysis for differentiating between lymphomatous lymph nodes (LLNs) and cancerous lymph nodes (CLNs). Methods. Using texture analysis, radiomic parameters from the 18F-FDG-PET/CT images of 492 lymph nodes (373 lymphomatous lymph nodes and 119 cancerous lymph nodes) were extracted with the LIFEx package. Predictive models were generated from the six parameters with the largest area under the receiver operating characteristic curve (AUC) in PET or CT images in the training set (70% of the data), using binary logistic regression. These models were applied to the test set to calculate predictive variables, including the combination of PET and CT predictive variables (PREcombination). The AUC, sensitivity, specificity, and accuracy were used to compare the differentiating ability of the predictive variables. Results. Compared with the pathological diagnosis of the patient's primary tumor, the AUC, sensitivity, specificity, and accuracy of PREcombination in differentiating between LLNs and CLNs were 0.95, 91.67%, 94.29%, and 92.96%, respectively. Moreover, PREcombination could effectively distinguish LLNs caused by various lymphoma subtypes (Hodgkin's lymphoma and non-Hodgkin's lymphoma) from CLNs, with the AUC, sensitivity, specificity, and accuracy being 0.85 and 0.90, 77.78% and 77.14%, 97.22% and 88.89%, and 90.74% and 83.10%, respectively. Conclusions. Radiomics analysis of 18F-FDG-PET/CT images may provide a noninvasive, effective method to distinguish LLNs and CLNs and inform the choice between fine-needle aspiration and excision biopsy for sampling suspected lymphomatous lymph nodes.
Introduction
Lymph node enlargement has several causes, including invasion by lymphomas and metastatic cancers (not including lymphoma), which result in lymphomatous lymph nodes (LLNs) and cancerous (not including lymphomatous) lymph nodes (CLNs), respectively [1,2]. Traditional imaging techniques, such as ultrasonography (US), computed tomography (CT), and magnetic resonance imaging (MRI), often rely on the size, location, and shape to determine the nature of a swollen lymph node [3]. However, these features alone are insufficient for a reliable and effective judgment [4]. Therefore, newer imaging techniques, such as contrast-enhanced ultrasonography (CEUS) and positron emission tomography/computed tomography (PET/CT), are currently used to assess the nature of lymph nodes. A previous study demonstrated that LLNs often exhibit a rapid, well-distributed hyperenhancement pattern on CEUS, which may help to distinguish between lymphomatous and benign lymph nodes [5]. Several studies have reported that PET/CT can help distinguish between benign and malignant lymph nodes and provide useful evidence for medical decisions [6][7][8].
The choice between excisional biopsy and needle biopsy for a suspicious malignant lymph node has always been difficult for clinicians. Pathological examination is the gold standard procedure for the diagnosis of malignant lymph nodes. Several medical institutions use fine-needle aspiration (FNA) for preliminary screening of enlarged lymph nodes [9]. FNA is a highly accurate and feasible procedure to diagnose most tumors [10][11][12]. However, it is difficult to accurately diagnose lymphomas by FNA [13]. Even in patients who have undergone FNA, a lymph node resection may nevertheless be required for further examination [14,15], resulting in the unnecessary burden of multiple invasive procedures. Excisional or incisional biopsy is clearly recommended by the National Comprehensive Cancer Network guidelines. Completely resected lymph nodes can provide adequate tissue for histological, immunological, and molecular biological assessments and can enable the accurate diagnosis of lymphoma and the differentiation of its various subtypes [16,17]. Because of the complexity of diagnosis, lymphomas may be misdiagnosed as other benign conditions, such as reactive immunoblastic proliferation [18], autoimmune lymphoproliferative syndrome [19], lymph-node infarction [20], or other tumors. There are vast differences in the treatment of and evaluation methods for different tumors. Therefore, the development of an effective method for the differentiation of LLN and CLN, without the need for additional investigation, time, and extra cost, is an urgent unmet concern in clinical practice.
Using texture analysis (TA), radiomics can be utilized to analyze the voxel gray levels, as well as the distribution and relationship of pixels in US, CT, MRI, and PET/CT images to obtain radiomic features, thereby providing an objective and quantitative assessment of tumor heterogeneity [21,22]. This mode of analysis has been applied to a variety of imaging techniques to distinguish between benign and malignant lesions, such as US imaging of the thyroid [23] and liver [24], CT images of the lungs [25] and kidneys [26], and MRI imaging of the breasts [27]. Moreover, it has been used to evaluate the prognosis of esophageal [28], lung [29], and hypopharyngeal [30] cancers. The results of the abovementioned studies have been mainly limited to the differentiation between benign and malignant lesions; studies reporting on the differentiation of tumor types are rare [31][32][33]. We previously identified the benefits of applying radiomics analysis to PET/CT images to distinguish between renal cell carcinoma and renal lymphoma [34] and between breast carcinoma and breast lymphoma [33]. In this study, we focused on the usefulness of 18F-fluorodeoxyglucose (FDG) PET/CT-based radiomics analysis for distinguishing between LLN and CLN, and, moreover, used this method to classify the different subtypes of LLN in patients with lymphomas. We believe that the results of this study can help clinicians decide the optimal biopsy method (FNA or excision) for sampling diseased lymph nodes and thereby reduce the rates of misdiagnosis and unnecessary application of other invasive procedures.
General Demographic Data.
We evaluated all patients who underwent 18F-FDG PET/CT at our hospital between October 2013 and June 2018. The inclusion criteria were as follows: (1) presence of any solid tumor or lymphoma confirmed by pathological diagnosis, (2) lymph nodes that were invaded by lymphoma or cancer (determined by experienced radiologists and oncologists based on patient imaging characteristics such as FDG uptake and lesion morphology) and clinical information (e.g., treatment status and symptoms), and (3) no history of any systemic treatment before undergoing 18F-FDG PET/CT. The exclusion criteria were as follows: (1) unknown or uncertain pathological diagnosis and (2) combined diagnosis of lymphoma and cancer. Due to the retrospective nature of this study, which spanned a long period of time, informed consent was not sought from the subjects. This study was approved by the Institutional Review Board of our Hospital (Nos. 2019310 and 2019410).
Of note, the diagnosis of lymph node tumor invasion in patients was a clinical diagnosis rather than a pathological diagnosis. The diagnoses were mainly made using the patient's imaging report based on the level of FDG uptake and the morphology of the lymph nodes (such as the presence of fusion, significantly increased volume, irregular edges, and irregular density). All imaging reports were issued by a junior nuclear medicine physician of the Department of Nuclear Medicine in West China Hospital and reviewed by at least one senior nuclear medicine physician. Furthermore, the oncologist also consulted the patient's case data, such as other imaging examinations, signs and symptoms, changes in the lymph node after treatment, and pathological diagnosis. The lymph node was included in the analysis only when the imaging report demonstrated that the node was invaded and other evidence was not contradictory. The lymph nodes were divided into LLN, LLN caused by Hodgkin's lymphoma (HLLN), LLN caused by non-Hodgkin's lymphoma (NHLLN), and CLN groups according to this diagnosis.
Image Acquisition and Clinical Data Collection.
All patients were subjected to an 18F-FDG PET/CT examination using a Gemini GXL PET/CT scanner (containing 16 layers of CT, Philips Medical Systems, Cleveland, Ohio, USA). All patients were instructed to fast for more than 6 hours before the examination, and blood glucose estimation tests were performed before the examination to ensure that the blood glucose levels were below 8.0 mmol/L. An initial low-dose CT (120 kV, 40 mAs, 5 mm slice thickness) was performed approximately 1 hour after the intravenous injection of 5.18 MBq/kg (1.4 × 10⁻⁴ Ci/kg) of 18F-FDG, followed immediately by a whole-body PET scan (head to extremities). Subsequently, PET/CT images (attenuation corrected, based on CT) were generated and interpreted by experienced radiologists. The PET and CT images were reconstructed based on the European Association of Nuclear Medicine Research Ltd (EARL) guidelines (matrix size 4 × 4 × 4 mm and 1.2 × 1.2 × 5 mm voxel size). The data on participants' age, weight, and gender were also recorded.
Texture Analysis.
Texture analysis (TA) is a mathematical description of the gray level intensity and distribution of pixels or voxels in images [35]. It can be used to describe various parameters through statistics-based [36], model-based [37], or transform-based approaches [38] to express heterogeneity within lesions [39]. We used the LIFEx software (version 3.74, IMIV, CEA, Inserm, CNRS, Univ. Paris-Sud, Université Paris Saclay, CEA-SHFJ, Orsay, France) to perform TA of the PET and CT images. Three-dimensional volumes of interest (VOIs) for each malignant lymph node (no limit to the maximum number in a single patient) in every slice were delineated manually by a well-trained radiologist (2 years' work experience). VOIs on PET and CT images were exactly consistent with each other. VOIs smaller than 64 pixels were excluded from the analysis. In the process of delineation, the tumors in each slice were carefully assigned to the VOI, and the surrounding tissues were excluded. Intensity discretization was performed automatically by the software: for PET images, intensity discretization was performed with 64 gray-level bins and intensity rescaling with absolute scale bounds between 0 and 20; for CT images, intensity discretization was performed with 400 gray-level bins and absolute scale bounds between −1000 and 3000 Hounsfield units (HUs) [33]. Thereafter, extraction-based algorithms (fixed thresholding at 40% of the maximum intensity cutoff) [39] were used to extract the radiomic parameters for both the PET and CT slices, including standardized uptake value (SUV) or CT-value parameters, PET parameters, and radiological parameters. The radiological parameters were divided into six groups: shape, gray-level zone-length matrix (GLZLM), gray-level run-length matrix (GLRLM), neighborhood gray-level different matrix (NGLDM), gray-level co-occurrence matrix (GLCM), and histogram (HISTO). During the extraction of radiomic parameters, the researcher was blinded to the patient's clinical information. The resampling voxel size was set at 4 × 4 × 4 mm (PET) and 1.2 × 1.2 × 5 mm (CT). We also measured the short/long diameter of the selected lymph nodes. Forty-seven parameters were extracted from each of the PET and CT images (94 in total). The image processing and radiomics workflow in this study followed the image biomarker standardization initiative (IBSI) guidelines.
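To illustrate the kind of operation described above, the sketch below shows a simplified 40%-of-maximum thresholding step applied to a PET sub-volume, followed by two first-order statistics on the resulting VOI. It is only a schematic approximation in Python/NumPy under assumed array inputs; the actual parameter extraction in this study was performed with LIFEx, whose discretization and feature definitions are more elaborate.

```python
# Schematic sketch (not LIFEx): 40%-of-maximum thresholding of a PET sub-volume
# and a few simple first-order statistics on the resulting VOI.
import numpy as np

def threshold_voi(suv_volume: np.ndarray, fraction: float = 0.4) -> np.ndarray:
    """Return a boolean mask keeping voxels above `fraction` of the maximum SUV."""
    return suv_volume >= fraction * suv_volume.max()

def first_order_features(suv_volume: np.ndarray, mask: np.ndarray, n_bins: int = 64,
                         lo: float = 0.0, hi: float = 20.0) -> dict:
    """Discretize SUVs inside the mask into n_bins over [lo, hi] and compute basic stats."""
    voxels = suv_volume[mask]
    bins = np.clip(((voxels - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    p = np.bincount(bins, minlength=n_bins) / bins.size      # gray-level histogram
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))          # histogram entropy
    return {"SUVmax": float(voxels.max()),
            "SUVmean": float(voxels.mean()),
            "histogram_entropy": float(entropy)}

# Hypothetical example: a random 16x16x16 SUV sub-volume around a lymph node.
rng = np.random.default_rng(0)
suv = rng.gamma(shape=2.0, scale=2.0, size=(16, 16, 16))
mask = threshold_voi(suv, fraction=0.4)
print(first_order_features(suv, mask))
```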
Statistical Analyses.
Data were randomly divided into a training set and a test set (ratio 7:3), and the ratio of lymph node types in the training set and test set remained constant. If the numbers of lymph nodes in the two compared types were extremely unbalanced (a ratio of less than 1:3), consistency of sample numbers was maintained by oversampling in the training set. When oversampling was performed, samples in the smaller group were randomly selected and replicated until the amount of data was equal to that in the other group. In the training set, we used the area under the receiver operating characteristic (ROC) curve (AUC) to identify and compare the screening effectiveness of the radiomic parameters. The optimal radiomic parameters (the six parameters with the largest AUC in PET or CT images) were used for modeling by binary logistic regression; based on the above models, new predictive variables were created, including the CT predictive variable (PREct), the PET predictive variable (PREpet), and the combination of PET and CT predictive variables (PREcombination). We chose six parameters because, according to the preanalysis, the six parameters with the largest AUC were relatively stable, and this number of parameters was sufficient to achieve a good AUC, sensitivity, and specificity within both the training and test sets. Then, we predicted the type of malignant lymph nodes in the test set using the predictive models generated in the training set and compared the predictions with the pathological results of the primary tumor. In addition, we also performed the abovementioned analysis with the maximum standardized uptake value (SUVmax) as a separate prediction parameter. ROC curves and AUCs were used to compare and analyze the new predictive variables in the training and test sets. The best cutoff score corresponded to the top-left point on the ROC curves in the training set [40]. The AUCs of these predictive variables in the test set were compared, and the result was verified by the z test [41].
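A compact sketch of this kind of workflow is given below: minority-class oversampling in the training split, a six-feature binary logistic regression, selection of the cutoff closest to the top-left corner of the ROC curve, and evaluation on the held-out test set. It uses scikit-learn with a synthetic feature matrix; the data, the 7:3 split, and the chosen functions are placeholders for illustration and do not reproduce the authors' models or results.

```python
# Minimal sketch (not the authors' pipeline): oversampling, 6-feature logistic
# regression, a "top-left" ROC cutoff, and test-set evaluation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in for six radiomic parameters and an LLN (1) vs CLN (0) label.
X, y = make_classification(n_samples=492, n_features=6, n_informative=4,
                           weights=[0.24, 0.76], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Oversample the minority class in the training set (sampling with replacement).
rng = np.random.default_rng(0)
counts = np.bincount(y_tr)
minority = np.flatnonzero(y_tr == np.argmin(counts))
extra = rng.choice(minority, size=abs(counts[0] - counts[1]), replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

model = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)

# Choose the training-set cutoff closest to the top-left corner (FPR = 0, TPR = 1).
fpr, tpr, thr = roc_curve(y_bal, model.predict_proba(X_bal)[:, 1])
cutoff = thr[np.argmin(fpr ** 2 + (1 - tpr) ** 2)]

# Apply the model and cutoff to the held-out test set.
p_test = model.predict_proba(X_te)[:, 1]
pred = (p_test >= cutoff).astype(int)
print(f"test AUC = {roc_auc_score(y_te, p_test):.3f}, "
      f"accuracy at cutoff {cutoff:.2f} = {(pred == y_te).mean():.3f}")
```

The same pattern would be run once per comparison (LLN vs CLN, HLLN vs CLN, NHLLN vs CLN, HLLN vs NHLLN) and once per feature source (CT, PET, or both) to obtain variables analogous to PREct, PREpet, and PREcombination.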
Patient Characteristics and Imaging Parameters.
A total of 492 lymph nodes from 324 eligible patients were screened (Table 1). The SUVmax, mean CT value, long/short diameter of the lymph nodes, and patient characteristics for each group in the training set and test set are presented in Table 2.
ROC Analysis of Potentially Optimal Radiomic Parameters.
In the LLN vs CLN, HLLN vs CLN, and NHLLN vs CLN groups, the six radiomic parameters with the largest AUC showed no appreciable differences, with CTvalue_min_CT and NGLDM_Coarseness_PET being the parameters with the largest AUC in all groups. In the HLLN vs NHLLN group, the six radiomic parameters with the largest AUC differed from those of the previous three groups. In other words, different radiomic parameters are suited for distinguishing between LLN and CLN than for distinguishing HLLN from NHLLN (Table 3).
Regression Coefficient and Radiomic Predictive Variables.
The six radiomic parameters with the largest AUCs in each group were used for modeling by binary logistic regression (Table 4). Based on these models, new predictive variables (PREct, PREpet, and PREcom) were derived for each lymph node, and their cutoff values were evaluated (Table 5). Two examples are presented in Figure 2 and Table 6 to describe how the abovementioned predictive models are applied to the test set to derive predictive variables, followed by predictions of the type of malignant lymph node.
ROC Test of Predictive Variables.
The methods described in the above examples were applied to all lymph nodes in the training and test sets to derive the corresponding predictive variables (PREct, PREpet, and PREcom) and prediction results. ROC analysis was used to test the predictive performance of these variables. In the training set, for distinguishing between LLN and CLN, PREcombination yielded an AUC of 0.96, sensitivity of 92.86%, specificity of 93.98%, and accuracy of 92.81% (the cutoff point was 0.39); in the test set, the AUC was 0.95, while sensitivity, specificity, and accuracy were 91.67%, 94.29%, and 92.96%, respectively. SUVmax was not a good predictor, with an AUC < 0.75 in every group. Other outcomes for the differentiation between HLLN and NHLLN, as well as for the differentiation of HLLN or NHLLN from CLN, are presented in Table 5. The ROCs for all predictive variables in all test groups are shown in Figure 3.
Comparison of the Diagnostic Ability among the Three Radiomic Predictive Variables and SUVmax.
PREcom had the highest AUC in each group, with some of the differences being statistically significant. The difference in AUC between PREct and PREpet was not statistically significant in most groups. The AUC of SUVmax was significantly lower than that of the other predictive variables in each group (Table 7, Figure 3).
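The z test behind these AUC comparisons can be sketched as follows, assuming the Hanley–McNeil standard-error approximation and treating the two ROC curves as independent; the cited method [41] may differ (for example, by accounting for correlation between models evaluated on the same cases), and the sample sizes below are invented.

```python
import math
from scipy.stats import norm

def hanley_mcneil_se(auc, n_pos, n_neg):
    """Approximate standard error of an AUC (Hanley & McNeil, 1982)."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    var = (auc * (1 - auc) + (n_pos - 1) * (q1 - auc ** 2)
           + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg)
    return math.sqrt(var)

def compare_aucs(auc1, auc2, n_pos, n_neg):
    """Two-sided z test for the difference of two AUCs, assuming independence."""
    se = math.sqrt(hanley_mcneil_se(auc1, n_pos, n_neg) ** 2 +
                   hanley_mcneil_se(auc2, n_pos, n_neg) ** 2)
    z = (auc1 - auc2) / se
    return z, 2 * (1 - norm.cdf(abs(z)))

# Illustrative values only (not the study's exact sample sizes)
z, p = compare_aucs(0.96, 0.75, n_pos=140, n_neg=200)
print(f"z = {z:.2f}, p = {p:.4f}")
```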
Discussion
This study is the first to report the use of radiomics for the analysis of PET/CT images of malignant lymph nodes and to construct a predictive radiomics model to distinguish between LLNs and CLNs. Furthermore, we found that this approach could be used to distinguish between HLLN and CLN, NHLLN and CLN, and HLLN and NHLLN. We believe that radiomics based on 18F-FDG PET/CT is a promising tool for distinguishing between LLN and CLN, since it provides a useful reference for further clinical analysis of suspected malignant lymph nodes. Our results provide a new and reliable way to distinguish between LLN and CLN. Usually, radiologists examine images visually and make a diagnosis based on their knowledge and experience [42]. However, important information may be missed by visual examination [43], rendering a quantifiable, objective method necessary. Lymphomas and other types of cancer show significant differences in biological behavior and spatial structure [44,45], which may be the primary cause of the heterogeneity that enables their differentiation by radiomics. Our study confirmed that the radiomics-based analysis of CT and PET images was effective in distinguishing between LLNs and CLNs. Therefore, the clinical application of these findings, combined with the knowledge and experience of radiologists, could be instrumental in judging the nature of lymph nodes and provide valuable guidance for determining the appropriate biopsy method.
Radiomics functions differently for different imaging techniques. Previous studies have found that the AUC, sensitivity, and specificity of PREct, PREpet, and PREcom were significantly different when distinguishing breast cancer from breast lymphoma and renal cell carcinoma from renal lymphoma [33,34]. In this study, PREcom demonstrated the highest differentiating ability in every scenario, and PREpet was only narrowly inferior to PREct and PREcom. The close similarity between the differentiating ability of PREct and PREcom was unexpected, although it is consistent with the findings of our previous research [33]. We speculate that there are two reasons why CT metrics demonstrate high diagnostic performance and play a decisive role in PREcombination, which is completely different from the usual clinical scenario [46]. First, PET images may act as a guide to depict areas that should be taken into consideration, so that radiomic parameters from CT images can be extracted from these areas. Second, high differentiating ability may be achieved by CT image-based radiomics analysis itself, because several studies have shown that texture analysis based on CT images alone can identify whether lymph nodes are invaded by tumors [47][48][49]. We believe that PET/CT-based PREcom is highly effective in distinguishing between LLNs and CLNs, and that this high efficiency combines the high AUC of CT metrics with the localization effect of PET metrics.
We also examined the ability of radiomics to distinguish between different types of malignant lymph nodes. For distinguishing LLNs (and LLNs of different subtypes) from CLNs, our methods achieved a high AUC, sensitivity, and specificity. This may be attributed to the high heterogeneity between CLN and LLN. On the other hand, when distinguishing between HLLN and NHLLN, the AUC, sensitivity, specificity, and accuracy were all lower than those for separating CLN and LLN. This could be related to the lower heterogeneity between lymphoma subtypes than between lymphomas and cancer.
SUVmax is often used as an indicator to distinguish between benign and malignant lesions, and in some cases even high specificity and sensitivity can be achieved [50]. However, in our study, SUVmax was not suitable as an indicator for distinguishing between different types of malignant lymph nodes. In all groups, the AUC, specificity, sensitivity, and accuracy of SUVmax were significantly lower than those of PREct, PREpet, and PREcom, and the absolute values were also low. We believe this is because SUV values are high in most malignant diseases, leading to insufficient discrimination between different types of malignant disease. The current trends in texture research involve the use of machine learning/deep learning to avoid the tedious process of manual operation and the accompanying uncertainty. In studies using machine learning/deep learning, texture analysis based on dual-energy CT, full-field digital mammography, dual-time 18F-FDG PET/CT, and biparametric MRI can identify benign and malignant diseases with high efficiency (AUC between 0.84 and 0.96, depending on the disease and analysis method) [51][52][53][54]. The AUCs in these studies did not differ significantly from that in our study, but the abovementioned studies focused on distinguishing between benign and malignant tumors.
At present, the final diagnosis of tumors is determined by biopsy. FNA is acceptable or recommended for most neoplastic lymph nodes, but as noted earlier, FNA is not sufficient to diagnose lymphoma (according to the National Comprehensive Cancer Network Guidelines). The application of FNA in a patient with lymphoma may render an accurate diagnosis difficult, and a repeat lymph node resection may be indicated in such a patient in order to obtain sufficient biopsy material [14]. However, blind biopsy of a swollen lymph node may cause unnecessary damage and increase medical costs. Although biopsy is the gold standard technique for the diagnosis of all malignant diseases, it has certain drawbacks: it is usually invasive, nonrepeatable, and time-consuming and can only be performed on a single lesion [55]. Most patients undergo imaging tests (PET, CT, MRI, etc.) to determine the location or extent of a lesion. Radiomics extracts more information (often missed on visual examination) from existing data to yield valuable diagnostic reference information without increasing medical costs. Moreover, the method devised herein has acceptable differentiating reliability even if a patient only undergoes CT and not PET/CT. Therefore, radiomics can be an accurate screening method without the requirement of additional resources.
This method can provide a reliable reference for clinicians to determine the optimal biopsy method for sampling tumors and to avoid misdiagnosis or unnecessary damage. The present method is efficient and noninvasive and does not require additional testing for distinguishing between CLN and LLN. Future research should be directed towards the application of PET/CT in the differential diagnosis of lymphomas. Some studies on the radiomics analysis of US, CT, and MRI have reported good discrimination of papillary thyroid microcarcinoma [23], primary lung cancer [25], renal cell carcinoma [26], and prostate cancer [56] from benign lesions. Similarly, PET/CT-based radiomics analysis has been used to distinguish gliomas [57], thyroid cancer [23], and lung cancer [58] from benign lesions. A study using TA in combination with machine learning to distinguish the nature of neck lymph nodes also achieved very good results: an accuracy of up to 93% and 80% for distinguishing lymphoma and inflammatory nodes, respectively, from normal nodes, and of 92% for distinguishing benign from malignant lymph nodes [53].
Figure 2: Two examples of how the predictive models work (see Table 6): VOIs (purple) are delineated in every slice of the original PET/CT images, and parameters are extracted with the LIFEx software at 40% thresholding. Abbreviations: VOI, volume of interest.
However, the abovementioned studies focused on the differentiation between benign and malignant lesions; reports on the use of radiomics for differentiating between different types of malignant lesions are rare [31][32][33]. Moreover, some earlier studies have reported the feasibility of radiomics-based TA of PET/CT imaging for distinguishing between renal lymphoma and renal cell carcinoma and between breast lymphoma and breast carcinoma [33,34]. In fact, the differential uptake of 18F-FDG by the lesion alone can be a good indicator for the distinction of benign and malignant lesions [59,60]. However, the differentiation of different malignant tumors based exclusively on the quantity of 18F-FDG uptake is difficult, which suggests that radiomics is a better method for the differentiation of malignant tumors. Our research focused on the differentiation between LLN and CLN. Compared with earlier studies, the sample size of the present study was larger and the results obtained were more significant, which is of great practical and clinical significance.

The present study has a few limitations. First, this was a retrospective, single-center study, which may limit the generalizability of the results. Second, the inclusion and exclusion criteria employed in this study resulted in the accrual of a small CLN sample, which was not subdivided further. Third, cases of diffuse large B-cell lymphoma accounted for a large proportion of the LLN samples, which may have led to potential bias in the comparison of LLN caused by NHL and CLN. Fourth, the CT images were obtained from PET/CT scans for the radiomics study, which may have affected the quality of the CT images and, in turn, reduced the predictive performance of PREct. Fifth, we used a wide range (between −1000 and 3000 HU) for intensity discretization of CT images, based on our previous study findings [33]. This is outside the general HU range of lymph nodes [61] and may have an impact on TA in CT. However, given the relatively good results of both the previous and current studies, we believe that these effects are likely to be minor. Finally, as described earlier, the diagnosis of lymph node invasion by tumors in most patients was based on imaging reports and clinical data. However, PET/CT is very good at diagnosing whether lymph nodes have been invaded by multiple types of tumor (including lymphoma), especially in terms of specificity (92.06%-100%) [62][63][64][65], which ensures a low chance of including nontumorous lymph nodes. Nevertheless, collecting and investigating clinical data can further reduce the inclusion of nontumorous lymph nodes. However, because the present study is a retrospective study, it was not possible to perform a biopsy on every suspicious lymph node.
This makes the inclusion of a very small number of non-neoplastic lymph nodes inevitable. However, the relatively large sample size of this study may partially alleviate the impact of this situation. In addition, the incidence of multiple or secondary cancers was relatively low, mostly due to the side effects of subsequent cytotoxic treatments or radiotherapy [66]; our included patients did not undergo systemic treatment (including chemotherapy and radiotherapy) before undergoing PET/CT. Therefore, we believe that this issue would have minimal impact on our results.
Conclusions
Radiomics based on 18F-FDG PET/CT images may provide an effective noninvasive modality for distinguishing between LLN and CLN and may even be applicable for the differentiation of LLN caused by different lymphomas. This modality can help clinicians decide on the method of biopsy and avoid misdiagnosis or unnecessary procedures. However, multicenter studies with large samples are required to validate these preliminary results.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Disclosure
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Clinical practice. Diagnosis and treatment of cow’s milk allergy
Introduction Cow’s milk allergy (CMA) is thought to affect 2–3% of infants. The signs and symptoms are nonspecific and may be difficult to objectify, and as the diagnosis requires cow’s milk elimination followed by challenge, often, children are considered cow’s milk allergic without proven diagnosis. Diagnosis Because of the consequences, a correct diagnosis of CMA is pivotal. Open challenges tend to overestimate the number of children with CMA. The only reliable way to diagnose CMA is by double-blind, placebo-controlled challenge (DBPCFC). Therapy At present, the only proven treatment consists of elimination of cow’s milk protein from the child’s diet and the introduction of formulas based on extensively hydrolysed whey protein or casein; amino acid-based formula is rarely indicated. The majority of children will regain tolerance to cow’s milk within the first 5 years of life. Conclusions Open challenges can be used to reject CMA, but for adequate diagnosis, DBPCFC is mandatory. In most children, CMA can be adequately treated with extensively hydrolysed whey protein or casein formulas.
Introduction
The prevalence of cow's milk allergy (CMA) is estimated to be between 2% and 3% in infants and marginally lower in older children [1,2]. The percentage of parents that believe their child has CMA (or any other food allergy), however, amounts to between 5% and 20% [2][3][4]. Signs and symptoms of CMA are nonspecific and often difficult to objectify. Due to diagnostic burdens, the number of children treated for CMA is probably two to three times higher than justified [5]. A wrong diagnosis of CMA may not only result in somatisation but also in insufficient topical treatment of eczema, fear for or problems with the introduction of solids, and dietary deficiencies. Moreover, the long-term elimination of cow's milk protein (CMP) in a sensitised child without CMA may elicit severe adverse reactions when cow's milk is reintroduced [6]. Careful diagnosis of CMA, therefore, is of utmost importance.
Definitions
Adverse reactions to CMP can be present from birth, even in exclusively breast-fed infants. Not all reactions are of allergic nature. In 2001, the EAACI published a report on the terminology of adverse reactions [7]. The umbrella phrase, food hypersensitivity, covers non-allergic food hypersensitivity (traditionally named 'food intolerance') and allergic food hypersensitivity (food allergy). The latter requires an underlying immune mechanism. Most children with CMA have immunoglobulin E (IgE)-mediated allergy as a manifestation of their atopic constitution, with or without atopic eczema, asthma, or allergic rhinitis. A small group have cell-mediated allergy with gastro-intestinal symptoms [8].
History and physical investigation
Although signs and symptoms themselves (Table 1) cannot be used to diagnose CMA, history may be helpful as it can give clues to other diagnoses. Symptoms may start after the substitution of breast-feeding with formula feeding. The repeated occurrence of urticaria or rash shortly after CMP ingestion is suggestive of CMA. In general, signs and symptoms occurring late (>2 h) after the consumption of CMP are not caused by CMA [9]. Also, inconsistent signs and symptoms and those that do not emerge after every feeding must have a different aetiology.
The concurrent presence of other signs of atopy, such as eczema, wheezing, and asthma, increases the likelihood of CMA but cannot be used as a diagnostic proof. Especially the relation between CMA and eczema is difficult to assess. Although they can be present simultaneously and although CMP challenge may aggravate moderate to severe eczema in about 30% of cases [10][11][12], there is no proven relationship with mild eczema; moreover, there is no solid evidence that eczema can be expected to improve significantly on a CMP elimination diet [13]. Atopic eczema should be treated adequately with topical medication before CMA is considered [12]. Generally, other physical signs are lacking or nonspecific. Growth should be monitored closely.
Laboratory investigation
The role of laboratory tests in CMA diagnosis is debatable. The tests used in clinical practice only reveal sensitisation to CMP, which is not necessarily followed by clinically relevant allergy. Over 50% of sensitised children do not have food allergy [14][15][16]. In our experience, positive skin prick tests and allergen-specific IgE tests tend to be falsely interpreted as proof of CMA. This is not at all innocuous [6].
Although there is a strong positive correlation between the level of allergen-specific IgE and the chance of having CMA, unequivocally high specific IgE titres are rare and may occur in non-allergic children [17][18][19]. In general practice, therefore, laboratory tests are seldom helpful. The only way to prove CMA is through elimination and challenge.
Cow's milk challenge
After the elimination of CMP from the child's or the mother's diet, signs and symptoms should disappear within a few days. Atopic eczema, when caused by CMA, may take 4 weeks to improve sufficiently. Upon challenge, renewed confrontation with CMP results in recurrence of the presenting signs and symptoms.
Challenges may be performed either open or double-blind. With open challenges, both the staff performing the test and the parents know that the child is given CMP and in what amount. Double-blind placebo-controlled food challenges (DBPCFC) are designed to withhold this information from both the parents and the staff until afterwards. They are performed with placebo and verum in random order, and CMP is concealed in a way that both test feedings look and taste alike. DBPCFCs are superior to open challenges, but they are difficult to perform, require extensive preparation and are relatively expensive [9].
Open challenge
Open challenges can be used as the first diagnostic step. Challenges should follow an approved protocol, suited to the circumstances. In 1995, a national protocol for CMA diagnosis and treatment was introduced in Dutch well-baby clinics, including a simple open challenge, which is still in use [20]. After 2 weeks of elimination diet, the child ingests 10 ml of the original formula while being supervised for 1 h; on the three following days, the formula is given in increasing amounts [20]. Although the protocol was originally developed to avoid inappropriate CMA diagnoses, it is now thought that it may, on the contrary, induce false-positive results; in a revised approach [23], the child's own formula is given in increasing amounts over 3 h (Table 2). In our opinion, however, open challenges should only be used to reject CMA [1,10,12].

DBPCFC

Double-blind challenges are the gold standard. In 2007 as well, the Health Council of the Netherlands issued a report asking for the general introduction of DBPCFCs [1]. Dutch paediatric allergy centres and many hospitals already practice DBPCFCs [24][25][26]. They can be performed in well-baby clinics and general practices as well, as long as basic precautions have been met, including thorough knowledge of the procedure, careful patient selection and the equipment to treat adverse reactions. The importance of DBPCFCs is underscored by the fact that between 13% and 30% of placebo tests elicit adverse reactions [24,27]. There is no generally accepted protocol for DBPCFC. The protocol presented here is used in the Wilhelmina Children's Hospital [25].
Preparation

The diet should be CMP free for at least 2 weeks. The patient's condition should be stable, especially concerning skin symptoms. Topical corticosteroids may be continued, but antihistamines should be discontinued at least one week in advance. Preparation includes a thorough history for previous adverse reactions.

Safety

Tests should be performed in a day-care setting or during admission. Depending on the severity of previous signs and symptoms, a monitoring device and intravenous access may be needed. Clemastine and epinephrine for parenteral use must be at hand. The personnel should be well trained, also regarding the management of (rare) severe acute reactions.

Test material

The child is given his/her own hypoallergenic formula or expressed breast milk. The research kitchen prepares coded bottles with (verum) or without (placebo) 5 g Protifar® powder (Nutricia/SHS), containing 4.4 g CMP, per 250 ml formula.

Procedure

Placebo and verum are administered on separate days, preferably 1 week apart. The test formula is given in increasing dosages at fixed intervals (Table 2) [26]. Adverse reactions are recorded. After a negative test, the child remains under supervision for 2 h; after a positive test, for 4 h. The parents are asked to report late reactions. After the second test, a follow-up period of at least 48 h is observed before the seal is broken [26].

Evaluation

The test is discontinued when the child experiences objective adverse reactions, subjective reactions that persist for 30 min or longer, or repeated short-lived subjective reactions [9]. Allergic and non-allergic reactions are assessed separately and in the light of the child's history (Fig. 1). The final interpretation of the tests is given in Table 3. The DBPCFC is considered negative when verum did not elicit adverse reactions or when reactions following verum are not worse than following placebo. Even DBPCFCs, however, may not provide unequivocal results [27,28].
Risk assessment
Although it is impossible to predict reaction severity during food challenges [29], some rules apply. Severe adverse reactions are more likely with previous severe reactions, with previous reactions on very low CMP doses, in older children, in children suffering from asthma and after prolonged exclusion of cow's milk [6,30,31]. Severe reactions with CMP challenges, however, are rare. During over 12 years of open challenges in Dutch well-baby clinics [20], no severe adverse events have been reported to the supervising committee (K.I. van Drongelen, Netherlands Nutrition Centre, personal communication) [20]. With the DBPCFC protocol presented here, over 500 challenges have been performed without any severe adverse events. Hence, CMP challenges can be safely performed in general practices, provided basic safety precautions are met. High-risk challenges should be performed in the hospital.
Cow's milk reintroduction
When CMA is refuted, standard formula and dairy products can safely be reintroduced in the diet of the child or the breast-feeding mother. Sometimes the child's illness has put so much strain on the parents that the help of a dietician is required to complete the transition to a normal diet. When, nevertheless, adverse reactions develop after reintroduction, this may be due to the natural course of the underlying condition (eczema), but often expresses the preset conviction of the parents that notwithstanding the test outcome, their child is suffering from CMA. These signs or symptoms are likely to disappear when the introduction is continued.
Therapy
Elimination of CMP from the diet is at present the only proven therapy.
Breast-fed infants

Breast-feeding mothers need to eliminate all dairy products from their diets. There is controversy about other measures; as the child is at increased risk of other food allergies, it could be wise for the mother to eliminate allergens such as soy, egg and beef as well [8]. This increases the burden for the mother, however, and may provoke the failure of breast-feeding. A practical approach would be to start with CMP elimination and to eliminate other products only when the child remains symptomatic.

Bottle-fed infants

Formula is replaced by hypoallergenic formula based on extensively hydrolysed CMP [32]. There is limited experience with hydrolysates from other sources, such as soy and collagen. The use of soy formula in infants <6 months is discouraged [23,33]. The only formulas suitable for treatment are those that meet the criterion of being tolerated (with 95% confidence interval) by at least 90% of patients with proven CMA [34]. The protein source may be based on extensively hydrolysed whey protein (eHW) or casein (eHC). Children that do not tolerate eHW may be able to tolerate eHC, and the other way round; despite differences in production and in vitro test results, there is no proven difference in clinical efficacy between both groups of formulas.
Some subgroups, including children with non-IgE-mediated gastrointestinal CMA and severe atopic eczema, may show better results with amino-acid-based formulas as opposed to eHC or eHW [35]. Amino-acid-based formulas should be restricted to children who fail to tolerate extensively hydrolysed formulas.
Solids

There is no need to postpone the introduction of solid foods or to follow a detailed introduction schedule. Most children can tolerate other (non-dairy) foods when introduced after the age of 4 months. In highly allergic children, solids are best introduced stepwise: only one or two new foods every 3 days. Because many parents are anxious to proceed with solid food introduction, dietary advice and guidance may be necessary.

Counselling

The diagnosis of CMA has a great impact on the family. Proper education of parents and caretakers is essential. They need not only to learn avoidance strategies, such as reading food labels and avoiding high-risk situations, but also to recognise early signs and symptoms and to treat acute reactions. Antihistamines are prescribed for mild dermal conditions but will not suffice for severe systemic reactions. Anaphylactic reactions to CMP are rare; the parents of children with a history of anaphylaxis should be provided an epinephrine auto-injector and a written individualised treatment plan [36].
Tolerance induction
In the past decade, there is increasing interest in specific oral or sublingual immune therapy as a treatment option for CMA in older children [37,38]. Immune therapy may lead to an increased tolerance threshold for CMP with persisting CMA [39] and may even induce permanent tolerance to CMP [37]. More research needs to be done before immune therapy can be offered as a competing therapeutic option.
Prognosis

CMA usually is a temporary condition. It is suggested that by the age of 3 years, 85% of children have regained tolerance to CMP [40]. More recent studies, however, are less optimistic; IgE-mediated CMA is reported to persist up to the age of 8 years in 15% [41] to as many as 58% of children [42]. It is advisable to repeat challenges at regular intervals in order to keep the child on an elimination diet no longer than strictly necessary. There is no reason for DBPCFCs unless the diagnosis has never been made properly. Challenges can be scheduled at the ages of 12, 18 and 24 months and yearly thereafter.
Development of marine biodiversity database (BISMaL) to enable estimation of past habitat conditions for marine life in the northwestern Pacific
Abstract Global activities involving the collection of marine biodiversity information have provided a large amount of biological observation records that cover various spatiotemporal areas. To predict biological responses or distribution changes in response to environmental changes by using these observation records, it is essential to analyze not only the current marine physicochemical environmental conditions but also the past conditions when the organisms were observed. We developed a new function to estimate the past marine environmental conditions for the observation records in our marine biodiversity database (Biological Information System for Marine Life: BISMaL) and examine whether the database can reliably estimate thermal habitats for both benthic and planktonic marine organisms. For the benthic squat lobster Shinkaia crosnieri, the estimated and observed in situ temperatures were similar to each other. For the planktonic chaetognaths Krohnitta pacifica and K. subtilis, the estimated temperatures showed clear seasonal changes specific to their distribution areas. These results indicated that BISMaL can reliably provide past habitat conditions regardless of planktonic or benthic lifestyles. BISMaL, which provides both biological observations and estimated past environmental conditions through web services, could lower the barrier to data access and use and make data-driven science available not only for data scientists but also for various marine scientists, such as taxonomists, ecologists and field scientists. Database URL: https://www.godac.jamstec.go.jp/bismal/e/
Introduction
Global efforts to collect marine biodiversity information, such as the Ocean Biodiversity Information System (OBIS, https://obis.org/) and the Global Biodiversity Information Facility (https://www.gbif.org/), have compiled a large number of biological observation records that cover various spatiotemporal areas. Based on these records, many studies have been published, including predictions of biological community responses to ocean warming (1-3) and latitudinal patterns of marine biodiversity (4-7). To predict biological responses or distributional changes in response to global environmental changes by using these observation records, it is essential to analyze not only current environmental conditions but also past conditions when organisms were observed. Of the vast number of accumulated observation records, however, biological observation records that have associated environmental measurements such as water temperature or salinity are very limited. To assess this issue, we attempted to develop a new function in a biodiversity database to estimate the associated environmental conditions at the time of past biological observations. The Biological Information System for Marine Life (BISMaL) is a database that accumulates and disseminates marine biodiversity data derived from scientific research mainly in the northwestern Pacific (8, 9) and was developed and began operating in 2009. Currently, over 1 700 000 biological observation records are published in BISMaL, but a very limited number of the records have associated environmental measurements. Estimating associated environments for biological observation records can be performed by referencing open data that are part of a global reanalyzed ocean dataset such as the World Ocean Atlas (WOA, 10), which provides environmental data with a resolution of 0.25° grids (see https://accession.nodc.noaa.gov/NCEI-WOA18). When available, it is preferable to use a regionally specialized ocean dataset with a higher resolution. In the northwestern Pacific region, Usui et al. (11) produced the Four-dimensional Variational Ocean ReAnalysis (FORA) dataset (https://www.godac.jamstec.go.jp/fora/e/index.html) over 33 years with a resolution of 0.1° grids. Such regionally specialized ocean datasets with high data resolution are valuable as a source of information about the past environment. Therefore, we integrated FORA into BISMaL to attempt to regenerate past habitat conditions for biological observation records by referring to the spatially and temporally closest environmental conditions.
In this paper, we introduce the BISMaL database, which integrates ocean datasets to estimate past habitat conditions for marine organisms, and then examine whether BISMaL can estimate reliable habitat conditions for marine organisms. Specifically, the thermal habitats of marine organisms with two different lifestyles (benthic and planktonic species) were estimated: one is a benthic squat lobster, Shinkaia crosnieri (Baba and Williams, 1998), that is distributed in deep-sea hydrothermal vents, and the others are two planktonic chaetognaths, Krohnitta pacifica (Aida, 1897) and K. subtilis (Grassi, 1881), that are commonly observed around Japan. The reliability of the estimated thermal habitat of S. crosnieri was examined by comparing estimated values and actual measured values when the species was observed. For the two Krohnitta species, whether the estimated thermal habitat can accurately represent seasonality specific to Japanese waters was examined. Additionally, we examined whether a data-driven hypothesis can be extracted by detecting differences between the estimated thermal habitats of the two Krohnitta species.
Data in BISMaL
Data in BISMaL are mainly taxonomic information (Figure 1, left panel) and biological observation records (Figure 1, right panel). Observation records contain latitude, longitude, scientific name, date-time, observation methods and other terms in Darwin Core format 2.0, which is an international data standard for exchanging biodiversity information (12). Taxonomic information is a list of scientific names extracted from taxonomic research papers, books or monographs. Each record in the list is composed of scientific name, taxonomic rank, references, accepted/unaccepted situation, Japanese common name, registered date/modified date and national Red List status in Japan. In February 2021, 54 877 scientific names were registered in BISMaL, of which 19 252 taxa had biological observation records. Maintaining the observation records and the taxonomic information within a single database allows flexible handling of the records. For example, in the case of trying to access records for a given scientific name, all records with the scientific name and its unaccepted (synonymized) scientific names can be retrieved without omission by systematically defining the relations between an accepted and unaccepted name. Furthermore, by defining a taxonomic hierarchical structure in the system, observation records of a higher taxonomic rank (e.g. order or family level) can be collectively retrieved from the records of its subordinate ranks.
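The synonym- and hierarchy-aware retrieval described above could be sketched roughly as follows; the in-memory dictionaries stand in for BISMaL's PostgreSQL tables, and the names and records are invented for illustration only.

```python
# Hypothetical, simplified stand-ins for BISMaL's taxonomy and occurrence tables
accepted_of = {"Krohnitta grayi": "Krohnitta pacifica"}          # synonym -> accepted name
parent_of = {"Krohnitta pacifica": "Krohnitta", "Krohnitta subtilis": "Krohnitta"}
records = [
    {"scientificName": "Krohnitta pacifica", "decimalLatitude": 34.0, "decimalLongitude": 138.0},
    {"scientificName": "Krohnitta grayi", "decimalLatitude": 33.5, "decimalLongitude": 137.2},
    {"scientificName": "Krohnitta subtilis", "decimalLatitude": 30.1, "decimalLongitude": 140.0},
]

def names_under(taxon):
    """All names whose accepted name equals `taxon` or sits below it in the hierarchy."""
    names = set()
    for name in set(parent_of) | set(accepted_of) | {taxon}:
        node = accepted_of.get(name, name)   # resolve a synonym to its accepted name
        while node is not None:
            if node == taxon:
                names.add(name)
                break
            node = parent_of.get(node)       # walk up the taxonomic hierarchy
    return names

def occurrences(taxon):
    wanted = names_under(taxon)
    return [r for r in records if r["scientificName"] in wanted]

print(len(occurrences("Krohnitta")))            # 3: both species plus the synonym
print(len(occurrences("Krohnitta pacifica")))   # 2: accepted name plus its synonym
```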
Ocean physicochemical dataset
To determine the environments at a given point where a biological observation record was obtained, we adopted the ocean dataset FORA, which is a regenerated oceanographic physicochemical condition dataset spanning 33 years in the northwestern Pacific (11). FORA provides daily Network Common Data Form (NetCDF) files from 1 January 1982 to 31 December 2014. Each NetCDF file consists of temperature and salinity data at depths of 0-6300 m with 54 layers and in the range of 117°E-160°W and 15°-65°N with a resolution of 0.1°. In matching biological observation records with FORA environmental data, the latitude, longitude, depths and date of the observation records were adjusted to the data resolution in FORA. Latitude and longitude were rounded down to the nearest 0.1°, and the shallowest depth in the observation record was used as a representative value and then matched to the FORA depth layer containing the representative depth value. Where the date in an observation record was given as a period, the earliest date in the period was used as the representative value for matching.
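The matching rules above (rounding coordinates down to 0.1°, taking the shallowest depth, and taking the earliest date of a reported period) can be expressed as a small sketch; the depth-layer list and the lookup key are placeholders, not the actual BISMaL implementation.

```python
import math
from datetime import date

# Placeholder upper bounds (m) of a few FORA depth layers; the real dataset has 54 layers
FORA_DEPTH_LAYERS = [0, 10, 20, 30, 50, 75, 100, 125, 150, 200, 250, 300, 400, 500]

def match_to_fora(lat, lon, depth_range, date_range):
    """Reduce an observation record to the FORA grid cell it should be matched to."""
    grid_lat = math.floor(lat * 10) / 10      # round down to the nearest 0.1 degree
    grid_lon = math.floor(lon * 10) / 10
    depth = min(depth_range)                  # shallowest depth as representative value
    layer = max(d for d in FORA_DEPTH_LAYERS if d <= depth)
    day = min(date_range)                     # earliest date of a reported period
    return grid_lat, grid_lon, layer, day

# Hypothetical vertical net tow, 0-200 m, sampled over two days
key = match_to_fora(34.56, 138.71, depth_range=(0, 200),
                    date_range=(date(1988, 10, 2), date(1988, 10, 3)))
print(key)   # (34.5, 138.7, 0, datetime.date(1988, 10, 2))
```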
FORA values assigned to observation records are displayed as summarized histograms of water temperature and salinity for each taxon page (Figure 1c) and as underlined values in each biological observation record (Figure 1g). It is also possible to display the distribution of the observation records overlaid on the contour map of temperature or salinity (Figure 1f).
Database implementation
All data in BISMaL were stored and managed using PostgreSQL (https://www.postgresql.org/). The interface components of the website were designed and implemented using JavaServer Faces 2.2.20 (https://jakarta.ee/specifications/faces/) and PrimeFaces 10.0.9 (https://www.primefaces.org/). Maps presenting the geographical distribution of biological observation records were drawn by GeoServer 2.20.1 (http://geoserver.org/). The website was successfully tested in several popular web browsers including Google Chrome and Firefox.
Of the data in BISMaL, biological observation records are systematically shared with the OBIS global database. All observation records that have valid values for the eight required Darwin Core terms defined by OBIS are stored in a system (Integrated Publishing Toolkit) for data sharing (https://www.godac.jamstec.go.jp/ipt/), harvested by OBIS in real time and then integrated into the OBIS database.
Validation of thermal habitats of benthic species: S. crosnieri
We selected the hydrothermal vent squat lobster S. crosnieri as a benthic species for validation because there were sufficient observation records with an actual measured temperature when the species was observed. Observation records of S. crosnieri (1254 records) in BISMaL are mainly reported from the dataset of 'JAMSTEC e-library of deep-sea images' (J-EDI, 13). The J-EDI dataset archives observation records based on the videos and images taken by deep-sea submersibles, together with environmental data, including depth, dissolved oxygen, water temperature and salinity, measured by conductivity-temperature-depth (CTD) sensors attached to the submersibles. The reliability of the estimated thermal habitat of the species was validated by direct comparison between temperatures estimated from FORA and in situ temperatures measured by CTD.
Planktonic species: K. pacifica and K. subtilis

Because there are sufficient available observation records, which have all the latitude, longitude, depth and date information needed to estimate temperature, we selected two chaetognaths, K. pacifica and K. subtilis, as planktonic species for validation. Observation records of chaetognaths are mainly reported from the 'JODC Dataset' (14), which comprises long-term plankton survey data around Japan from 1951 to 2006. BISMaL archived a total of 904 records of K. pacifica and 1136 records of K. subtilis during 1982-1992, and there were no in situ temperature data for the two species. The two Krohnitta species are both categorized as epiplanktonic (>150 m) species (15), but K. subtilis has also been reported as mesopelagic in the eastern area of Japan (16) and Sagami Bay (17). As the depth data for all observation records of the two species were reported as ranges (mostly 0-200 m), BISMaL used a depth of 0 m for the estimation. To verify the reliability of the estimated thermal habitat for the two Krohnitta species, we estimated experienced temperatures based on the latitude, longitude, depth and date/time of the observation records and examined whether the temperatures could serve to recreate the seasonal changes specific to the sea around Japan. As a similar service to estimate past water temperature, OBIS provides the package obistools (Provoost and Bosch, 2019) for R (18), and the lookup_xy function in the package returns surface temperatures by referring to the WOA. We compared the results between BISMaL and the obistools package.
To detect the differences between the estimated thermal habitats of the two Krohnitta species, patterns of estimated temperature compared to latitudinal changes during the high-temperature season (June-October) were visualized using 2D kernel density estimation with the kde2d function in the R package MASS (19).
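R's MASS::kde2d used here has a close analogue in scipy's gaussian_kde; the sketch below, on synthetic temperature–latitude pairs, only illustrates the kind of 2D density grid used for such a visualization and is not the paper's code.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Synthetic stand-ins for estimated temperature (deg C) and latitude (deg N) of one species
temperature = rng.normal(27, 2, 500)
latitude = rng.normal(28, 3, 500)

kde = gaussian_kde(np.vstack([temperature, latitude]))

# Evaluate the density on a regular grid, as kde2d does
t_grid, lat_grid = np.mgrid[15:32:100j, 15:45:100j]
density = kde(np.vstack([t_grid.ravel(), lat_grid.ravel()])).reshape(t_grid.shape)
print(density.shape)   # (100, 100) grid ready for a contour or heat-map plot
```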
Results
Among a total of 1 713 682 observation records (19 252 taxa) in BISMaL, there were only 25 023 records (1627 taxa) with water temperature measurements (data accessed on 19 February 2021). Estimation of water temperatures was performed for 203 897 records (5778 taxa) by referring to FORA based on date, latitude, longitude and depth.
Validation of thermal habitats of benthic species: S. crosnieri
Among the 1254 records of S. crosnieri in BISMaL, 392 records included observed in situ temperatures, with a narrow temperature range of 3.7-7.5°C (4.2 ± 0.45, mean ± SD; Figure 2). The estimated values for the 392 records were 3.3-7.2°C (3.8 ± 0.36, mean ± SD), a range similar to that of the observed values. The estimated values were consistently lower than the observed in situ values, but the difference between the observed and estimated values was within 1°C (96.1% of the data were plotted between the solid line and the dotted line in Figure 2).
Validation of thermal habitats of planktonic species: K. pacifica and K. subtilis
The occurrence patterns of the two species mostly overlapped throughout the year (Supplementary Figure S1); for example, the occurrence points of the two species overlapped in February (Figure 3a and c). However, in October, the occurrence points of the two species overlapped in the Pacific, while only K. pacifica occurred in the Sea of Japan (Figure 3b and d). K. pacifica was repeatedly observed in the Sea of Japan during September-October (mostly in October) in 1982, 1983, 1988, 1991 and 1992, while K. subtilis was observed there only in February 1988.
Except for November, which had very limited data, the mean estimated temperatures in BISMaL showed clear seasonal changes in K. pacifica and K. subtilis (Figure 4), and there was little difference in the monthly mean temperatures between the two Krohnitta species (monthly means in K. pacifica and K. subtilis: 18.8°C and 18.6°C in February and 25.3°C and 26.0°C in October, respectively). On the other hand, seasonal changes in the estimated temperatures from the obistools library were small (monthly means in K. pacifica and K. subtilis: 23.2°C and 23.0°C in February and 22.3°C and 23.2°C in October, respectively).
The distributions of estimated temperatures across latitude for the two species mostly overlapped (Figure 5a). The kernel density distributions showed almost the same shapes, indicating that the centers of the thermal habitats were located at 25-30°C and 25-30°N (Figure 5b and c). In the areas >35°N and <23.5°C, however, there was a difference in the estimated thermal habitats of the two species, and only K. pacifica expanded its range to high-latitude, low-temperature areas.
Discussion
By developing a function referencing past environmental conditions in the BISMaL database, we were able to estimate past habitat conditions for marine organisms regardless of their lifestyle as benthic or planktonic. For the benthic squat lobster S. crosnieri, the thermal habitat was estimated to be 3.3-7.2°C, which was similar to the range of observed in situ temperatures. For the planktonic species K. pacifica and K. subtilis, their estimated temperatures showed clear seasonal changes, from 18°C to 26°C from January to October. In addition, visualizing the thermal habitats of the two Krohnitta species showed that the distributions of the observed records mostly overlapped but differed in their marginal areas.
The estimated and observed in situ temperatures for S. crosnieri were similar to each other; however, there was a constant bias in that the estimates were lower than the observed values in most cases. The observed temperatures were measured by a CTD attached to a submersible that captured and recorded videos of the species around hydrothermal vents. Therefore, water temperatures near the CTD could have been affected by the hydrothermal activities of the vents. Tsuchida et al. (20) measured temperature in a S. crosnieri habitat and at a hydrothermal vent directly and reported that the temperature at the vent was 301°C and the temperature in the S. crosnieri habitat (1-2 m away from the vent) was 4.0-6.2°C. As the reported temperatures in the habitat are not largely different from our estimates (3.3-7.2°C), our result is considered to be reasonable. However, it is notable that the temperature estimation based on BISMaL is preferable for detecting averaged environments in a 0.1° grid, which is the resolution of FORA, and not preferable for detecting unique and specific phenomena such as hydrothermal vents.
For the planktonic chaetognath genus Krohnitta, BISMaL estimated seasonal temperature changes of 18-26°C around Japan, which are typical, while the OBIS package estimated no clear temperature change (22-23°C). Focusing on the area within 25-35°N and 125-150°E around Japan, where a large part of the observed records of the two Krohnitta species were obtained, clear seasonal changes in mean surface temperature during 1991-2020 are reported as 18-21°C in February and 27-29°C in August (21, https://www.data.jma.go.jp/gmd/kaiyou/data/db/kaikyo/knowledge/sst.html), and these temperature changes are close to our results. Therefore, BISMaL is able to estimate past thermal habitats reliably and provides an improved estimation over those from existing services, such as the OBIS packages, by using the regionally specialized ocean data FORA. However, it is notable that all Krohnitta records in this study have a depth range of 0-200 m, which indicates vertical net towing, and BISMaL used the shallowest depth (0 m) for the estimations. Miyamoto et al. (17) investigated the vertical distribution of chaetognaths in Sagami Bay and reported that the mean depths were different between the two Krohnitta species (30 m and 194 m in K. pacifica and K. subtilis, respectively). In the future, if highly precise depth information for the two species becomes available, then differences in thermal habitat between the two species may be detected more precisely.
Our results showed that the distributions of the two Krohnitta species mostly overlapped; however, K. pacifica extended its distribution to the Sea of Japan and higher latitude areas in the high-temperature season. Kuroda et al. (16) reviewed the fauna, distribution ecology and community structures of pelagic chaetognaths around Japan and reported that the two Krohnitta species were observed in all sea areas except the Okhotsk Sea. However, knowledge about the detailed life cycle of the two species is limited around Japan. This may be partially because the two Krohnitta species are not conspicuously dominant in every area. Johnson and Terazaki (22) investigated chaetognath species composition in Kuroshio warm-core ring waters, which are eddies of warm water detached from large current systems (see Supplementary Figure S2), and reported that K. pacifica was a Kuroshio water indicator species and K. subtilis was an offshore water indicator species. It is likely that K. pacifica, whose distribution is more affected by Kuroshio waters than that of K. subtilis, temporarily appears in the Sea of Japan along the Tsushima Current, which is a branch of the Kuroshio Current (see Supplementary Figure S2). Nagai et al. (23) studied the occurrence of chaetognaths, including the two Krohnitta species, at a line transect in the Sea of Japan during 1972-2002 and noted that the occurrence of K. subtilis was rare in the Sea of Japan, which supports our results. Our results clearly showed the spatiotemporal differences in the marginal habitats of the two Krohnitta species. Although the ecological importance of the difference cannot be determined from our data, the difference highlights what needs to be addressed to understand the whole life history of the two Krohnitta species around Japan.
In this study, we developed the region-specific marine biodiversity database BISMaL to enable the estimation of physicochemical environmental conditions and showed that it is possible to reliably estimate thermal habitats for marine organisms regardless of their lifestyles. Progress in ocean observation technology will accelerate the accumulation rate of biological observation records in many regions, and smooth integration of the regionally accumulated observation records at a global level will enhance the environment for data-driven research in marine biodiversity. In fact, BISMaL has achieved smooth data sharing with the OBIS global database, and the data from BISMaL help to cover the Northwest Pacific region, enhancing the environment for global marine biodiversity studies. However, it is important to mention that the enhancement of these data-driven research environments is not only for pure data scientists. BISMaL provides an easily accessible platform of both biological and environmental data as a web service, and this could open the door to data-driven science not only for pure data scientists but also for a variety of marine researchers, such as taxonomists, ecologists and field investigators. This approach could broaden the scope of such studies and encourage greater engagement with research communities outside specific areas of scientific research.
Figure 1. Screenshots of the BISMaL web page for S. crosnieri. Left panel: a taxonomic information page (https://www.godac.jamstec.go.jp/bismal/e/view/9000078) composed of a taxonomic tree (a), a map view of biological observation records (b), histograms of environmental conditions (c), related images and notes (d) and references (e). Right panel: a biological observation record page (https://www.godac.jamstec.go.jp/bismal/e/occurrences?taxon=9000078) composed of a map view with estimated environmental contours (f) and observation records (g). Estimated salinity or temperature for observation records is shown as underlined values.

Figure 2. Observed in situ temperatures by CTD of the S. crosnieri habitat and estimated temperatures by BISMaL based on FORA. When the observed in situ temperature corresponds to the estimated temperature, data are plotted along the solid line (y = x).

Figure 3. Distribution of biological observation records of K. pacifica in February (a) and October (b) and K. subtilis in February (c) and October (d). Data are pooled over 11 years (1982-1992).

Figure 4. Monthly mean temperatures for the two Krohnitta species, which were estimated by BISMaL with FORA (solid lines) and the OBIS package 'obistools' (dashed lines) with WOA. Vertical bars indicate standard deviation.

Figure 5. Visualization of estimated thermal habitat across latitude for the two Krohnitta species: scatterplot of estimated temperatures (a) and 2D kernel density fitting for K. pacifica (b) and K. subtilis (c).
ANTI-CLASTOGENIC ACTIVITY OF ROSELLE ( Hibiscus sabdariffa ) EXTRACT USING A VARIETY OF SHORT-TERM GENOTOXIC BIOASSAYS
It was reported that Roselle (Hibiscus sabdariffa) is antiseptic, digestive, diuretic, emollient and purgative (Duke, 1985; Truswell, 1992). Recent scientific research work has established the protective effect of the dried flower extract of Hibiscus sabdariffa (Tseng et al., 1997); anti-inflammatory activity (Dafallah and Mustapha, 1996), antihypertensive effect of the calyx extract (Adegunloye et al., 1996; Onyeneka et al., 1999) and antimutagenic activity (Morton, 1987). It was also reported that the cultivation of Roselle as a "recent" crop in Arab-speaking countries is centered more on its pharmaceutical than its food potential. In 1971 this crop was distributed in tropical areas, especially in Africa and India. In these countries, it is also cultivated to some extent for the fresh calyx of the flower, from which jelly and a kind of tea named karkadeh are extracted. Karkadeh is widely cultivated in Sudan for the extraction of the jelly, which is dried and exported to other countries (Chewonarim et al., 1999).
The present work aims to investigate the anti-genotoxic activity of Hibiscus sabdariffa. To achieve this purpose, an investigation of the cytogenetic effect of the syrup extract from calyx and sub-calyx in decreasing chromosomal abnormalities after treatment of mice, human lymphocytes and Allium cepa has been carried out, using a variety of short-term in vivo and in vitro genotoxic bioassays recommended by the US EPA. These bioassays are: analysis of chromosomal abnormalities in mice bone marrow, analysis of micronuclei in mice bone marrow, estimation of cell proliferation, analysis of primary spermatocytes (diakinesis stage) in mice, analysis of mitotic activity and cell proliferation in Allium cepa cells, analysis of chromosomal abnormalities in Allium cepa cells and estimation of micronuclei in interphase cells of mice bone marrow.
Cytological analysis
The cytogenetic characterization aims to investigate the potential of the calyx and sub-calyx extract to play an important role in reducing the clastogenic effect caused by well-known positive controls: sodium nitrite in Allium cepa, cyclophosphamide in mice and ethyl methanesulfonate in human lymphocytes.
Cold and hot Roselle extract treatments

1. Cold treatment
Three doses were prepared as follows: 25 g/100 ml, 12.5 g/100 ml and 6.25 g/100 ml (Hirunpanich et al., 2005); these were incubated overnight at 37°C and filtered.
Each mouse received 100 μl for 60 days.
2. Hot treatment
Three doses were prepared as follows: 25 g/100 ml, 12.5 g/100 ml and 6.25 g/100 ml; these were boiled and filtered, and mice were treated, each mouse receiving 100 μl for 60 days.
3. Sodium nitrite, cyclophosphamide and ethyl methanesulfonate were used as mutagenic substances for the positive control groups.
Experimental design techniques
Three doses, i.e., 6.25 g, 12.5 g and 25 g/100 ml, were used in the mice treatment for 60 days (Chewonarim et al., 1999). Cyclophosphamide (50 mg/kg b.wt.) was used as a positive control. The technique given by Brusick (1980) was used for the analysis of the metaphase index and chromosomal abnormalities, and the micronucleus assay given by Schmid (1975) was used for the estimation of micronucleated polychromatic erythrocytes. Analysis of human lymphocytes was carried out according to Schwartz (1974).
Analysis of variance, Duncan's multiple range test and the chi-square test were used; for the micronucleus test (MNT), the tables given by Hart and Engberg-Pedersen (1983) were used.
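As a purely illustrative sketch (in R, with hypothetical counts that are not the paper's data), a chi-square comparison of this kind for the MNT could look as follows, contrasting micronucleated polychromatic erythrocytes (MNPCE) in a cold-extract group against the cyclophosphamide positive control:

```r
# Hypothetical counts: MNPCE vs. normal PCE out of 1,000 scored cells per group
mn <- matrix(c(12, 988,    # cold extract + cyclophosphamide
               34, 966),   # cyclophosphamide alone (positive control)
             nrow = 2, byrow = TRUE,
             dimnames = list(c("cold_extract", "positive_control"),
                             c("MNPCE", "normal")))
chisq.test(mn)  # tests whether MNPCE frequency differs between the two groups
```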
Analysis of metaphase index (MTI) and chromosomal abnormalities in mice bone marrow cells. These analyses were done as described by Brusick (1980).
Analysis of mouse primary spermatocytes. Five male mice were used for each dose. The doses were given orally for 10 days, and the animals were killed by decapitation 24 hr after the last dose. The procedure basically follows the descriptions given by Oud et al. (1979), Adler (1984), and Seehy and Osman (1989).
Human lymphocyte culture technique. This technique was carried out according to the description given by Schwartz (1974), and the same concentrations of cold and hot extract were added to the cultures.
Chromosomal abnormalities
Cytological examination of chromosomal aberrations in mice bone marrow after treatment with Hibiscus extract is shown in Tables 1 and 2. Different types of structural and numerical aberrations were obtained (Robertsonian centric fusion, gap, fragment and polyploidy); however, the positive control gave a high percentage of stickiness, and some of these aberrations are shown in Figs. 1a-1e. Cyclophosphamide was capable of inducing hyperploidy (i.e., >2n). The hot extract induced a significant increase of micronuclei (Fig. 1i). Total aberrant metaphases were found to be 39% after treatment with cyclophosphamide. The results showed that the high concentrations caused a high degree of stickiness, and accordingly high percentages of aberrant metaphases were obtained, ranging from 5 to 54%. The high percentages of aberrant metaphases were probably caused by cytoplasmic disturbance induced by the high concentrations. Tables 1 and 2 show the results obtained after treatment with the hot extract. Comparing the data in these two tables, one can conclude that the cold extract was capable of decreasing the total aberrant metaphases caused by cyclophosphamide.
It is taken for granted that the degree of mutagenic potential of environmental pollutants evaluated in one test system may not be the same in another; therefore, testing for the induction of DNA lesions and mutagenicity using a variety of short-term assays has become an accepted part of the toxicological evaluation of drugs, industrial intermediates, cosmetics, food and feed additives, pesticides, etc. According to Brusick (1987), positive controls are included to establish the ability of the analyzers to correctly determine aberrations, to ascertain the expected test-to-test and animal-to-animal variations, and to establish the sensitivity of a particular test. Cyclophosphamide is a clastogenic agent for various animal species, and Chorvatovicova and Sandula (1995) recommended the use of this drug in cytogenetic studies as a positive control.
Micronucleus test
Cyclophosphamide and the hot extract were proven to be clastogens, since statistical analysis showed significant increases in micronuclei. Tables 3 and 4 illustrate the data obtained from the analysis of micronucleated polychromatic erythrocytes. It is clear that the cold extract showed anticlastogenic activity.
Analysis of primary spermatocytes:
Tables 5 and 6 show the results obtained from the analysis of diakinesis stages after treatment of mice with cold and hot extracts. Different types of aberrations at diakinesis, such as fragments, XY univalents, autosomal univalents and translocations, in addition to stickiness, were obtained (Figs. 1b, 1c, 1d, 1e and 1f). These results showed that cyclophosphamide was capable of reaching the germinal cells. On the other hand, the cold extract was proven capable of decreasing the clastogenic effect caused by cyclophosphamide, indicating that the cold extract has in vivo anticlastogenic activity.
Allium cepa
The analyses of mitotic activity and chromosomal aberrations in cells of adventitious roots of Allium cepa are given in Tables 7 and 8.
The mitotic index was 14.8% for the negative control and 6.2% after treatment with sodium nitrite. For the cold treatment it ranged from 8.1 to 13.4%, and for the hot treatment from 4.2 to 12.2%. Figs. 3a-3g show the effect of the different treatments upon the Allium cepa genome. Total aberrant metaphases ranged from 5 to 13% after cold extract treatment and from 13 to 26% after hot extract treatment.
Data obtained from these genotoxic bioassays revealed that the cold extract has anticlastogenic activity against ethyl methanesulfonate, which presents strong evidence that the cold extract of Hibiscus has anticlastogenic activity.
Human lymphocyte culture
An attempt was made to investigate the in vitro effect of Hibiscus extract upon human chromosomes. Total aberrant metaphases were 3% in the negative control group and 28% after treatment with the positive control (EMS); they ranged from 5 to 8% after cold extract treatment and from 5 to 9% after hot extract treatment (Figs. 2a and 2e). Tables 9 and 10 show the effect of cold, hot and EMS treatments. These results presented evidence that hot extract treatment in vitro was a positive clastogen, while cold extract treatment had an in vitro anticlastogenic effect.
In conclusion, the present investigation clearly revealed that cold extract treatment of the calyx and sub-calyx of Hibiscus decreased the cellular toxicity and clastogenic effect of the positive controls (cyclophosphamide, sodium nitrite and EMS).
Assessing human risk from mutagenic substances represents a formidable task. There is so far no conclusive proof of chemical-induced mutation in human germ cells; however, mutagens can alter rodent germ cells, and quantitative estimates of induced mutation rates per gene locus, or of the dose required to double a specific mutation rate, have to be calculated from results of the in vivo specific-locus or heritable translocation assays. These estimates may be of limited value in calculating human risk or in setting safe exposure levels because they are based on male gametes and, in the case of the specific-locus assay, generally on premeiotic stem cells (spermatogonia). The data do not reflect the risk to later cell stages in spermatogenesis or in female germ cells. Estimates of mutation in postmeiotic sperm and from female gametes will become available; but even so, other important biological variables would interfere with reliable risk estimates and extrapolation between species (Brusick, 1980, 1987; Abid-Alla, 2007). Regarding the micronucleated polychromatic erythrocytes, micronuclei represent acentric chromosome fragments or whole chromosomes that are lost during anaphase. These structures are easy to visualize in erythrocytes and are therefore often used as a measurement of chromosomal aberrations (Rabello-Gay, 1991).
Exposure to pollutants has been associated with cancers, degenerative neurologic diseases, and altered immune responses, but the mechanism of action is unclear. Genotoxic potential is a primary risk factor for long-term health effects such as cancer and adverse reproductive health outcomes. Bolognesi (1997) and Hagmar et al. (2001) reviewed the usefulness of cytogenetic biomarkers as intermediate end points in carcinogenesis and concluded that chromosomal aberration (CA) frequency predicts overall cancer risk in healthy subjects, but such associations have not been found for sister-chromatid exchanges and micronuclei (Mn). Although the genotoxic potential of pesticides is believed to be low, genotoxic monitoring in farm worker populations could be a useful tool to estimate the genetic risk from exposure to complex pesticide mixtures over extended lengths of time. To date, genotoxic biomarker studies of workers exposed to pesticides have focused on cytogenetic end points including CAs, Mn frequency, and sister-chromatid exchanges. The present conclusion comes from the observation that chromosomal aberrations, micronucleated polychromatic erythrocytes and aberrant diakinesis stages decreased with increasing plant dose given to the mice, besides the data obtained from the analysis of the Allium cepa genome and human lymphocytes.
SUMMARY
Nowadays, several advantages of the medical use of Hibiscus have become apparent: it has shown the ability to reduce cholesterol and lipid levels in laboratory animals, in addition to its antioxidant activity. Thus, the aim of this research was to study its role as an anticlastogenic agent against chromosome damage. The calyx and sub-calyx of the Roselle plant have long been recognized as a source of antioxidants. The objective of this study was to investigate the capability of Hibiscus sabdariffa juice to act as an anticlastogenic agent by preventing or decreasing chromosomal breaks. To achieve this purpose, the genetic material of the mouse (Mus musculus, 2n = 40) and root-tip cells of the onion (Allium cepa, 2n = 16) were selected and used, employing a variety of short-term genotoxic bioassays recommended by the US EPA. The results revealed that Roselle cold extract or syrup treatment had an anticlastogenic effect, whereas the hot extract did not. How does this suggested repair system play its role: by activation of cell proliferation, by apoptosis, by interfering with the cellular repair system, or by all of these? Further research is needed in order to answer this question precisely.
Cyclophosphamide was proven to be effective in inducing significant decreases in the cell proliferation rate, giving evidence of its cellular toxicity. The chromosomal abnormalities indicated that it is a strong clastogenic agent, which reflects the possible mutagenic activity of cyclophosphamide. DNA damage may be classified into several broad categories based on the nature (presumed mechanism) of the DNA change.

ACKNOWLEDGMENTS
Table 4: Micronucleus test in mice after hot treatment with Hibiscus extract.
Table 5: Primary spermatocytes in mice after cold treatment with Hibiscus extract.
Table 6: Primary spermatocytes in mice after hot treatment with Hibiscus extract.
Table 7: Mitotic activity and chromosomal aberrations in the Allium cepa genome after cold treatment with Hibiscus extract.
Table 8: Mitotic activity and chromosomal aberrations in the Allium cepa genome after hot treatment with Hibiscus extract.
Table 9: The effect of Hibiscus cold extract treatment upon human lymphocyte culture.
Table 10: The effect of Hibiscus hot extract treatment upon human lymphocyte culture.
Network Analysis of the Brief ICF Core Set for Schizophrenia
Background The International Classification of Functioning, Disability, and Health Core Sets (ICF-CSs) for schizophrenia are a set of categories for assessing functioning in persons with this health condition. This study aimed to: a) estimate the network structure of the Brief ICF-CS for schizophrenia, b) examine the community structure (categories strongly clustered together) underlying this network, and c) identify the most central categories within this network. Methods A total of 638 health professionals from different backgrounds and with a significant role in the treatment of individuals with schizophrenia participated in a series of Delphi studies. Based on their responses we used the Ising model to estimate the network structure of the 25-category Brief ICF-CS, and then estimated the degree of centrality for all categories. Finally, the community structure was detected using the walktrap algorithm. Results The resulting network revealed strong associations between individual categories within components of the ICF (i.e., Body functions, Activities and participation, and Environmental factors). The results also showed three distinct clusters of categories corresponding to the same three components. The categories e410 Individual attitudes of immediate family members, e450 Individual attitudes of health professionals, d910 Community life, and d175 Solving problems were among the most central categories in the Brief ICF-CS network. Conclusion These results demonstrate the utility of a network approach for estimating the structure of the ICF-CSs. Implications of these results for clinical interventions and development of new instruments are discussed.
INTRODUCTION
Schizophrenia is a chronic and disabling mental disorder that is characterized by heterogeneous symptoms, both positive (e.g., delusions and disorganized thinking) and negative (e.g., blunted affect and anhedonia), as well as cognitive impairment and multiple functional deficits (1,2). Although recovery is possible and should be a priority goal in the treatment of this population, it remains a challenge insofar as persons diagnosed with schizophrenia usually experience important disability in personal, social, and occupational functioning across their life span (3,4). Nevertheless, research suggests that interdisciplinary mental health teams providing integrative care to individuals with schizophrenia can achieve substantial improvements in clinical, social, and health outcomes (5,6). However, providing integrated care requires a common language that enables interdisciplinary team members to develop a shared understanding of a patient's functioning problems. The International Classification of Functioning, Disability, and Health (ICF), which was proposed and adopted in 2001 by the World Health Organization (WHO) (7), meets these requirements.
The WHO conceptualizes health in terms of a biopsychosocial model and uses the term functioning to refer to the positive and practical aspects of health, that is to say, what a person can or cannot do in daily life, regardless of any diagnosed disease or specific health condition (7). It was within this conceptual framework that the ICF was developed. The main objectives of the ICF are to provide a scientific basis for the understanding and description of health and health-related states, to establish a unified and standard language to describe them, and to permit comparison of data across countries, health care disciplines, services, and over time (7). The ICF describes functioning in persons with any health condition through the dynamic interaction between the following components: Body functions, which comprise the physiological and psychological functions of the body systems; Body structures, which refer to the anatomical parts of the body; Activities and participation, encompassing the execution of all the tasks and actions that an individual may perform and which may be involved in a life situation; and Contextual factors, which includes both Environmental and Personal factors. Regarding the latter component, Environmental factors reflect the physical, social, and attitudinal environment in which individuals live and conduct their lives, whereas Personal factors refer to the particular background of an individual's life and comprise traits that are independent of his or her health condition, such as gender, race, or age.
The ICF system comprises more than 1,400 categories. To facilitate its implementation, ICF Core Sets (ICF-CSs) linked to specific health conditions have been developed using a protocol proposed by the WHO (8); those developed to date can be viewed and downloaded at: https://www.icf-core-sets.org/en/page1.php. The ICF-CSs consist of a list of the most relevant ICF categories for describing and assessing functioning and disability in people living with a certain health condition, and in most cases both a comprehensive and a brief version of the Core Set have been developed. The categories listed in a Comprehensive ICF-CS cover the full spectrum of problems in functioning that are typically experienced by individuals with a specific health condition; the corresponding Brief ICF-CS consists of a selection of the most essential categories that should be considered when exploring functioning in individuals with this health condition (8). Thus, ICF-CSs could serve as a reference pool of categories to identify an individual's functional strengths and weaknesses, plan appropriate interventions, and develop standardized assessment instruments. For schizophrenia, two ICF-CSs (i.e., the Comprehensive and the Brief version) have been developed following the evidence-based process proposed by the ICF Research Branch, a WHO collaborating center (9). The Comprehensive ICF-CS for schizophrenia comprises 97 categories representing a broad spectrum of common problems in functioning suffered by persons with schizophrenia. The corresponding Brief ICF-CS includes 25 of these 97 categories, those of most importance in the assessment and treatment of persons with schizophrenia and hence of most relevance to clinical practice. The two ICF-CSs for schizophrenia can be viewed and downloaded free of charge at: https://www.icf-research-branch.org/icf-core-sets-projects2/mental-health/icf-core-set-for-schizophrenia.
In order to be applicable in clinical practice, ICF-CSs must be validated through different sources of evidence. Evidence for the content validity of the ICF-CSs for schizophrenia has been obtained in a series of previous studies in which we used the Delphi method to explore the perspective of different health professionals involved in treating persons with schizophrenia, namely psychiatrists (10), psychologists (11), nurses (12), occupational therapists (13), social workers (14), and physiotherapists (15). The results of these studies indicated that both the Comprehensive and Brief ICF-CSs for schizophrenia provide an effective framework for investigating functioning and disability in persons with schizophrenia. However, the ways in which the different components of functioning in schizophrenia may interact with one another has yet to be tested empirically. To address this gap, the present study uses network analysis to analyze the data obtained in the aforementioned series of Delphi studies.
To the best of our knowledge, no study has applied network analysis to examine the structure of ICF-CSs, including the Brief ICF-CS for schizophrenia. The aim of the present study was therefore to: (1) estimate the network structure of the Brief ICF-CS for schizophrenia and examine the connections between its categories using data obtained from different health professionals with experience in the assessment and/or treatment of individuals with schizophrenia; (2) examine the community structure (categories strongly clustered together) underlying this network; (3) identify the most central ICF categories that are associated with functioning and disability within this network; (4) identify the bridge categories (i.e., the categories in a component that have strong edges with all categories in all other communities, and vice versa); and (5) assess the robustness and stability of this network.
Data Collection
In a previous series of three-round Delphi studies we explored the perspective of professionals from six different health disciplines (i.e., psychiatry, psychology, nursing, occupational therapy, social work, and physiotherapy) regarding the individual problems, resources, and environmental factors (presented in the form of ICF categories) that they most frequently encounter when treating individuals with schizophrenia. The Delphi method is a multistage process in which a panel of experts are asked to give their opinion about a specific topic across a series of rounds. After each round, each panel member receives feedback in the form of an anonymous summary of the responses given by the other experts, which they must take into account before giving their opinion again (16). This methodology allows researchers to obtain the opinion of numerous experts on the same subject with the objective of reaching a consensus (17).
A total of 790 health professionals from 85 different countries representing all six WHO regions (i.e., Africa, The Americas, Eastern Mediterranean, Europe, South-East Asia, and Western Pacific) participated in the first round of the six aforementioned Delphi studies, and of these, 638 completed all rounds of the Delphi process. The recruitment of participants and the Delphi process are described in detail in Nuño et al. (18) and summarized in Figure 1. The task for participants in these studies was to judge, from their professional perspective, whether they considered each ICF category to be relevant or not (yes/no) to the assessment and/or treatment of persons with schizophrenia.
Network Analysis
The data obtained from the six Delphi studies were used to estimate the network structure of the Brief ICF-CS for schizophrenia. From the network perspective, the ICF categories rated by experts may be represented as nodes (circles), which are connected with edges (lines) when they tend to co-occur (i.e., they are selected as relevant by the same expert), forming a network structure. Applying this approach, the 25 categories of the Brief ICF-CS for schizophrenia produced a network with 25 nodes (i.e., categories) and 300 potential edges (i.e., connections) between these nodes. Network analysis was conducted following the methodology described by Epskamp et al. (19).
Network Estimation
Given the binary nature of the variables, the Ising model (20) was used to estimate the Brief ICF-CS network. In this model, the edges (i.e., connections) between nodes (i.e., ICF categories) are estimated using regularized logistic regression (20). These edges can be understood as partial correlation coefficients, which means that a correlation between two nodes A and B is estimated as a conditional dependence relationship after controlling for all other connections between nodes in the network. In other words, if nodes A and B are not connected after controlling for all the connections between all other nodes, then A and B may be considered independent nodes. The Ising model also employs a regularization strategy whereby very small correlations (connections) are shrunk to be exactly zero; this decreases the number of false positive connections between nodes and allows more accurate detection of the underlying network structure (20). The Brief ICF-CS network was estimated using the R IsingFit package (version 0.3.1) with gamma = 0.25 (20), and the results were visualized with the qgraph package (version 1.6.5) (21). Extensive details regarding the Ising model can be found in van Borkulo et al. (20).
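To make this estimation step concrete, the following minimal sketch (not the authors' script) shows how an Ising network of this kind can be fitted in R with the IsingFit and qgraph packages named above. The ratings data frame is simulated and purely hypothetical (638 experts by 25 binary relevance ratings), and the category labels are placeholders.

```r
# Minimal sketch, assuming hypothetical binary expert ratings
# (0 = not relevant, 1 = relevant); the real data come from the Delphi surveys.
library(IsingFit)
library(qgraph)

set.seed(1)
ratings <- as.data.frame(matrix(rbinom(638 * 25, 1, 0.7), nrow = 638))
colnames(ratings) <- paste0("cat", 1:25)   # placeholder ICF category codes

# Regularized logistic regression (eLasso) with the tuning value reported above
fit <- IsingFit(ratings, gamma = 0.25, plot = FALSE)

# Plot the weighted adjacency matrix of conditional (partial) associations
qgraph(fit$weiadj, layout = "spring", labels = colnames(ratings))
```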
Finally, the community (or cluster) structure of the network was detected using the walktrap algorithm (22). In this case, a community describes a set of categories that are strongly associated/clustered together within the estimated network.
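A corresponding sketch of the community detection step, assuming the fit object from the estimation sketch above and using the igraph implementation of the walktrap algorithm on the absolute edge weights:

```r
library(igraph)

g <- graph_from_adjacency_matrix(abs(fit$weiadj), mode = "undirected",
                                 weighted = TRUE, diag = FALSE)
wt <- cluster_walktrap(g)      # random-walk-based community detection
membership(wt)                 # community assignment for each ICF category
```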
Node Centrality Indices
The centrality of each category in the network was computed to identify those categories which form the core or are more important than others (23). To this end, we obtained the following three indices: a) the node strength represents the sum of the absolute values of all connections with respect to other nodes; b) the closeness centrality measures how strongly a node is associated indirectly with other nodes in the network; and c) the betweenness centrality assesses how often a node lies on the shortest path connecting any two other nodes (19). These three centrality indices were extracted and graphs were generated to investigate the centrality of each of the categories.
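As a sketch of how these three indices can be obtained (again assuming the hypothetical fit object from the estimation sketch, and qgraph's built-in centrality functions):

```r
library(qgraph)

net  <- qgraph(fit$weiadj, DoNotPlot = TRUE)
cent <- centrality_auto(net)
cent$node.centrality            # strength, closeness and betweenness per category

# Standardized centrality plot of the three indices
centralityPlot(net, include = c("Strength", "Closeness", "Betweenness"))
```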
Bridge Centrality
We also estimated the bridge expected influence (BEI; one step) (24) via the networktools package (25). The BEI determines potential bridge nodes by summing all absolute edges between a node (e.g., category b160 Thought functions from the Body functions component of the Brief ICF-CS) and all nodes that do not form part of the same component (i.e., categories from the Activities and participation and Environmental factors components). Nodes with high absolute values of BEI are potentially important as bridge nodes. For instance, the BEI of a Body function category (symptom) indicates to what extent this category is related to categories of the Activities and participation and Environmental factors components. Identifying these bridge nodes may yield hypotheses about categories/symptoms that cause (or prevent) the occurrence of positive (or negative) outcomes (26). The parameter accuracy of edges and the stability of centrality indices in the estimated network were examined using the bootnet package (version 1.4.3) in R (19), specifically through a bootstrap sampling technique with 1,000 iterations. The accuracy of edges was investigated using the non-parametric bootstrap technique to draw the 95% confidence intervals (CIs) for the edge-weights. Additionally, we used the case-dropping bootstrap technique to investigate the stability of the order of nodes in terms of centrality. This technique yields a correlation stability (CS) coefficient, which shows how many cases (i.e., proportion of individuals) might be removed from the analysis while maintaining a correlation of at least 0.7 with the original centrality values within a 95% confidence interval. Consequently, the CS coefficient assesses whether original estimates correlate with bootstrapped estimates. The CS value should not be below 0.25, and ideally it will be above 0.50. The case-dropping bootstrap technique was also used to assess the stability of the bridge. Further information about these methods can be found in Epskamp et al. (19).
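The following sketch illustrates these two steps under the same hypothetical ratings data as above; the components vector assigning each category to an ICF component is illustrative, not the actual Brief ICF-CS assignment.

```r
library(bootnet)
library(networktools)

# Re-estimate the network through bootnet so that bootstrapping is available
net_est <- estimateNetwork(ratings, default = "IsingFit", tuning = 0.25)

# Bridge expected influence (1-step) across ICF components
components <- rep(c("Body functions", "Activities and participation",
                    "Environmental factors"), length.out = 25)
bei <- bridge(net_est$graph, communities = components)
bei$`Bridge Expected Influence (1-step)`

# Edge-weight accuracy (non-parametric bootstrap) and centrality stability
# (case-dropping bootstrap), each with 1,000 iterations
acc  <- bootnet(net_est, nBoots = 1000, type = "nonparametric")
stab <- bootnet(net_est, nBoots = 1000, type = "case",
                statistics = c("strength", "closeness", "betweenness"))
corStability(stab)   # CS coefficients; >= 0.25 minimally acceptable, >= 0.50 preferred
```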
Sample Description
Of the 638 health professionals who completed the Delphi studies, 303 (47.5%) were psychiatrists, 137 (21.5%) were psychologists, 79 (12.4%) were nurses, 73 (11.4%) were occupational therapists, 36 (5.6%) were social workers, and 10 (1.6%) were physiotherapists. Overall, 52.0% of respondents were male. Demographic and professional characteristics of participants are summarized in Table 1.

The Brief ICF-CS Network

Figure 2 depicts the network structure of the Brief ICF-CS for schizophrenia, showing the connections between the 25 ICF categories. Each node in this network represents a category, and each edge represents bidirectional partial relations between categories after controlling for all other associations in the estimated network. There were no unconnected nodes, and 52 of the 300 potential edges were estimated to be above zero, indicating a medium-density network (i.e., 17% of the possible connections were observed in the network). Moreover, all of these connections were positive (solid green edges indicate positive associations, whereas red edges indicate negative associations). Figure 3 presents the standardized estimates of the three centrality measures for each category. These three centrality estimates appear to be highly intercorrelated, with significant correlations being observed between node strength and closeness (0.931), node strength and betweenness (0.724), and betweenness and closeness (0.713). We will therefore focus on node strength as the measure for identifying the most central categories in the estimated network.
Node Centrality Indices
Based on node strength, the five most central categories were e410 Individual attitudes of immediate family members, e450 Individual attitudes of health professionals, d910 Community life, d175 Solving problems, and d710 Basic interpersonal interactions; the least central categories were b156 Perceptual functions and d570 Looking after one's health. From the network perspective, this implies that these five categories provide (from the point of view of expert professionals) the most important information about problems in functioning among individuals with schizophrenia.
Concerning the accuracy analyses, bootstrapped results were relatively narrow for the estimated edge-weights (i.e., connection weights between the 25 categories; see Supplementary Figure S1), suggesting that the estimated edges were relatively reliable. Regarding the stability of centrality indices, the results from the case-dropping subset bootstrap indicated that the order of node strength centrality was more stable than the order of betweenness and closeness indices when dropping large proportions of the sample (Supplementary Figure S2), although CS coefficients were low (< 0.25) for all indices. Figure 4 shows that the standardized BEI is strongest for the categories e410 Individual attitudes of immediate family members, b160 Thought functions, d760 Family relationships, and e570 Social security services, systems and policies (all with z score > 1). However, the findings for bridge stability when using the case-dropping bootstrap method indicated low stability when large proportions of the sample were dropped.
DISCUSSION
Although many studies have described the problems that people with schizophrenia most frequently experience in daily life (27)(28)(29)(30), the present report is the first to apply network analysis to determine the relevance of and interrelationships between these problems from the perspective of health professionals, using the ICF as a conceptual framework. To the best of our knowledge, it is the first time that network analysis has been applied to the Brief ICF-CS for schizophrenia.
Consistent with the ICF model, the network approach depicts functioning as a dynamic system of node-to-node interactions (31), with each node representing an ICF category or aspect of functioning. Accordingly, we sought here to provide a new empirical perspective on the adequacy of ICF categories for describing the functioning of persons with schizophrenia, in this case from the perspective of health professionals from different disciplines (and all six world regions defined by the WHO) with experience in the treatment and/or assessment of people with schizophrenia. In our view, the present study makes four novel contributions to the ICF-CS literature, insofar as we 1) estimate the network structure of the 25-category Brief ICF-CS for schizophrenia, 2) assess the degree of centrality of each of the 25 categories in this network, 3) identify the community structure underlying the Brief ICF-CS network, and 4) investigate the stability and robustness of this network.
Regarding network estimation, our findings largely support the component structure of the Brief ICF-CS as defined by the biopsychosocial model, and as such they broadly corroborate the international validity of the ICF-CSs for schizophrenia from the perspective of these health professionals. Importantly, however, the analysis also identified specific aspects of functioning with high centrality and associations between a wide range of ICF categories within the Brief ICF-CS network, thus indicating the potential importance of these categories in the treatment of individuals with schizophrenia. Although these findings underline the importance of highly specific aspects of functioning that have already been the focus of different programs aimed at improving the functioning of individuals with schizophrenia (32)(33)(34), they also provide a novel framework for the design of more comprehensive interventions targeting those aspects that are shown here to have the greatest impact on functioning.
One finding of note was that the categories within each component of the ICF were highly inter-connected. Specifically, for Body functions a close connection was observed among b122 Global psychosocial functions, b152 Emotional functions, and b160 Thought functions, after controlling for all other connections. Regarding Activities and participation, there was a strong connection between d240 Handling stress and other psychological demands, d175 Solving problems, and d720 Complex interpersonal interactions, after controlling for all other connections. Concerning the Environmental factors component, strong positive connections were found between e310 Immediate family, e355 Health professionals, e450 Individual attitudes of health professionals, and e410 Individual attitudes of immediate family members, and there was also a strong association among e570 Social security services, systems and policies and e580 Health services, systems and policies, after controlling for all other associations.

FIGURE 4 | Standardized bridge expected influence (one-step) for each of the 25 categories in the Brief ICF-CS network (as shown in Figure 2).
These connections further support the clustering of ICF-CS categories as proposed in the ICF model (7). For example, the close connections between b122 Global psychosocial functions, b152 Emotional functions, and b160 Thought functions, which all belong to the Mental functions chapter of the ICF (7), indicate the relevance of problems representing classical symptoms of schizophrenia, for instance, delusions and hallucinations (e.g., b160 Thought functions), negative symptoms such as affective flattening (e.g., b152 Emotional functions), and psychological functions (b122 Global psychosocial functions) (1). This is also consistent with studies indicating that individuals with schizophrenia show impairment in thought and emotional functions such as emotion perception and expression (35,36), symptoms that represent common therapeutic targets for health professionals (37,38).
The community structure analysis identified three distinct clusters in the Brief ICF-CS network, corresponding to the components Body functions, Activities and participation, and Environmental factors, thus reflecting the theoretical ICF components (7). Furthermore, all but two of the 25 Brief ICF-CS categories (i.e., b156 Perceptual functions and b180 Experience of self and time functions) were found to belong to their corresponding theoretical ICF component. Taken together, these findings provide further support for the multidimensional structure of ICF-CSs and suggest useful directions for future research into functioning and the validity of the ICF-CSs.
It should be noted, however, that some categories showed weak or no connections with other categories from the same ICF component. For instance, no connections were found between either of the two categories b156 Perceptual functions and b180 Experience of self and time functions and the other categories that comprise the Body functions component, whereas a small connection was found between these two categories and, respectively, the categories d240 Handling stress and d840 Apprenticeship (work preparation), which belong to the Activities and participation component.
From the network perspective, the absence of an edge between b156 Perceptual functions and b180 Experience of self and time functions indicates their independence from each other and implies that they are conditionally independent from the other categories of the Body functions component to which they theoretically belong, as well as from the other categories in the network, with the exception of the categories d240 Handling stress and d840 Apprenticeship (work preparation). From the clinical perspective, these findings suggest that positive symptoms (such as hallucinations or delusions) and those related to an awareness of one's identity may be relevant to the execution of daily life tasks and also influence an individual's participation in certain contexts. Specifically, these symptoms might play an important role in hindering a person's ability to deal adequately with emotions and to enter the labor market.
Centrality measures such as node strength can be used as an indicator of the most important variables in the network (39). A noteworthy finding in the present estimation of centrality was the variability in node strength among the 25 categories of the Brief ICF-CS for schizophrenia. The five most central categories in the Brief ICF-CS network, based on their strength as nodes (see Figure 3, Strength column), comprised two categories from the Environmental factors component, namely e410 Individual attitudes of immediate family members and e450 Individual attitudes of health professionals (ranked 1 and 2), and three categories from Activities and participation, namely d910 Community life, d175 Solving problems, and d710 Basic interpersonal interactions (ranked 3, 4, and 5 out of 25, respectively), each of which showed strong connections with other categories. These findings are in line with the results of previous studies in which these five categories (identified here as being the most central) were among the most frequently reported not only by individuals with schizophrenia but also by experts in the assessment and/or treatment of individuals with this health condition (9-12, 27, 28). This suggests, from the network perspective, that these central categories may be key to understanding the functioning problems experienced by individuals with schizophrenia.
In the clinical context, these five central categories (i.e., e410 Individual attitudes of immediate family members, e450 Individual attitudes of health professionals, d910 Community life, d175 Solving problems, and d710 Basic interpersonal interactions) would potentially be important for predicting functioning problems in individuals with schizophrenia and may also play a critical role in relation to treatment interventions and outcomes. For instance, the strength centrality of e450 Individual attitudes of health professionals confirms the central importance of the relationships between individuals with schizophrenia and mental health professionals, as highlighted by Lauber and colleagues (40), who recommended that professionals should have more personal contact with individuals suffering from mental illness so as to minimize the negative impact of stigmatization on these patients. Similarly, the strength centrality of both e410 Individual attitudes of immediate family members and e450 Individual attitudes of health professionals, alongside the strong positive connections that were found between these two categories and e310 Immediate family and e355 Health professionals, corroborates previous research (41)(42)(43) emphasizing the need for families and mental health professionals to collaborate in order to facilitate a patient's recovery. In this regard, several studies (44,45) have suggested that a supportive family environment may be clinically useful for improving functioning and recovery in individuals with schizophrenia. By contrast, greater family burden (e.g., lack of family integration, financial difficulties, and limited opportunities) has been linked to worse functional outcomes, insofar as it may affect the level of support that families are able to provide to an individual with schizophrenia, which in turn impacts that person's functioning (46). As regards the fact that three of the five most central categories (i.e., d910 Community life, d175 Solving problems, and d710 Basic interpersonal interactions) belong to the Activities and participation component, this reflects the findings of Izquierdo et al. (47), who likewise observed that patients with first-episode psychosis experienced more difficulties in participation domains (e.g., joining in community activities).
A further point to note in our analysis is that the category e410 Individual attitudes of immediate family members also yielded the highest BEI value, followed in rank order by categories b160 Thought functions, d760 Family relationships and e570 Social security services, systems and policies. However, the results obtained when applying the case-dropping bootstrap technique indicated that the centrality measures, including node centrality and bridge centrality, should be interpreted with caution because the stability of these indices might be unreliable. This result may be due to the small sample size. Whatever the case, it should be noted that the stability coefficients reported in previous studies that have employed the case-dropping bootstrap technique are usually low (48).
From the network perspective, the current results largely support the integrity of the Brief ICF-CS categories, and they have a number of implications. First, the network analysis adds novel findings to the literature on validation of the Brief ICF-CS for schizophrenia, not least by identifying connections between categories both within and across components of the Brief ICF-CS. Second, the identification of a community structure of strongly connected categories, and especially the three meaningful clusters, provides potentially useful information for clinical practice and intervention. Third, the centrality analysis draws attention to the core problems in functioning experienced by individuals with schizophrenia, in this case from the perspective of professionals from different health fields with experience of treating persons with this disorder. These central problems within the Brief ICF-CS network are therefore the ones that should be especially targeted during assessment and treatment. From a psychometric perspective, the central problems in functioning and the identified clusters could also provide the basis for the development of new instruments for assessing these specific aspects of functioning and which would be sensitive to change when comprehensive programs are applied to improve functioning in individuals with schizophrenia.
This study has certain limitations that warrant consideration. One is that the present findings regarding the central problems in the Brief ICF-CS for schizophrenia are based on the perspective of health professionals from different disciplines, and it is unclear whether the same results would be obtained when considering the perspective of individuals with schizophrenia themselves, or that of their families or caregivers. Future studies should therefore investigate the relevance of different problems in functioning from the perspective of individuals with schizophrenia and their families and caregivers so as to enable comparison with the current network findings. A further potential limitation concerns the small number of health professionals from the African and Eastern Mediterranean WHO regions among participants in the six Delphi studies. There were several reasons for this, including difficulty contacting them due to their limited internet access and the lower number of specialized health professionals in these regions. Nevertheless, the expert panels who took part in the Delphi studies did include representatives from all six WHO regions, and the results obtained largely supported the worldwide validity of the Brief ICF-CS for schizophrenia from an expert perspective.
CONCLUSION
In summary, network analysis is a useful approach for exploring the network structure of the Brief ICF-CSs and for identifying the most important problems in functioning within the estimated network. Accordingly, within the Brief ICF-CS network, the specific aspects of functioning with high centrality and relations between ICF categories were identified, highlighting the relevance of these categories in the monitoring and treatment of individuals with schizophrenia. Notably, we found support for a three-component model underlying the Brief ICF-CS for schizophrenia and our results further confirm its validity, insofar as the data on which this analysis is based were acquired by surveying professionals from all six WHO regions. The current study therefore makes a major contribution to the literature on validation of the Brief ICF-CS for schizophrenia by evidencing the underlying network and providing a framework for the design of new comprehensive interventions aimed at improving the functioning of individuals with schizophrenia.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board Committee of University of Barcelona (IRB00003099). The patients/participants provided their written informed consent to participate in this study.
Substance Abuse Treatment, Prevention, and Policy
Measuring Activities in Tobacco Control across the EU: The MAToC
Background: The objectives of this study were (a) to develop a comprehensive and economical tool to estimate tobacco control (TC) activities in single EU member states, and (b) to compare TC activities between member states of the EU. This article provides the questionnaire and gives a benchmark of EU member states according to their perceived TC activities. Methods: An international workshop was specifically initiated to develop the questionnaire "Measuring Activities in Tobacco Control" (MAToC). TC experts from 8 European countries participated and chose 40 items to cover 11 general topics of TC. At the World Conference on Tobacco or Health in Helsinki 2003, participants were asked to fill out the questionnaire. N = 142 participants from EU member states returned questionnaires.
Background
In Europe, current tobacco control (TC) research faces a series of problems: (a) country TC profiles in different countries describe and compare only single TC measures [1,2]; (b) TC is a very comprehensive and rapidly changing field. Gathering comprehensive information about TC is time consuming, integration of data across countries is hard to accomplish, and updating the data is a constant need. While European Union (EU) wide legislation is adopted across countries in a similar manner, country-specific regulations differ in their enforcement and their implementation.
The goal of this study was to (a) develop a tool to estimate TC activities in single EU member states that is comprehensive in scope, economical in its data assessment, and valid in describing a country's TC, and (b) compare the perceived TC activities of EU member states according to this tool.
To accomplish these goals, we developed a brief questionnaire for expert ratings. The advantage of expert ratings is three-fold: 1. Expert ratings are an economical way to gather information about TC activities. 2. Expert ratings, we assume, include knowledge about what is "no activity at all" and what is "desirable activity". 3. Expert ratings may help to fill the gap between knowledge about what is desirable in TC and what is realised in a country. When, for example, an age restriction on smoking cigarettes exists, expert ratings may give a valuable estimate about the degree to which this law is realised. Previous research has indicated that questionnaires or expert ratings can be used to a satisfying degree to assess the quality of TC policies [3,4].
Methods
The item pool for the MAToC (Measuring Activities in Tobacco Control) was generated during an international workshop specifically initiated to develop the questionnaire. TC experts from 8 European countries participated, and the 11 topics chosen to be covered by the 40 items of the questionnaire were: taxing, smuggling, product control, smoking cessation, media, protection from exposure to environmental tobacco smoke (ETS, i.e., second-hand smoke), health care, research, politics, population, and prevention. These were the topics that the experts agreed play a vital role across EU countries and that, according to the experts, have shown efficacy in changing a country's smoking rate or smoking climate. Advertising was a topic too, but was excluded in this analysis due to item wording that was misunderstood by many respondents. The questionnaire includes questions about the respondent's country, smoking status, and field of work. Response patterns range from yes/no/don't know answers to 5-point Likert scaled items indicating agreement with the statement from "not at all" (1) to "absolutely" (5). The difference in response pattern reflects the difference in the required information. The 5-point Likert scale allows respondents to rate to what extent a statement is implemented in a country, while yes/no questions were used in items where the MAToC asks for facts or activities reflected in existing legislation (like: Are health warnings required?). The tobacco-control related items of the MAToC, their response pattern and the categorisation into subscales are illustrated in Table 1.
Data were gathered from 142 subjects from 14 different EU member states. All subjects participated in the World Conference on Tobacco or Health in Helsinki in August 2003. With this, we assumed that respondents were at least somewhat knowledgeable in the field of tobacco control. At the registration desk of the conference, participants were randomly contacted by research assistants and were handed the questionnaire and its internet address.
They were asked to fill out the 40-item questionnaire on paper at the conference site or to fill out a version online afterwards. The online version was made available to reach more respondents. Additionally, all participants who provided their e-mail address in the conference participants' book were contacted via e-mail afterwards. Among the participants, 52% were female; 19% indicated that their field of work was in education, 28% in treatment, 30% in research and 23% in policy. Confidence in their answers was "very confident" for 36.6% of the respondents, 55.2% were "quite confident", 6.2% "not very confident", 0.7% "not confident at all" and 1.4% did not indicate their confidence. Subjects with missing data and subjects "not confident at all" were excluded from the analysis.
The statistical analysis was restricted to calculating raw scores of each subscale per country and the average rank of a country across all subscales and all countries. Items belonging to one subscale were summed and then divided by the number of items of this subscale. The prevention subscale was calculated differently, because the experts felt that just indicating whether there is a legal regulation does not describe prevention strategies appropriately. Therefore the questions about compliance were added and the subscale was calculated as follows: if participants indicated that there was a regulation (question 4 or 5), this response was counted as 1 when compliance was rated with a 5 on the Likert scale (0.8 when it was 4, 0.6 when it was 3, 0.4 when it was 2 and 0.2 when it was 1). The range of item 6 was transformed accordingly. From all respondents of one country an average raw score could be calculated for each subscale. With this information countries were ranked in each subscale (not shown in the table) and the average rank of a respective country across all subscales was calculated (shown in Table 2). This procedure is more appropriate than summing up the raw scores of all items, since items of different subscales might correlate negatively. A sketch of these scoring rules is given below.

Footnote to Table 2: the countries are sorted by their mean rank across the different dimensions of tobacco control activities; the range in columns 5 to 7 indicates the reference points of the dimensions: 0-1 with 0 = No and 1 = Yes; the range in columns 8 to 15 indicates the reference points 1-5 with 1 = "not at all", 5 = "absolutely"; * = these figures represent male adult smoking, since smoking rates for the general population were not available; ** = percentage smokers in the adult population, taken from the WHO-report for the
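The sketch below (in R, with hypothetical responses and made-up country scores, since neither the raw data nor the exact item numbering are reproduced here) illustrates the scoring rules described above: ordinary subscales as item means, the prevention subscale weighting a reported regulation by its rated compliance, and the country benchmark as the mean rank across subscales.

```r
# Hypothetical responses of one expert; item sets and values are illustrative only
resp <- list(
  taxing     = c(4, 5, 3),            # Likert items (1-5) of one ordinary subscale
  prevention = list(regulation = 1,   # yes (1) / no (0): question 4 or 5
                    compliance = 4)   # 1-5 rating of how well it is enforced
)

# Ordinary subscales: sum of items divided by the number of items
taxing_score <- mean(resp$taxing)                              # 4.0

# Prevention subscale: a regulation counts only as far as it is enforced,
# so the 1-5 compliance rating is rescaled to 0.2-1.0 (5 -> 1, 4 -> 0.8, ...)
prevention_score <- resp$prevention$regulation *
  (resp$prevention$compliance / 5)                             # 0.8

# Country benchmark: rank countries within each subscale, then average the ranks
scores <- data.frame(taxing     = c(4.0, 2.5, 3.2),
                     prevention = c(0.8, 0.6, 0.9),
                     row.names  = c("CountryA", "CountryB", "CountryC"))
ranks  <- apply(-scores, 2, rank)   # rank 1 = most active in that subscale
rowMeans(ranks)                     # mean rank across subscales (as in Table 2)
```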
Results
Subjects from the tobacco field in Finland gave the highest TC values to their country, indicating that Finland was the most active in TC among the countries in our sample, followed by Sweden, Ireland, the UK and the Netherlands (Table 2). The least active countries in TC were Greece and Germany, behind Austria, Spain, Belgium and Portugal. Italy, France and Denmark constituted the middle field. Table 2 also gives country profiles across the different fields of TC. For example: the UK was ranked 4th overall. While they had a leading position in the field of smoking cessation (an average agreement to the statements about support for smoking cessation of 3.78, with 1 indicating no agreement at all and 5 indicating absolute agreement), they put less effort into the protection from ETS (an average agreement of 2.68) when compared on the European level. In this fashion each country shows its own, individual profile, and countries at the end of the ranking also have dimensions where they are at the European average on TC or even better. For example, Germany was rated last in the EU overall, but looking at the dimension of prevention, participants from Germany evaluated their country with 0.72, which is comparably high among the EU member states. By giving average raw scores, Table 2 also indicates the size of the difference between certain EU member states in a certain TC field. For example: protection from ETS is rated very high in Finland (4.4) and very low in Germany (1.7), while the difference between Ireland and the UK is very small (3.67 and 3.68). Comparisons between the dimensions reveal that the evaluation of activities in research can be improved in all EU member states (highest score of 3.00), while support by the health care system is estimated fairly high in all EU member states (lowest score 2.00). Table 2 also indicates prevalences for smoking in the adult population. Due to the small number of participants and to the differences in quality of the smoking rates, more sophisticated analyses of the relation between smoking prevalence and different TC activities were not possible [5].
Discussion
The MAToC can be answered easily and quickly, so it can be applied to large samples of respondents. Further research with it appears to be an economical way to assess TC in European countries.
The questionnaire developed may provide a profile of TC across European countries by indicating benchmarks for countries, as illustrated here. It also indicates the actual amount of TC in a specific area in a specific country as it is perceived by experts from the tobacco field. This information is valuable for each country in terms of where they stand with their efforts in TC in comparison to other countries, as well as where there is still room for improvement in their own country. What does this mean for specific countries? Germany, for example, is ranked lowest in TC overall. Looking at specific dimensions, one can see that in Germany protection from ETS is perceived the worst in Europe by far. German efforts in ETS could benefit from looking into Finnish efforts in this field, since Finland is the leading country in this dimension. In prevention, on the other hand, Germany seems to be on an average European level. Another example is Finland: they can benefit from the information provided by the MAToC concerning research efforts. Even though Finland is leading in comparison to the other states, the raw score in this dimension leaves room for improvement. By assessing data with this instrument longitudinally, changes in TC can be evaluated. Institutions like the European Monitoring Centre for Drugs and Drug Addiction (EMCDDA) in Lisbon could use such an instrument after it has been psychometrically tested.
The results are plausible since they are consistent with the findings that exist so far. The findings correspond to the amount of TC provisions. Furthermore, they correspond to those of Fagerstrom et al. [6], according to which Finland and Sweden were found to be among the highest ranks and Germany and Austria among the lowest ranks with regard to the anti-smoking climate. Also, looking at the smoking prevalence taken from the WHO report, the 2 countries with the highest prevalence (Spain, Greece) are ranked very low, and the ones with the lowest prevalence are ranked the highest in TC (Finland, Sweden). But since the mean rank is an aggregated value, the relation between prevalence and specific TC dimensions needs to be analysed in more detail, for example with the development of multi-dimensional models. This could not be achieved by this analysis because of the small number of respondents. Further validation and examination of the psychometric properties of the MAToC is necessary.
The data presented here fulfil the purpose of illustrating what is possible with such an instrument. Yet, there are some limitations to this study: (a) The psychometric properties of the instrument need to be examined. While the face validity is high, other forms of validity have yet to be tested empirically. We assume that the participants we chose have a valid picture of TC in their country, but further research should validate the MAToC by comparing it with other instruments and by examining the relationship between smoking prevalence and TC, as was previously done for the US [7]. Additionally, a validation could also include trends in lung cancer, the number of ex-smokers and sales of cessation products. However, this could not be achieved by this study since it is the first study to quantify a wide scope of TC, and to our knowledge no other comparable instrument exists. In addition to the lack of comparable instruments, there is also a lack of comparable data about smoking prevalence due to different definitions. Re-test reliability needs to be examined so that this instrument can be used to assess longitudinal changes, and inter-rater reliability needs to be examined to give a picture of how well the instrument measures different perceptions of experts in each country. These analyses will also provide information, e.g. whether a subscale consisting of just one item (like smuggling) is valid and reliable.
(b) Even though we tried to increase the number of participants, the number of experts per country still differs and is in places rather small. This might be due to a lack of TC experts in some countries. Since representativeness was not the main goal of this study, this question needs to be addressed in further research with the tool. Future research needs to identify organisations that can provide a sufficient number of experts, so that a larger-scale study can be carried out with the MAToC. (c) The items regarding regulation of advertising had to be excluded. However, this topic represents an important field of tobacco control and should be included in a revised version of the MAToC. These items should then be structured like the prevention items, first asking whether there is a regulation restricting advertising and then asking participants to indicate how comprehensive it is. (d) Even though we assume that the experts chosen were knowledgeable about TC in their country, this instrument does not measure the actual level of TC but the respondents' perception and knowledge of it. To measure the actual level of TC, different instruments need to be developed. Results might differ, and further research could then compare perceptions with actual levels and examine their relationship more closely.
We conclude that the approach used in this study is valuable in delivering the information sought. This brief, easy-to-complete questionnaire can be used to compare TC activities across EU countries as they are evaluated by experts from the tobacco field. Benchmarking of EU member states regarding TC in general and in specific areas is possible and can deliver clear-cut information to support political decision making. This procedure could also serve as a model of practice for other areas in the EU, such as alcohol or legal drug control. Assessing different areas of control policy could lead to a comprehensive description of drug control; working patterns of policies could be identified and policy making could be tailored to country-specific needs. However, the present analysis is just a first step in the area of tobacco control.
Keypoints
• This scale provides a quantitative ranking of European countries indicating their perceived activity in various fields of tobacco control and relates them to smoking prevalence.
• Decision makers and advocates get an overview of differences across countries that helps and supports them in developing future plans.
• The tool is a first step in quantifying tobacco control; further research is needed to optimize and improve the measurement of tobacco control.
BOARDS OF DIRECTORS AND FIRM INTERNATIONALIZATION: A BIBLIOMETRIC REVIEW
How to cite this paper: Herrera-Barriga, R., & Escandon-Barbosa, D. (2023). Boards of directors and firm internationalization: A bibliometric review [Special issue].
INTRODUCTION
The study of corporate governance began with the Cadbury report in 1992 (Committee on the Financial Aspects of Corporate Governance, 1992), followed by the publication of the Organisation for Economic Co-operation and Development's (OECD, 1999) corporate governance principles in 1999. However, the push of companies towards good corporate governance practices arose after the financial scandals that occurred in major companies around the world at the end of the twentieth century and the beginning of the twenty-first century (e.g., HIH Insurance, Onte-Telm, and Enron, among others). The increased popularity of corporate governance as a topic of study arose after the financial crisis of 2008, and its literature usually focuses on topics such as the relationship of corporate governance with corporate performance (Bhagat & Bolton, 2008; Brahmana et al., 2018; Hermuningsih et al., 2020), the relationship of corporate governance to social responsibility (Van den Berghe & Louche, 2005; Lending et al., 2018), and the role of boards of directors (Adams et al., 2010; El Gammal et al., 2020). Boards of directors have become a fundamental subject of study, considering that their effective functioning constitutes a governance mechanism through which shareholders can control top executives, mitigating agency problems.
As a result, various bibliometric studies on the literature regarding boards of directors have been published, covering issues such as board qualities (Zheng & Kouwenberg, 2019; Trinarningsih et al., 2021), the role of the board chair (Banerjee et al., 2020), and board structure in emerging markets (Ararat et al., 2021). However, the use of this sort of study in research on the link between the board of directors and corporate internationalization is limited and recent (Lee et al., 2022; Debellis et al., 2023).
Therefore, the purpose of this analysis was to examine the evolution of the literature on the relationship between boards of directors and firm internationalization by addressing the following research questions:
RQ1: What is the volume of publications over time, the geographic distribution, and the patterns of collaboration in research on the relationship between boards of directors and firm internationalization?
RQ2: Which journals, authors and publications have had the most significant influence on research on the relationship between boards of directors and corporate internationalization?
RQ3: What study variables, theories and methods have been applied in the development of research on the relationship between boards of directors and the internationalization of companies?
This review seeks to examine the role of boards of directors in the internationalization of companies, describing the different practices and characteristics that influence companies' international strategies and activities. To address these questions, 125 articles extracted from the Web of Science (WoS) and Scopus databases were analyzed. Bibliometric methods such as citation analysis and co-occurrence analysis of keywords were adopted; the latter, together with a content analysis of the abstracts, provided insight into the evolution of the literature over time. This paper contributes to the literature by providing a bibliometric review of the literature on boards of directors, differing from other published articles because the analysis not only addresses the characteristics of boards but specifically focuses the search on research that examines their relationship with, or impact on, the internationalization of companies.
The paper is structured as follows. In Section 2, the methodology used and the search strategy are described. Section 3 presents the results, answering the research questions based on the analysis of the information obtained from the databases and the VOSviewer software. Section 4 presents the discussion based on the content of the selected articles, outlining the state of the literature in terms of theories, themes and methodologies applied. Finally, Section 5 provides the conclusions.
RESEARCH FRAMEWORK
The bibliometric analysis employs several quantitative methods to examine the bibliographic information associated with articles published in a certain field of knowledge.
The main characteristics associated with the production of knowledge about the investigated topic are revealed via these analyses (Zheng & Kouwenberg, 2019). As a result, this bibliometric review was created to supplement the literature and findings from previous studies.
Data sources and search strategy
The WoS and Scopus databases were used to perform the bibliometric analysis on the relationship between the board of directors and the internationalization of companies, because they are considered objective data sets for the literature review: they include the journals with the highest impact factors and provide detailed information on sources, authors, institutions, countries, and citations, which are the indicators usually used to develop a solid bibliometric analysis. Following the guidelines of Moher et al. (2010), we followed the four steps for identifying and selecting the information for a bibliometric review, as described in Figure 1.
The first step was to identify articles related to boards of directors and their relationship with the internationalization of companies, for which an initial search was performed with the following search equation: "boards of directors" AND "internationalization". Since not all scholars have referred to the topic of boards of directors with the same terms, the search was expanded to include similar terms such as "corporate governance" and "board structure". We therefore used the following search equation: search term = "boards of directors" or "corporate governance" or "board structure" and "internationalization".
The selection of published documents began with a search of titles, abstracts, and keywords for all types of documents in both databases. This first search yielded 329 publications from WoS and 252 from Scopus.
In the second step, publications were filtered according to the following research areas: business, administration, and economics. This classification resulted in the exclusion of 31 WoS articles and 32 Scopus articles.
In the third step, from the 518 publications obtained, those that were articles written in English were selected, because most of the research is published in this language. With the 428 articles obtained, a manual review was carried out comparing both databases to identify duplicated articles, resulting in a database of 282 articles. Subsequently, the titles and abstracts were manually reviewed to determine their relevance to the research topic, especially to identify articles investigating the relationship between the board of directors and the internationalization of companies. This led to the exclusion of 157 papers, resulting in a final database of 125 articles published between 1998 and 2022.
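As a rough illustration of the merge-and-deduplication step described above, the sketch below combines two database exports and removes duplicates on a normalized title. The file names and column names ('title', 'year', 'doi') are assumptions for the example, not the authors' actual export format.

```python
# A minimal sketch, assuming the WoS and Scopus exports are CSV files with
# 'title', 'year' and 'doi' columns (file and column names are hypothetical).
import pandas as pd

wos = pd.read_csv("wos_export.csv")
scopus = pd.read_csv("scopus_export.csv")
wos["source_db"] = "WoS"
scopus["source_db"] = "Scopus"

combined = pd.concat([wos, scopus], ignore_index=True)

# Normalize titles so case/punctuation differences do not hide duplicate records
combined["title_key"] = (
    combined["title"]
    .str.lower()
    .str.replace(r"[^a-z0-9 ]", "", regex=True)
    .str.strip()
)

deduped = combined.drop_duplicates(subset="title_key", keep="first")
print(f"{len(combined)} records combined, {len(deduped)} after removing duplicates")
```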
Data analysis
Once the articles were selected, the records were downloaded from the WoS and Scopus databases, including information such as title, authors, affiliations, journal name, abstract, and number of citations. Several descriptive analyses were performed using the WoS and Scopus tools, as well as the VOSviewer software. The WoS and Scopus tools were used to identify patterns within the database, such as the number of publications by year and country, and to build tables describing the average number of authors per publication, identify influential articles and journals, and carry out citation analysis, which is recognized as a means of establishing the academic impact of publications.
The VOSviewer software was used to identify co-authorship and thematic foci by mapping keyword co-occurrence. By reviewing the content of the articles, they were classified according to their thematic approach, describing the variables studied around the board of directors and the internationalization of companies, the theories adopted, and the research methods employed.
RESEARCH RESULTS
The results of the bibliometric analysis are presented in this section, following the research questions posed in the Introduction.
Volume, collaboration patterns, and geographical distribution
The first analysis refers to the volume of literature. Research on boards of directors is broad because it encompasses different thematic axes; however, a total of 125 journal articles were identified as relevant to the relationship of boards of directors with the internationalization of companies. The evolution of the literature was analyzed based on the progression of the volume of publications over time (see Figure 2). Research on the topic began to take shape in the late 1990s with key works focused on agency theory (Sanders & Carpenter, 1998) and strategy theory (Sherman et al., 1998), analyzing the relationship between the degree of internationalization of firms and their governance, represented under the term board or council. Although the literature on corporate governance began to boom in 2002, until 2012 there were few publications per year on the relationship between the board of directors and the internationalization of companies. The growth of emerging economies then generated a turning point in the evolution of the literature, which began to focus on this relationship in emerging countries, with studies on small and medium-sized enterprises (SMEs) and family businesses standing out. Finally, in the last two years, research on the subject has increased significantly, continuing the interest in emerging markets but involving a more detailed analysis of the different structural characteristics of boards of directors. Research on the relationship between the board of directors and company internationalization was mainly carried out by 2 to 4 authors, accounting for 84% of the publications (117 articles). Researchers worked in small groups; 5 or more authors wrote 8 articles (6.2%), and only 12 articles (9.6%) were written by one author. Table 1 shows the distribution of the number of authors per article.
Authors with at least two articles are shown in Figure 3; this threshold was chosen taking into account the tendency of authors to work in small groups. van Essen, M. is the most central author in the research, with a collaboration of 4 authors. No author had a particularly large number of publications: Calabró, A. was found with five publications, followed by 6 authors with 3 publications. Figure 4 shows the geographical distribution of the literature obtained from the WoS and Scopus reports. In total, authors from 41 countries contributed to the publication of the selected literature; the figure shows the 15 countries with the most research on the relationship between the board of directors and company internationalization. Authors from the United States (US) produced the most literature on the subject, with 38 publications (17.7%), followed by England with 20 articles (15.9%) and Italy with 18 articles (9.3%). European countries dominated the top 15 with nine countries, followed by four Asian countries, one North American country and one from Oceania. The US predominates as a country in this field of knowledge, and as a continent Europe leads with 82 publications (38.1%), indicating that the predominance in the literature comes from developed economies, where the OECD has become an advocate of good corporate governance practices (Trinarningsih et al., 2021). However, the number of articles originating from emerging economies such as China, India, Taiwan, and South Korea, with 47 publications (21.9%), is remarkable, indicating that the topic has attracted the attention of researchers from these economies and evidencing its global relevance, given that academic production has originated on most continents. As with the number of publications by country, US institutions are among the most productive in research on the relationship between the board of directors and the internationalization of companies. Table 2 shows that, although the IIM System is in first place with 9 publications, a total of 5 institutions in the ranking are from the US, with a total of 21 publications on the subject.
Journal analysis and citations
For this analysis, descriptive statistics in Excel were used to integrate the data obtained from WoS and Scopus and to generate tables that identify patterns within the databases related to the number of articles per journal and the most cited articles and authors (see Tables 3, 4, and 5). In each table, the number of citations obtained in each of the two databases is reported separately. The analysis of the number of citations makes it possible to establish the academic impact of the publications by identifying the most influential journals, articles and authors in the literature on the relationship between the board of directors and the internationalization of companies.
The journal with the largest number of published articles (Table 3) is International Business Review, whose publisher is Elsevier and which is indexed in WoS and Scopus. In second place is the Journal of Business Research from the same publisher, and in third place is the Global Strategy Journal published by Wiley, both of which are indexed in WoS. Fourth place is occupied by the Journal of World Business. Within this top 4 are some of the main administration and management journals, demonstrating the interdisciplinary nature of the subject and, in particular, highlighting the field of internationalization of companies. The Journal of Management and Governance, ranked fifth, is the first journal in the ranking with a focus on corporate governance; other journals in this field, such as Corporate Governance: An International Review (ranked 8), relate both fields (governance and internationalization). However, most of the journals within this top 15 belong to the field of internationalization. The data reveal an interesting pattern in terms of the number of citations: journals that have published a small number of articles have the highest citation counts among the top 15 journals, as is the case of the Academy of Management Journal, with two articles and 648 citations in Scopus for one of them, the Strategic Management Journal, with two publications and 393 citations in WoS, and Entrepreneurship Theory and Practice, with the same number of publications and 307 citations in WoS. The Academy of Management Journal publication is a representative article of the beginnings of research on the topic, published in 1998, and the two Strategic Management Journal articles were published in 2003 and 2014. For their part, the articles in Entrepreneurship Theory and Practice were published in 2012 and 2017, when the number of publications on the subject began to grow. Therefore, despite the scarcity of articles on the topic in these journals, the articles published have had a high impact on the field. It is worth noting that the most cited journals are among the leading journals in the fields of international business, management and strategy (e.g., International Business Review, Journal of World Business, Strategic Management Journal, Entrepreneurship Theory and Practice), highlighting the centrality of studies in these three fields. The two journals focused on the study of corporate governance, Journal of Management and Governance and Corporate Governance: An International Review, also stand out in the list of the most cited journals.

Another important aspect of the bibliometric review is to identify the most representative articles within the line of research. Citation analysis is used to determine the leading research with the greatest academic impact in the field of study. Table 4 lists the 20 most cited articles in the WoS and Scopus databases. The results indicate that the article by Sanders and Carpenter (1998), entitled "Internationalization and Firm Governance: The Roles of CEO Compensation, Top Team Composition, and Board Structure", is the most cited paper in the literature. This article, which belongs to the Scopus-indexed database, is representative of the beginnings of research on the relationship between the board of directors and firm internationalization, constituting a theoretical basis for the literature of the following years. The following two articles on the list are also among the publications made within the first five years of research on the subject: Tihanyi et al. (2003) with the article "Institutional Ownership Differences and International Diversification: The Effects of Boards of Directors and Technological Opportunity" and Carpenter et al. (2003) with "Testing a Model of Reasoned Risk-Taking: Governance, the Experience of Principals and Agents, and Global Strategy in High-Technology IPO Firms". It should be noted that these papers are included in the WoS database of indexed journals. Nineteen of the most cited papers are empirical studies; only the paper by De Massis et al. (2018), entitled "Family Firms in the Global Economy: Toward a Deeper Understanding of Internationalization Determinants, Processes, and Outcomes", is a theoretical study (ranked 8). Finally, the data in Table 3 support the leading position of the Academy of Management Journal, Strategic Management Journal, and Entrepreneurship Theory and Practice, as 6 of the top 20 cited papers were published in these journals.

Table 5 lists the most representative authors according to the number of citations in journals indexed in WoS and Scopus. The results reveal the leadership of US academics: 7 of the top 10 authors are from the United States. The top three authors according to the number of citations are Carpenter, M. A., Sanders, W. G., and Hitt, M. A. None of these authors has a particularly large number of publications; Hitt, M. A. has 3 articles, the highest number of publications on the list.
Thematic approaches to the board of directorsinternationalization of companies literature
A keyword co-occurrence analysis was conducted to identify the frequently studied themes around the relationship between the board of directors and company internationalization, as well as their underlying relationships. According to Zupic and Čater (2015), when words co-occur frequently in papers, the concepts behind those words are closely related; therefore, the result of co-occurrence analysis is a network of topics and their relationships that represents the conceptual space of a field of knowledge and reveals patterns and trends in the topics studied within the field.
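As an illustration of the counting behind such a map, the sketch below tallies how often pairs of keywords appear together in the same paper; the keyword lists are invented for the example, and VOSviewer adds clustering and visualization on top of counts of this kind.

```python
# A small sketch of how a keyword co-occurrence network can be built from
# author keywords; the keyword lists below are invented for illustration.
from itertools import combinations
from collections import Counter

papers_keywords = [
    ["corporate governance", "internationalization", "performance"],
    ["internationalization", "ownership", "diversification"],
    ["corporate governance", "ownership", "performance"],
]

cooccurrence = Counter()
for keywords in papers_keywords:
    # every unordered pair of keywords appearing in the same paper is a co-occurrence
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

for (kw1, kw2), count in cooccurrence.most_common(5):
    print(f"{kw1} -- {kw2}: {count}")
```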
The keyword co-occurrence analysis conducted (see Figure 5) identified 53 keywords. The five most co-occurring keywords were "corporate governance" (89 links), "internationalization" (57 links), "performance" (47 links), "ownership" (36 links), and "diversification" (22 links). These results reveal that the studies consider a close relationship between corporate governance and internationalization, with an emphasis on the study of the ownership structure of companies and on internationalization in terms of performance and international diversification. The analysis carried out in VOSviewer shows that the 53 identified keywords form 4 groups according to the strength of their relationships. The first group, identified in Figure 5 with red lines, is made up of 16 words, among which diversification and directors have the highest numbers of links (40 and 46, respectively). In this group, agency theory and words such as experience, senior management team compensation, business performance, research and development (R&D), and internationalization of companies stand out. A second group, identified with green lines and made up of 13 words led by the word ownership with 47 links, has in common the resource-based perspective associated with the most-linked words such as board composition, board of directors, family businesses and management; in terms of internationalization it is related to entry modes. The third group, identified with blue lines and made up of 13 words, with the term internationalization as the center of the network with 51 links, shows stronger relationships with the words ownership structure, performance, innovation, governance and compensation. The last group, identified with yellow lines and made up of 11 words, with corporate governance as the center of the network with 52 links, has as its most strongly related words business groups, emerging economies, foreign direct investment (FDI), strategy, and boards.
DISCUSSION OF THE RESULTS
Several research studies have been conducted to analyze the relationship between boards of directors and the internationalization of companies; however, their number is limited. Through a content analysis of the keywords and abstracts of the selected articles, several characteristics were identified in the publications that are useful for obtaining a broader overview of the advances in the field of knowledge. The main theoretical approaches on which the research focused were: theories of internationalization (34 articles), corporate governance theory including the definition of the board of directors (31 articles), the resource-based perspective (23 articles), agency theory (21 articles), and institutional theory (11 articles). Other theories used in the studies were upper echelons theory, the family business and international business literature, signaling theory, social capital theory, and the information perspective, among others. Of the 125 articles, 18 were qualitative studies, of which 6 were literature reviews and the remaining 12 were case studies. A total of 106 articles were quantitative studies using samples of large companies, SMEs and family-owned companies in different countries, as well as studies of private and state-owned multinationals. One of the studies was conducted under a mixed methodology (Arreola & Bandeira-de-Mello, 2018), integrating a quantitative study with a sample of Brazilian multinational companies and a case study.
Table A.1 in the Appendix presents a summary of the articles reviewed, classified according to six identified themes: corporate governance structure, board of directors structure, board capital, board of directors networks, international diversity in boards of directors, and a last category gathering articles dealing with other aspects of boards of directors different from those previously presented. These six topics are usually positively related to the internationalization of companies in its different forms of measurement, such as entry modes, degree of internationalization, export intensity, propensity to internationalize, export performance, international performance, and FDI, among others. Mainly, studies analyze the influence of different board characteristics on firm internationalization; however, some studies analyze the opposite relationship, i.e., the influence of firm internationalization on the composition of boards of directors (Gozzi et al.). The first topic analyzed is the corporate governance structure. To compile the articles on this topic, it was taken into account that corporate governance structure is defined by aspects of ownership concentration and the intensity of state participation (Liu et al., 2020). In addition, it encompasses the distribution of bargaining power between investors and management, as well as the managerial remuneration scheme composed of bonus and stock-based elements. The bargaining power of each stakeholder is determined by the distribution of shares between short- and long-term oriented investors, and by the robustness of management against possible shareholder interference (Guerini et al., 2022). Reviewing the abstracts and content of the articles, one part analyzes the ownership structure of private companies and its effects on internationalization variables such as the level and degree of internationalization, export intensity, FDI, and international investment; Guillen (2016), for instance, compares public and private companies in Norway. Among these studies, it stands out that most of the quantitative ones are conducted with samples of firms in emerging markets, applied mainly to samples of SMEs and family firms in Asian emerging economies and in developed countries such as the US and Germany.
CONCLUSION
This study analyzed the profile of articles published in the Web of Science and Scopus databases from 1998 to 2022 on the relationship between the "board of directors" and the "internationalization of companies" in the fields of business, management and economics. The board of directors is a key player in determining the success of a company's internationalization strategy, being responsible for providing the resources, finding the opportunities and generating the strategies for companies to enter foreign markets. This bibliometric analysis helps researchers to know the current research trend, the most influential research articles, and the most studied topics in the relationship under study, giving them an overview that can be used later in applied research that seeks to test or examine in greater depth the behavior of the identified variables in different business contexts globally. A total of 585 articles were retrieved that contained the words "board of directors", "corporate governance" or "board structure", together with the word "internationalization", in the title, abstract or keywords. After applying the different steps of a bibliometric review, a database of 125 articles extracted from the Web of Science and Scopus databases was obtained. The field had few publications between 1998 and 2011; after this stagnation, a growing trend began in 2012, with a marked increase in 2018 and a large number of articles published in the last two years, 2021 and 2022 (accounting for 33% of the total number of articles analyzed). Most of the published articles were written by researchers from US and European institutions; however, China, Taiwan and India stand out as the most productive emerging economies in the research field. This is also reflected in their performance at the institutional level, where an institution in India appears as the most productive, although these institutions received a lower impact with respect to institutions in the United States and England.
The co-occurrence of keywords in this study revealed emerging research themes in the relationship between boards of directors and the internationalization of firms. Five board research themes were found, all related to firm internationalization variables: corporate governance (CG) structure, board of directors (BD) structure, board capital, board networks, and international diversity, plus a final category of other characteristics. On the other hand, most of the published articles related to boards of directors frequently discuss the degree of internationalization and international performance, which become one of the ultimate strategic objectives of the board in a firm.
The results provide several interesting contributions. First, this study analyzes the profile of publications on the relationship between boards of directors and firm internationalization: authors, countries, institutions, journals, influential articles, and years of publication. Second, the study maps the thematic structure using co-occurrence analysis and a content analysis of article abstracts, helping researchers to identify stagnation in topics and to move the field of knowledge forward. Third, this research suggests the existence of great potential for more research on the topic in emerging Latin American countries, as well as on topics beyond governance and board structure, such as board equity; this could greatly improve the quality of research.
Some limitations are present in this bibliometric review. First, this study only includes publications indexed in the Web of Science and Scopus databases, which, although they are the most representative because they index the highest-impact journals, leave out literature from other research databases in which publications from countries where no research on the topic was identified, as well as articles in other languages, might be found. Second, it is possible that some articles on the topic were not included because other similar terms were not included in the search equation. Finally, although the database was cleaned, there may be articles that are not strictly related to the study variables.
The research began with the study of corporate governance structure and its effect on internationalization, starting in 1998, and evolved towards topics such as board structure and board capital and their effects on internationalization. However, it can be seen that in the last three years (2020-2022) the corporate governance structure has continued to be the focus of research, with topics on ownership and compensation, while the relevance of board capital, which has had several publications in that period, is emerging as a subject of future research.
Figure 1. Flow chart of the search strategy
Figure 2. Number of publications over time
Figure 3. Network visualization map of authors active in research with a minimum of 2 publications
Figure 4. Number of publications by country
Table 1. Number of authors per publication
Table 2. Institutions with 4 or more published articles
Table 3. Top 20 journals with the largest number of published articles
Table 4. Top 20 most cited articles
Table 5. Top 10 most cited authors
Older patients with COVID‐19 and neuropsychiatric conditions: A study of risk factors for mortality
Abstract Background Little is known about risk factors for mortality in older patients with COVID‐19 and neuropsychiatric conditions. Methods We conducted a multicentric retrospective observational study at Assistance Publique‐Hôpitaux de Paris. We selected inpatients aged 70 years or older, with COVID‐19 and preexisting neuropsychiatric comorbidities and/or new neuropsychiatric manifestations. We examined demographics, comorbidities, functional status, and presentation including neuropsychiatric symptoms and disorders, as well as paraclinical data. Cox survival analysis was conducted to determine risk factors for mortality at 40 days after the first symptoms of COVID‐19. Results Out of 191 patients included (median age 80 [interquartile range 74–87]), 135 (71%) had neuropsychiatric comorbidities including cognitive impairment (39%), cerebrovascular disease (22%), Parkinsonism (6%), and brain tumors (6%). A total of 152 (79%) patients presented new‐onset neuropsychiatric manifestations including sensory symptoms (6%), motor deficit (11%), behavioral (18%) and cognitive (23%) disturbances, gait impairment (11%), and impaired consciousness (18%). The mortality rate at 40 days was 19.4%. A history of brain tumor or Parkinsonism or the occurrence of impaired consciousness were neurological factors associated with a higher risk of mortality. A lower Activities of Daily Living score (hazard ratio [HR] 0.69, 95% confidence interval [CI] 0.58–0.82), a neutrophil‐to‐lymphocyte ratio ≥ 9.9 (HR 5.69, 95% CI 2.69–12.0), and thrombocytopenia (HR 5.70, 95% CI 2.75–11.8) independently increased the risk of mortality (all p < .001). Conclusion Understanding mortality risk factors in older inpatients with COVID‐19 and neuropsychiatric conditions may be helpful to neurologists and geriatricians who manage these patients in clinical practice.
Here, we examined the risk factors for mortality in hospitalized patients aged 70 years or older with COVID-19 and neuropsychiatric comorbidities and/or new neuropsychiatric manifestations, taking into account general, geriatric, and paraclinical findings.
Participants
This study was part of the "Cohort of Patients with Covid-19 Presenting Neurological or Psychiatric Disorders" (CoCo-Neurosciences) (Delorme et al., 2021). COVID-19 was defined by at least one of the three following criteria: (a) positive severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) real-time polymerase chain reaction (RT-PCR) from swab, or positive antibody tests; (b) typical chest computed tomography (chest CT) findings for SARS-CoV-2 infection during the pandemic; (c) suspected COVID-19 infection according to WHO criteria (2020) (see details of COVID-19 diagnoses in our patients in Figure 1). We selected patients aged 70 years or older, hospitalized with COVID-19 and preexisting neuropsychiatric comorbidities and/or new-onset neuropsychiatric manifestations.
Data collection
On admission and during hospitalization, treating physicians filled out the CoCo-Neurosciences standardized electronic data collection including demographic, baseline data, comorbidities, COVID-19 symptoms, new neurological and psychiatric manifestations, treatments, outcome, and paraclinical findings. Then a group of doctors including neurologists, a geriatrician, and a radiologist reviewed medical records, completed data as well as possible and obtained follow-up information about survivors after hospital discharge.
Criteria of the studied variables
Autonomy was assessed using the activities of daily living (ADL) (Katz & Akpom, 1976). Frailty was assessed with the adjusted Rockwood clinical frailty scale (Rockwood et al., 2005). (Ellul et al., 2020). Critical illness polyneuropathy or myopathy unrelated to COVID-19 was diagnosed in the recovery phase after sedative drug withdrawal in post-ICU wards by intensive care physicians and neurologists.
Other clinical, treatment, and paraclinical variables are defined in Supporting Information S1. We used the neutrophil-to-lymphocyte (N/L) ratio to reflect the combination of inflammatory response and immunity imbalance. We built receiver operating characteristic curves for the N/L ratio and, by maximizing the Youden index, identified the best N/L ratio threshold for predicting mortality risk as ≥ 9.9, with an area under the curve of 0.7. We included this cut-off in the analysis.
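A minimal sketch of this threshold-selection step is shown below, using synthetic data rather than the study cohort: the ROC curve is computed for the N/L ratio against the death outcome, and the cut-off maximizing the Youden index (sensitivity + specificity − 1) is retained.

```python
# Synthetic-data sketch: choose an N/L ratio cut-off by maximizing the Youden index.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
died = rng.integers(0, 2, size=200)                 # 0 = survivor, 1 = non-survivor
nl_ratio = rng.gamma(3, 2, size=200) + 4 * died     # higher ratios among non-survivors

fpr, tpr, thresholds = roc_curve(died, nl_ratio)
youden_j = tpr - fpr                                # sensitivity + specificity - 1
best_threshold = thresholds[np.argmax(youden_j)]

print(f"AUC = {roc_auc_score(died, nl_ratio):.2f}")
print(f"Best N/L ratio cut-off (max Youden index) = {best_threshold:.1f}")
```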
Outcome
The main outcome was the survival time and mortality from COVID-19 at 40 days. All deaths occurring within 40 days from the first symptoms of COVID-19 were recorded. Survival time was the number of days from the first symptoms of COVID-19 until death or the end of the 40-day observation period. The 40-day endpoint was chosen because there was no loss to follow-up before 40 days, and we believe it is long enough to reflect the mortality risk related to COVID-19, taking into account both respiratory and neurological complications.
Statistical analysis
Continuous variables are presented as median [interquartile range (IQR)], as they were non-normally distributed using the Shapiro-Wilk test. Categorical variables are presented as numbers (percentages).
Missing data were not imputed and are described in Supporting Information S2.
The Kaplan-Meier estimator was used to compute the survival curve.
Cox proportional-hazards models were used to assess potential factors for the instantaneous risk of mortality (RoM) from COVID-19 infection. This analysis consisted of three steps: (1) univariate analysis of all variables; (2) forward stepwise analysis including variables from the univariate analysis with p < .1 and less than 15% missing data in nonsurvivors. When two paraclinical variables belonged to the same category (white blood cell (WBC) count and WBC groups, neutrophil count and neutrophil groups, platelets and thrombocytopenia, serum creatinine and glomerular filtration rate), we included the one with the smaller p value. The N/L ratio cut-off was included rather than the continuous N/L ratio value (both with p < .001 on univariate analysis) because the clinical application of a cut-off is more direct. We excluded lymphocyte count because this value was used to calculate the N/L ratio; (3) multivariate analysis, using the three most significant variables to build a multivariate model in order to avoid overfitting given the limited number of deaths (n = 37). p-Values < .05 were considered significant on multivariate analysis.
The proportional hazards assumptions of the Cox models were checked using the Schoenfeld residuals test. Analyses were performed using STATA R version 16.1 (Stata Corp., College Station, TX, USA).
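The sketch below illustrates this survival workflow on synthetic data with the Python lifelines package (the study itself used Stata): a Kaplan-Meier curve, a Cox proportional-hazards model for a few of the candidate predictors, and the Schoenfeld-residuals check of the proportional-hazards assumption. Variable names and values are invented for the example.

```python
# Synthetic-data sketch of the survival analysis steps described above.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 191
df = pd.DataFrame({
    "time": np.minimum(rng.exponential(60, n), 40),   # days, censored at 40
    "adl_score": rng.integers(0, 7, n),
    "nl_ratio_ge_9_9": rng.integers(0, 2, n),
    "thrombocytopenia": rng.integers(0, 2, n),
})
df["died"] = (df["time"] < 40).astype(int)             # event indicator at 40 days

kmf = KaplanMeierFitter().fit(df["time"], df["died"])   # overall survival curve
print(kmf.survival_function_.tail())

cph = CoxPHFitter().fit(df, duration_col="time", event_col="died")
cph.print_summary()                                     # hazard ratios and 95% CIs
cph.check_assumptions(df, p_value_threshold=0.05)       # Schoenfeld residuals test
```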
Univariate analysis for risk factors of mortality
The results of the univariate analysis are presented in Tables 1-3 and Table S1. Variables with p < .1 and less than 15% missing data in nonsurvivors were candidates for the stepwise analysis.
Preexisting factors such as male gender, South Asian ethnicity (as compared with Caucasian ethnicity), a lower ADL score, a higher Rockwood score, and comorbidities including immunodepression and cancer were associated with a higher RoM. Elevated WBC and neutrophil counts, decreased lymphocyte and platelet counts, the presence of neutrophilia and thrombocytopenia, a higher N/L ratio, an N/L ratio cut-off of ≥ 9.9, higher levels of glycemia, sodium, uremia, osmolarity, creatinine, aspartate aminotransferase and serum ferritin, and a lower prothrombin time (in percentage) were also associated with a higher RoM (all p < .05) (Table 3, Table S1).
Multivariate analysis for risk factors of mortality
Among the variables included in the forward stepwise analysis that remained significant (p < .05) (Table 4), the three most significant were entered into the multivariate model. A lower ADL score (HR 0.69, 95% CI 0.58-0.82), an N/L ratio ≥ 9.9 (HR 5.69, 95% CI 2.69-12.0), and thrombocytopenia (HR 5.70, 95% CI 2.75-11.8) were independently associated with mortality (all p < .001).
DISCUSSION
To the best of our knowledge, this is the first study to examine the risk factors for mortality in older inpatients with COVID-19 and neuropsychiatric conditions, taking into account preexisting and new-onset neuropsychiatric conditions, together with general and paraclinical findings. We found a mortality rate of 19.4% at 40 days. Among several factors found on univariate analysis, male sex, a history of brain tumor or Parkinsonism or the occurrence of impaired consciousness were associated with a higher RoM. On multivariate analysis, lower ADL scores for autonomy, an N/L ratio ≥ 9.9, and thrombocytopenia were independently associated with a higher RoM.
Mortality rates
The mortality rate of our patients (19.4%) appears higher than that reported in patients under the age of 70 (between 0% and 18.7%) (Richardson et al., 2020).
Mortality risk factors related to preexisting conditions, baseline functional status, and neurological comorbidities
Unlike studies in patients of all ages (Docherty et al., 2020) or aged 18 years or older (Cummings et al., 2020), studies in older patients often did not show old age (Hägg et al., 2020;Steinmeyer et al., 2020;Zerah et al., 2021), or major comorbidities including chronic cardiac and respiratory disease, as risk factors for mortality (Hägg et al., 2020;Ramos-Rincon et al., 2021;Steinmeyer et al., 2020;Zerah et al., 2021) as older populations are often more homogeneous. However, male sex was associated with higher mortality in adult cohorts (Docherty et al., 2020;Richardson et al., 2020) and remained significant in many studies in older patients (Mendes et al., 2020;Ramos-Rincon et al., 2021;Zerah et al., 2021) as well as in our study.
A lower ADL score was independently associated with a higher RoM, confirming previous studies (Zerah et al., 2021). Reduced autonomy increases vulnerability in these older patients, and nongeriatric physicians should systematically adopt this score in routine practice, as geriatricians do, in order to be aware of these patients' higher risk of mortality, adapt treatment and better explain the prognosis to loved ones.
Globally, data are sparse for patients with brain tumors and COVID-19. A cohort of 87 patients under the age of 25 with cancer, including three patients with brain tumors, showed no deaths related to COVID-19 (Parker et al., 2022). Certain cohorts of older patients have found an association between cancer/malignancy and death while others have not (Genet et al., 2020; Neumann-Podczaska et al., 2020; Ramos-Rincon et al., 2021; Vrillon et al., 2020; Zerah et al., 2021), but these studies did not mention brain tumors (Genet et al., 2020; Neumann-Podczaska et al., 2020; Ramos-Rincon et al., 2021; Vrillon et al., 2020; Zerah et al., 2021). In our study, the presence of a brain tumor was associated with a higher RoM (p = .03), and this association remained significant (p = .012) after stepwise analysis. Brain tumors were active medical conditions in our patients, who were often undergoing chemotherapy, which could contribute to their overall fragile condition and to poor outcomes.
Parkinsonism-related pathology may lead to rigidity of certain respiratory muscles, which may impair the cough reflex, thereby contributing to a poor prognosis in patients with SARS-CoV-2. Meta-analyses including a cohort of younger patients (mean age 58 years) (Zhang et al., 2020) and older patients in nursing homes (mean age 84 years) (Rutten et al., 2020) showed an association between Parkinson's disease and mortality from COVID-19, but this association was not found in an inpatient geriatric cohort (Zerah et al., 2021). Our study showed an association (p = .04) between Parkinsonism and a higher RoM, but this association was not independent.
Patients with advanced Parkinson's disease appear to be particularly vulnerable (Fearon, 2021). Further studies with large cohorts of geriatric patients with COVID-19 and advanced Parkinson's disease are needed to provide more information.
Cognitive impairment is a common condition in older patients. Studies showing an association between cognitive impairment and mortality (De Smet et al., 2020; Genet et al., 2020; Graham et al., 2020; Mendes et al., 2020; Ramos-Rincon et al., 2021; Steinmeyer et al., 2020; Zerah et al., 2021) consisted of very old patients (average age between 80 and 87 years) with a high prevalence of dementia (ranging from 30% to 88%). Moreover, our cohort of older patients with neuropsychiatric conditions may have had more aggressive neurological conditions than dementia, conditions that may be associated with an even higher risk of mortality.
Mortality risk factors related to new-onset neurological symptoms or disorders during the course of COVID-19 infection
A study of adult patients aged 18 years or older (mean age 63) showed that coma predicted death in patients hospitalized for COVID-19, independent of age (Boehme et al., 2022). In studies of older patients, impaired consciousness has often been integrated into a score (Zerah et al., 2021), but it has rarely been examined as an individual symptom (Annweiler et al., 2021; Sun et al., 2020) or in association with mortality. We found that impaired consciousness was associated with a higher RoM, and this association remained significant after stepwise analysis (p = .002). Impaired consciousness is often caused by severe brain damage or by serious non-neurological conditions including respiratory failure or even multiorgan dysfunction, which may explain our findings.
Encephalopathy was shown to be associated with a higher risk of mortality in a mixed (outpatients and inpatients) cohort of adult patients aged 18 years or older with COVID-19 and de novo neuropsychiatric manifestations (Delorme et al., 2021) and tended towards an association with a higher RoM (p < .1) in our study. The high prevalence (61%) of encephalopathy in our older patients explains this weak association. We found no corresponding study in older patients, but encephalopathy was the most frequent (20%) neurological complication in older patients who died from COVID-19 (Martín-Jiménez et al., 2020). A toxic-metabolic origin was suggested in most cases with encephalopathy in our study. Correction of toxic-metabolic perturbations could improve outcome for these patients.
Delirium is a common complication of COVID-19 in older patients and has been shown to be associated with an increased risk of mortality (Shao et al., 2021). Studies showing no association between delirium and mortality (Vrillon et al., 2020, 2021) seemed to be more homogeneous, with a higher prevalence of delirium (48%-82%) compared with the prevalence found in a meta-analysis (24% for all patients and 28% for patients aged over 65 years) (Shao et al., 2021).
Mortality risk factors related to biological markers during the course of COVID-19 infection
An N/L ratio ≥ 9.9 strongly and independently predicted RoM in our study. Its robust value in predicting COVID-19 severity and mortality has been demonstrated in adult and younger patients aged 18 years or older (median age 48 years, Liao et al., 2020; mean age 53, S. Wang et al., 2021), with, interestingly, a similar cut-off value for the N/L ratio (> 9.13) (Liao et al., 2020). A high N/L ratio may reflect an increased inflammatory response, an immune imbalance, or a combination of the two. The ratio can remain significant in predicting mortality risk in inpatients with COVID-19 even when neutrophil or lymphocyte counts alone do not (Y. S. Wang et al., 2021). However, this ratio has rarely been studied in older people. One study included the N/L ratio in a 10-item score which had a good predictive value for in-hospital death in older adults aged 60 years or older. Another study in patients aged 70 years or older showed an increase in the likelihood of death in patients with a higher N/L ratio, but this study included few symptoms of COVID-19, no neurological symptoms, and only two biological variables. Here, we showed that an N/L ratio ≥ 9.9 predicted an increased RoM, independently of several other neurological and non-neurological variables.
This variable is particularly simple to obtain because any entry blood workup includes a blood cell count, and the ratio is rapid to calculate.
Thrombocytopenia was also found to be an important independent mortality risk factor in our study. In addition, a lower prothrombin time (expressed in percentage) was associated with a higher RoM on univariate analysis (p = .04). These findings echo the results of a study in an adult population aged 18 years or older (Liao et al., 2020), where a similar coagulopathy profile was associated with mortality. Previous studies in older patients did not show an association between platelet count and mortality (Neumann-Podczaska et al., 2020; Steinmeyer et al., 2020; Vrillon et al., 2020, 2021; Zerah et al., 2021), but one study showed a significant decrease in platelets in nonsurvivors (L. Wang et al., 2020). Coagulation laboratory markers vary greatly over the course of COVID-19, and platelet count decreased in patients with progressively severe illness (Liao et al., 2020), which may explain these conflicting results, as markers may have been tested at different points during the disease. It is unclear which mechanisms lead to thrombocytopenia in COVID-19, but thrombocytopenia is common in viral infections, possibly due to immunological platelet destruction, inappropriate platelet activation and consumption, and impaired megakaryopoiesis (Amgalan & Othman, 2020).
We found that lymphocyte count was associated with a higher RoM, but the reference value for lymphopenia < 1.5 × 10⁹/L (92% of our patients) was not. A lower threshold of lymphopenia < 0.8 × 10⁹/L significantly predicted mortality in older patients (Ramos-Rincon et al., 2021; Steinmeyer et al., 2020). Moreover, the median lymphocyte count was very low in nonsurvivors, between 0.51 × 10⁹/L and 0.57 × 10⁹/L, in our study and in other older populations. Studies in younger patients (mean age 62-68 years) (Yan et al., 2020) showed that a low lymphocyte count was associated with death, and the value in nonsurvivors was very low (0.33 × 10⁹/L) compared with survivors (0.97 × 10⁹/L). We suggest that when evaluating the severity and risk of mortality in patients with COVID-19, clinicians should consider a lower lymphocyte count threshold.
Limitations
Our population does not represent a general geriatric population, but our findings will be useful for older patients seen in neurological, psychiatric or specific geriatric settings. Geriatricians and neurologists often encounter in real-life conditions older patients with both new symptoms and significant medical and neurological histories. Our findings therefore can be applied to a real-world setting and aid neurologists treating older patients and geriatricians who manage patients with neurological disease. The limited sample size and missing data did not allow for inclusion of all possible variables into a multivariate analysis. Data were retrospectively collected which can lead to bias.
CONCLUSION
In this study, we identified risk factors for mortality in older inpatients with COVID-19 and neuropsychiatric conditions. In addition, we reported a wide spectrum of new neuropsychiatric manifestations that can occur during COVID-19 in older patients. These findings may be helpful for neurologists who manage older patients and geriatricians who treat patients with neuropsychiatric conditions.
ACKNOWLEDGMENTS
The authors thank the CoCo-Neurosciences study group for their participation in the data collection (the list of the study group is given in the supporting information). Degos has received research support from Orkyn, Elivie, and Merz, honoraria for speeches from Ipsen, and a travel grant from Orkyn outside of this work. The other authors report no disclosures.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
PEER REVIEW
The peer review history for this article is available at https://publons.com/publon/10.1002/brb3.2787
Effectiveness of a new one-hour blood pressure monitoring method to diagnose hypertension: a diagnostic accuracy clinical trial protocol
Introduction 24-hour ambulatory blood pressure monitoring (ABPM) is the gold standard diagnostic method for hypertension, but has some shortcomings in clinical practice while clinical settings often lack sufficient devices to accommodate all patients with suspected hypertension. Home blood pressure monitoring (HBPM) and office blood pressure monitoring (OBPM) also have shortcomings, such as the white coat effect or a lack of accuracy. This study aims to study the validity of a new method of diagnosing hypertension consisting of monitoring blood pressure (BP) for 1 hour and comparing it with OBPM and HBPM and examining the sensitivity and specificity of this method compared with 24-hour ABPM. The patient experience will be examined in each method. Methods and analysis A minimum sample of 214 patients requiring a diagnostic test for hypertension from three urban primary healthcare centres will be included. Participants will undergo 24-hour ABPM, 1-hour BP measurement (1-BPM), OBPM for three consecutive weeks and HBPM. Patients will follow a random sequence to first receive 24-hour ABPM or 1-hour ABPM. Daytime 24-hour ABPM records will be compared with the other monitoring methods using the correlation coefficient and Bland Altman plots. The kappa concordance index and the sensitivity and specificity of the methods will be calculated. The patient’s experience will be studied, with selected indicators of efficiency and satisfaction calculated using parametric tests. Ethics and dissemination The protocol has been authorised by the research ethics committee of the Hospital Clinic of Barcelona (Ref. HCB/2014/0615): protocol details and amendments will be recorded and reported to ClinicalTrials.com. The results will be disseminated in peer-reviewed literature, and to policy makers and healthcare partners. Trial registration NCT03147573; Pre-results.
Background
Hypertension, a leading cardiovascular risk factor that causes premature morbidity and mortality, is associated with an increased risk of cerebrovascular disease (haemorrhagic or ischaemic), coronary heart disease, heart failure, chronic renal failure, peripheral vascular disease, cognitive impairment and premature death. 1 2 In high-income countries, the age-standardised prevalence of hypertension rises to 28.5% (95% CI 27.3% to 29.7%). 3 Current evidence suggests that patients with systolic blood pressure (SBP) ≥140 mm Hg and/or diastolic blood pressure (DBP) ≥90 mm Hg require treatment to reduce the risk of cardiovascular and kidney disease. 4 However, not all patients with these values have a high cardiovascular risk, sometimes due to factors such as white coat hypertension. 24-hour ABPM measures BP every 20 or 30 min during 24 hours. 4 This identifies differences between day and night and the risk associated with hypertension. 7 8 However, the high incidence of hypertension means it cannot be carried out in all patients who require it, and 24-hour ABPM is not always available for all primary healthcare patients and is uncomfortable. 9 HBPM measures BP at home or in the pharmacy. Determinations may be made by patients or relatives after receiving instruction. 10 This method avoids bias due to the white coat effect. However, the benefits may be lost due to incorrect application of the technique or a lack of standardisation of the measuring equipment (measurement with a mercury sphygmomanometer is still used). 11 OBPM consists of taking BP measurements at the clinic over several days. After determining the reference arm, three determinations are made once a week for three consecutive weeks and the mean of these determinations is calculated. The disadvantages of OBPM are the overdiagnosis of hypertension, unnecessary treatments in 15%-30% of patients, 8 the inability to detect sudden changes in BP, measurement errors, the limited number of records that can be made, and the risk of confusing isolated hypertension, the white coat effect and the lack of correlation with real BP. 12 13 Other drawbacks are the lack of data on its prognostic value and definition. 7 14 15 However, despite its disadvantages, clinicians often perceive that OBPM is controllable and verifiable, and therefore it persists as the preferred method, including in large population studies and pharmacological clinical trials requiring a diagnosis of hypertension. 12 16 Recently, two studies have examined the effectiveness of two new methods which use a modified 24-hour ABPM device. One takes a set of measurements over 30 min and the other over 60 min. 17 18 These methods have some of the advantages of OBPM: the clinician can verify and control the process because the patient is in the clinic, while avoiding the frequent white coat effect. In 1-hour BP measurement (1-BPM), mean daytime BP values were similar to 24-hour ABPM, with a sensitivity of 85.2% (95% CI 67.5% to 94.1%) and a specificity of 92% (95% CI 83.6% to 96.3%) for masked white coat hypertension. 18 However, 1-BPM requires further study for use in clinical practice, and the patient experience in terms of handling, satisfaction, understanding and adherence is unclear. Patient involvement in the evaluation is required to improve the quality of care, and to increase effectiveness and the patient's sensitivity to their health status according to their perceived needs.
19 However, this has not been widely reported in previous studies and current BP monitoring methods have not been studied in the light of patient experience. 13 The European, North American and British guidelines recommend the diagnosis should be made after verifying BP values with an ambulatory method involving various measurements or, at best, using 24-hour ABPM. [4][5][6] However, all methods of measuring hypertension in daily clinical practice have shown important shortcomings related to the accuracy of monitoring, and 24-hour ABPM is not feasible in all cases. 9 The aim of this study is to examine the accuracy of the 1-BPM method against the out-of-office and office methods, taking 24-hour ABPM as the gold standard.
Methods/design
Study objectives
To study the validity (sensitivity and specificity) of 1-BPM versus OBPM (in three visits) and HBPM, using daytime 24-hour ABPM as the reference, and to examine patient acceptance and experience of the methods and to compare the cost of 1-BPM with that of the other methods.
design
This study protocol presents a diagnostic accuracy study of BP measurement using ABPM. Patients requiring a diagnostic test for high BP will be prospectively included. All participants will undergo 24-hour ABPM. The sensitivity and specificity of 1-BPM, HBPM and OBPM (at three visits) will be studied against this reference. The trial registration identifier on ClinicalTrials.gov is NCT03147573, registered 10 May 2017.
study population
Patients assigned to a primary healthcare centre (PHC) referred for a diagnostic test for hypertension by the family physician.
Inclusion and exclusion criteria
Inclusion criteria will be as follows: (1) age ≥18 years, (2) patients seen routinely in the study PHCs, (3) voluntary participation and (4) family physician referral for diagnosis of high BP. Exclusion criteria will be as follows: (1) severe physical or cognitive limitations, (2) previous episodes of arrhythmia, atrial fibrillation with rapid ventricular response, frequent ventricular extrasystoles or other arrhythmias, (3) Parkinson's disease or any other condition causing permanent tremor, (4) arm circumference >42 cm, (5) arteriovenous fistula in the arm, (6) mental disorders or intolerance of the BP measurement method and (7) inability to attend the study PHC.

1-BPM
Measurement of BP for 1 hour using the 24-hour ABPM device. BP will be measured every 5 min. During the hour, the patient will remain in a quiet room in the primary healthcare centre, without walking actively, eating or smoking. After 1 hour, the device and arm cuff will be removed.
OBPM
A standard device will be used (Omron m6 AC, Japan). The patient must not have smoked tobacco, drunk coffee or other caffeine-containing drinks in the previous 30 min, or had a large meal in the previous 2 hours, and must not complain of pain or anxiety or have psychomotor agitation.
The patient will be seated with their arms crossed, the legs uncrossed and the feet touching the floor. DBP and SBP will be determined in the two arms, and the arm with the highest BP will be selected; if both are equal either will be chosen. After 5 min, a new determination will be made, which will be the one recorded in the data collection notebook. In following visits, BP will be determined following the same procedure but only in the reference arm.
HBPM
HBPM will be carried out according to daily clinical practice. All participants will be asked to provide at least three BP measurements over 3 days. Nurses will check whether the participant has an adequate BP monitor (a semiautomatic, validated BP monitor); if they do not have access to a validated BP device at home, they will be asked to measure the BP in the pharmacy. Participants will receive instructions and an information sheet on the BP measuring method. BP should be measured in a quiet room without noise or interruptions. Patients should be relaxed and should not drink, eat, smoke or exercise during the previous half hour. Patients should not talk during the measurement and should rest for 5 min beforehand. They should sit comfortably with the back resting on the back of the chair, with the legs uncrossed and arms not constrained by clothing. The cuff should be placed on the arm, 2 or 3 cm above the elbow, leaving the hand on the back and the elbow slightly flexed at the height of the heart. Two measurements will be made at least 2 min apart and the mean of the two will be used. Measurement in the pharmacy will follow the recommendations of pharmacy employees, as in usual practice. Table 1 summarises the number of days/hours/visits needed, measurements obtained and the threshold used to diagnose hypertension in each of the four methods.
study procedures
The study will have a total of three visits. Figure 1 presents an overview of the tasks carried out at each study visit and a flowchart of the process. The tasks are shown below.
Visit 0
Patients with inclusion criteria seen by PHC physicians or nurses will be invited to participate. If the potential participant shows interest, they will receive information on the study requirements. Any patient doubts will be answered and clinicians will offer a brochure with detailed information on the study. At this visit, the nurse will measure the BP and determine the reference arm, will measure the first OBPM and will request a HBPM. The next day, the nurse will call the patient to confirm inclusion and arrange the next visit.
Visit 1
Between 5 and 7 days after visit 0, the participant will attend visit 1 with the nurse. All nurses will be instructed by the study coordinator in data collection and the study procedures. Before starting the procedures, they will review the study aims and the patient will be asked to sign an informed consent. Thereafter, the nurse will record anthropometric variables and carry out the second determination of OBPM, and patients will undergo 1-BPM or 24-hour ABPM according to a randomisation sequence. If 24-hour ABPM is chosen, a visit will be arranged from Monday to Thursday (not Friday, because the device must be removed the next day at the same time). At the end of this visit, an electrocardiographic record will be made. A new HBPM will be requested.
Visit 2
Between 5 and 7 days after visit 1, the third OBPM determination will be made. At this visit, 24-hour ABPM will be administered to patients who underwent 1-BPM during visit 1, and 1-BPM to patients in whom 24-hour ABPM was carried out during visit 1. The nurse will collect data on the variables concerning the patient's experience of the four methods using the patient experience sheet (online supplementary appendix 1). The nurse will record the HBPM measurements participants have provided. After completion, the patient will be referred to their general practitioner to assess the results.
Participant discontinuation
Participants who start any drug treatment for hypertension before visit 2, or who do not attend any of the visits during the three consecutive weeks, will be withdrawn from the study. To promote retention, the nurse will phone participants before the visit and will change the date of the visit to another day of the same week if the participant requests it.
1-BPM and 24-hour ABPM sequence randomisation As 1-BPM and 24-hour ABPM use the same device, there might be some adaptation effect, which will be mitigated by randomisation: at visit 1 and visit 2, each participant will be allocated randomly to 1-BPM or 24-hour ABPM.
Figure 1
Research visits and content of study assessments (left) and flow chart of study design (right). 1-BPM, 1-hour ambulatory blood pressure monitoring; 24-hour ABPM, 24-hour ambulatory blood pressure monitoring; HBPM, home blood pressure monitoring; OBPM, office blood pressure monitoring.
The randomisation sequence will be generated previously and will not be changed. We do not foresee that any other randomisation method will be needed to examine the study outcomes.
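Purely as an illustration of what a pre-generated, fixed and balanced allocation sequence could look like (this is not the study's actual procedure; the sample size of 214 and the two test orderings are taken from the protocol, while the function, seed and labels are assumptions), a minimal sketch in Python follows.

```python
import random

def make_allocation_sequence(n_participants=214, seed=20170510):
    """Pre-generate a balanced, fixed allocation sequence.

    Each participant is assigned the order in which they receive the two
    device-based tests: 1-BPM first or 24-hour ABPM first.
    """
    rng = random.Random(seed)          # fixed seed so the sequence never changes
    half = n_participants // 2
    sequence = (["1-BPM first"] * half +
                ["24-hour ABPM first"] * (n_participants - half))
    rng.shuffle(sequence)              # random order, balanced overall
    return sequence

allocation = make_allocation_sequence()
print(allocation[:5])                  # allocation for the first five participants
```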
statistical analysis
Categorical variables will be presented as absolute frequencies and percentages and continuous variables as mean and SD, or median and IQR. The Shapiro-Wilk normality test will be used to check the normal distribution of variables. Pearson's correlation coefficient will be calculated for 1-BPM, HBPM and OBPM against daytime 24-hour ABPM. The Bland-Altman method will be applied, plotting the differences between the measurements against their mean, to confirm independence between the differences obtained with each method and the magnitude of the measurements. The prevalence of white coat hypertension and masked hypertension will be calculated. The proportion of well-classified participants with 1-BPM will be estimated with respect to 24-hour ABPM, HBPM and OBPM, and the kappa index will be calculated to measure the degree of agreement between the four methods in classifying participants into subpopulations of hypertensive patients. Sensitivity, specificity and positive and negative predictive values will be calculated for the diagnosis of hypertension subtypes. All concordance and correlation results will be based on the means of SBP and DBP readings. To analyse the cost, the mean of the sum of all the cost variables will be analysed and each variable will be analysed separately by calculating the differences between means by analysis of variance. Likewise, the time necessary to obtain the result of the test and the validity with respect to 24-hour ABPM will be analysed. Values of p<0.05 will be considered statistically significant. The statistical analysis will be performed using the R V.3 for Windows statistical program.
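The protocol specifies R for the analysis; purely as an illustration of the planned comparisons (Bland-Altman differences, kappa agreement, and sensitivity/specificity against 24-hour ABPM), a sketch in Python is shown below. The arrays and the single 135 mm Hg systolic cut-off are hypothetical placeholders, not values from the protocol or its Table 1.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical mean systolic BP (mm Hg) per patient for two methods.
sbp_abpm_day = np.array([128.0, 141.5, 133.2, 150.1, 122.4])   # daytime 24-hour ABPM (reference)
sbp_1bpm     = np.array([130.5, 139.0, 136.8, 147.9, 125.0])   # 1-BPM

# Bland-Altman quantities: difference between methods vs mean of the pair.
diff = sbp_1bpm - sbp_abpm_day
bias = diff.mean()
loa_low  = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)

# Classify hypertension with an assumed 135 mm Hg systolic cut-off for both methods.
ref  = sbp_abpm_day >= 135
test = sbp_1bpm     >= 135

kappa = cohen_kappa_score(ref, test)
tp = np.sum(test & ref); fn = np.sum(~test & ref)
tn = np.sum(~test & ~ref); fp = np.sum(test & ~ref)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"bias={bias:.1f} mm Hg, LoA=({loa_low:.1f}, {loa_high:.1f})")
print(f"kappa={kappa:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```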
sample size calculation
A recent study comparing 1-BPM with 24-hour ABPM found that 1-BPM classified 87.3% of patients correctly.
To achieve a precision of 5% in the estimate of a proportion with 95% CIs, assuming the proportion is 87.30%, and taking into account an expected dropout rate of 20%, a minimum of 214 patients will be required.
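The figure of 214 is consistent with the standard normal-approximation formula for estimating a proportion with a given precision, inflated for the expected dropout. A quick check (an illustration, not part of the protocol):

```python
import math

p, d, z = 0.873, 0.05, 1.96          # expected proportion, desired precision, 95% normal quantile
dropout = 0.20

n_core = math.ceil(z**2 * p * (1 - p) / d**2)   # patients needed for +/-5% precision
n_total = math.ceil(n_core / (1 - dropout))     # inflate for 20% expected dropout

print(n_core, n_total)               # 171 214
```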
Patient involvement
There was no patient or public involvement in the study design. The study aims to analyse the patient experience, including the time required by the patient to obtain the result (including travel time to the centre), the comfort of the test and opinions on recommending it to another patient.
Ethics and dissemination
The protocol presents a health technology trial to study a new test using a medical device, not a medicine. The procedures will follow Spanish and Catalan laws. Researchers will follow the ethical standards of the Declaration of Helsinki for biomedical studies and the activities described will follow the Code of Good Practice in Clinical Research. The protocol has been authorised by the research ethics committee of the Hospital Clinic of Barcelona (Ref. HCB/2014/0615). Authorisation will be sought from the research ethics committee for protocol modifications, which will be reported to ClinicalTrials.gov. The results will be presented in the scientific literature, and to healthcare partners.
Confidentiality and data management
All study-related information will be stored securely at the study sites. All participant information will be stored in locked file cabinets in areas with limited access. Information from participants will be identified by a coded identification number only to maintain participant confidentiality. All records that contain names or other personal identifiers, such as locator forms and informed consent forms, will be stored separately from study records identified by code number. All local databases will be secured with password-protected access systems.
Discussion
Hypertension is the most frequent reason for consultation in primary healthcare, representing nearly 6% of all visits to Spanish family physicians. 22 The diagnosis of hypertension depends on BP measurements. However, these vary widely, and include several hidden effects, including the white coat effect. Two of the most important international guidelines suggest using 24-hour ABPM. 4 6 Despite the validity and reliability of this method, it has drawbacks: a high cost, the lack of instruments to carry out the procedure in all patients, the discomfort of the patient during sleep (the cuff activates every 30 min at night) and the reluctance of some patients to wear the device for 24 hours. 8 23 Therefore, in daily clinical practice, nurses and physicians use OBPM and HBPM which, while more feasible, lead to other problems such as overdiagnosis. 1-BPM seems to have a sensitivity and specificity in the diagnosis of hypertension close to that obtained with daytime 24-hour ABPM and avoids the white coat effect and other BP effects. Another important aspect that will be examined in this study is the patient experience.
In daily practice, clinicians should use the best method for any test requiring a medical device; beyond the sensitivity and specificity values, any diagnostic method requires examination of the patient experience to evaluate the global evidence. 24

Limitations of the study
The validation of 1-BPM will not allow night-time recording of BP, a relevant issue in the assessment of cardiovascular risk. Nor does it seek to substitute 24-hour ABPM. However, if the working hypothesis is verified, the new method will provide a new valid, effective diagnostic alternative, acceptable to the patient, easily applicable in primary healthcare, and more reliable than HBPM and OBPM.
Mixed dark matter: matter power spectrum and halo mass function
We investigate and quantify the impact of mixed (cold and warm) dark matter models on large-scale structure observables. In this scenario, dark matter comes in two phases, a cold one (CDM) and a warm one (WDM): the presence of the latter causes a suppression in the matter power spectrum which is allowed by current constraints and may be detected in present-day and upcoming surveys. We run a large set of $N$-body simulations in order to build an efficient and accurate emulator to predict the aforementioned suppression with percent precision over a wide range of values for the WDM mass, $M_\mathrm{wdm}$, and its fraction with respect to the totality of dark matter, $f_\mathrm{wdm}$. The suppression in the matter power spectrum is found to be independent of changes in the cosmological parameters at the 2% level for $k\lesssim 10 \ h/$Mpc and $z\leq 3.5$. In the same ranges, by applying a baryonification procedure on both $\Lambda$CDM and CWDM simulations to account for the effect of feedback, we find a similar level of agreement between the two scenarios. We examine the impact that such suppression has on weak lensing and angular galaxy clustering power spectra. Finally, we discuss the impact of mixed dark matter on the shape of the halo mass function and which analytical prescription yields the best agreement with simulations. We provide the reader with an application to galaxy cluster number counts.
where M_wdm is the WDM particle mass, Ω_wdm its density parameter and h the dimensionless Hubble parameter. Free-streaming becomes astrophysically relevant for masses of O(keV) and below: typical candidates are gravitinos [26][27][28] and sterile neutrinos [29,30]. Currently, the tightest constraints come from Lyman-α data [31], in particular from the combination of the XQ-100 flux power spectrum and HIRES/MIKE, which yields M_wdm > 5.3 keV at 95% confidence level (C.L.). The latter limit does however depend on a given set of astrophysical priors and on assumptions on the reionization models and on the thermal history of the intergalactic medium. In Ref. [32], the authors used strong lensing to probe the free-streaming scale and the subhalo mass function: their result is M_wdm > 5.2 keV at 95% C.L., while a recent study of Milky Way satellites [33] found M_wdm > 2.02 keV at 95% C.L., a limit that is tightened to M_wdm > 3.99 keV when modelling the effect of reionization. More recently, in Ref. [34] strong gravitational lensing with extended sources, the Lyman-α forest, and the number of luminous satellites in the Milky Way were analyzed jointly in a consistent framework in order to find lower limits on thermal WDM and sterile neutrinos, finding M_wdm > 6.048 keV at 95% C.L. With these numbers in mind, upcoming large-scale structure surveys, such as Euclid, 1 DESI, 2 and LSST at the Vera C. Rubin Observatory, 3 are unlikely to improve these constraints much using galaxy clustering and weak lensing only, due to the fact that these observables will not probe the scales where the suppression in the density perturbations, and in turn in the power spectrum, occurs.
However, a scenario where DM comes in two phases, a cold one and a warm one, called mixed dark matter or cold and warm dark matter (CWDM), constitutes an intriguing possibility and a very simple extension of the ΛCDM model for which weak lensing and galaxy clustering could provide some interesting constraints. In this picture we have, besides the WDM mass, an additional parameter, which is the WDM fraction with respect to the total DM amount:

$$f_\mathrm{wdm} = \frac{\Omega_\mathrm{wdm}}{\Omega_\mathrm{cdm} + \Omega_\mathrm{wdm}},$$

where Ω_cdm and Ω_wdm are the density parameters for CDM and WDM, respectively. These scenarios typically display a suppression in the power spectrum which is shallower than for WDM alone, except that it can occur already at relatively large scales. For instance, the combination of a low M_wdm with a low f_wdm is in agreement with observations and can in principle be detected by large-scale structure surveys. Previous works on CWDM have focused on the physics at the halo/galactic scale, highlighting how models sharing similar free-streaming lengths but with different combinations of f_wdm and M_wdm can produce halos with different properties below masses of 10^{11} M_⊙/h, due to the different behaviour of the power spectra at small scales [35,36]. Another work [37] used Lyman-α data in combination with WMAP5 [38] to constrain CWDM models, finding that f_wdm < 0.2 is allowed independently of the WDM mass. More recently, Ref. [39] carried out an extensive study of mixed dark matter models, fixing the WDM temperature to the Standard Model neutrino one in order to obtain the correct f_wdm and also considering the fermionic or bosonic nature of these particles. They found, combining data from Planck, BOSS DR11 and Milky Way satellites, f_wdm < 0.29 (0.23) for fermions (bosons) in the mass range 1−10 keV and f_wdm < 0.43 (0.45) in the mass range 10−100 keV. This is the first of a series of two papers in which we investigate the impact of CWDM models on the main cosmological large-scale structure observables that will be probed in upcoming surveys: in particular, we will focus on the angular power spectra of galaxy clustering and cosmic shear, on the halo mass function, as well as on the reconstructed (theoretical) matter power spectrum. While for both galaxy clustering and weak lensing the link to the observable is clear, the halo mass function cannot be directly probed and further assumptions (on e.g. the stellar content or luminosity of the galaxies residing in dark matter halos) should be made. However, using the few existing theoretical prescriptions for reproducing the mass function, we decided to provide just a qualitative investigation of observed quantities, in particular the cluster number count.
To do so, we run a large set of high-resolution cosmological N -body simulations over a wide range of values in the plane M wdm − f wdm in order to build an emulator able to predict the suppression in the non-linear matter power spectrum with respect to ΛCDM with percent precision. This would allow us to improve upon currently existing fitting functions for CWDM, already provided in Ref. [40]. However, the focus of that paper was on strong gravitational lensing and therefore the scales involved were much smaller than this work. Those fitting functions were obtained by using only 6 different models in the M wdm − f wdm plane in rather small boxes (10 Mpc/h) and this does not allow us to connect properly to the linear regime. We also compare our results with the fitting formula by Ref. [41] for WDM only. In a companion paper [42] we will present a detailed Markov Chain Monte Carlo (MCMC) forecast on CWDM in future surveys. This paper is organized as follows. In Section 2 we present the set of simulations we run and use for our theoretical predictions. In Section 3 we focus on the matter power spectrum: we describe how the suppression due to CWDM looks like, we show how we build the emulator, we investigate possible dependencies on the cosmological parameters and check whether the baryonification effect [43,44] is independent from the DM model assumed (ΛCDM or CWDM). In the following Sections we investigate and discuss the impact of CWDM models on some fundamental cosmological observables such as cosmic shear spectra (Section 4) and halo mass functions (Section 5). Finally, in Section 6 we draw our conclusions.
Simulations and dataset
The creation of an emulator requires a thorough sampling of the whole parameter space: this implies that a large number of different simulations has to be run for a wide range of values of f_wdm and M_wdm. On the other hand, we also want to use cosmological observables for which a reliable theoretical prescription already exists, like the halo mass function [45,46]. We therefore split our simulation set into two sub-sets. The "main" set samples the (f_wdm, M_wdm) parameter space on an almost regular grid (20 points total). We rely on boxes of 80 h^{-1} comoving Mpc of linear size: this allows us to reconnect with the linear regime at the largest scales, while not being subject to numerical fragmentation or resolution effects (see below) at the smallest scales of interest for future surveys (k ≲ 10 h/Mpc). Then, we choose the values f_wdm = 0.25, 0.5, 0.75, 1 and M_wdm = 0.1, 0.3, 0.5, 1.5 keV as our parameters. The range of masses has been chosen as follows. For masses larger than 1.5 keV the differences with ΛCDM at the power spectrum level are below the percent level at scales k ≲ 10 h/Mpc (we also run a set with M_wdm = 3.0 keV as a further check); for masses smaller than 0.1 keV and f_wdm = 1 the suppression in the matter power spectrum starts to occur at wavenumbers comparable to the box size itself, also altering the value of σ_8, which we want to keep fixed for all of our runs. On the other hand, the "extra" set randomly samples the parameter space (54 points in total) and its only purpose is to populate the training set for our matter power spectrum emulator.

Figure 1: Sampling of the (f_wdm, M_wdm) parameter space covered by our simulations. Red dots mark the "main" set; this set is mainly used to test the validity of theoretical predictions for the observables we consider here. Blue dots mark the "extra" set, created with the purpose of populating randomly the parameter space to train the emulator for the suppression in the matter power spectrum with respect to the ΛCDM model.

Figure 1 shows all the simulations that we run. In particular, red dots correspond to the "main" set, while the blue ones refer to the "extra" set. Our simulations have been run with the tree-particle mesh (TreePM) code Gadget-III [47]. The simulations follow the gravitational evolution of 512^3 particles from an initial redshift of z_in = 99 to z = 0. All the particles were initialized assuming they were CDM and neglecting thermal velocities: we explicitly checked that for M_wdm ≥ 0.3 keV the inaccuracy introduced by this assumption does not exceed 1.5% in the overall suppression of the matter power spectrum for all redshifts and scales k ≲ 5 h/Mpc. Thus, the differences among the different models are to be found in the initial power spectrum, computed with CLASS [48], and therefore in the initial displacement field generated with a modified version of the N-GenIC software, 4 using second-order Lagrangian perturbation theory.
The fiducial cosmological parameters are set to Ω_m = Ω_cdm + Ω_wdm + Ω_b = 0.315, Ω_b = 0.049, h = 0.674, n_s = 0.965, σ_8 = 0.811. With this choice of parameters, a single CDM particle has a mass of ∼ 3.3 × 10^8 M_⊙/h. For each simulation we run four different realizations: two standard ones and two where the phases of the initial density field have been flipped. We do so to reduce effects due to small-scale cross-correlations: as such, when computing matter power spectra and halo mass functions, we will always take the average of the 4 realizations. We take 8 different snapshots, equally spaced from z = 3.5 to z = 0. We compute the matter power spectra using the Pylians3 code 5 . We assign particles to a grid of size 1024 using the Cloud-In-Cell mass-assignment scheme. This ensures that the Nyquist frequency k_Nyq ≈ 40 h/Mpc is much larger than the maximum k ∼ 10 h/Mpc we are interested in for our analysis. To further make sure that our results are converged at the smallest scales, we run an additional simulation set with 1024^3 particles while keeping the same box size. Figure 2 summarizes this test. We plot the percent difference in the suppression of the matter power spectrum when considering the simulations with 512 and 1024 particles per side. As can be seen, for all the redshifts considered and for all the scales of interest the difference falls within ∼ 0.5%.
The halo mass function is computed with the ROCKSTAR software [49]. We first identify all halos with masses larger than 10^{10} M_⊙/h through a Friends-of-Friends (FoF) algorithm, with linking length 0.28 times the mean interparticle separation. The virial mass in this paper is defined as the mass enclosed in a region where the density is 200 times larger than the critical density of the Universe. We operate a cut in sphericity, i.e. we consider only halos with axis ratios c/a > 0.24 and b/a > 0.34, where a > b > c are the three semi-axes. This algorithm has been shown to remove an artificial population of very elongated proto-halos which is typical in WDM scenarios but rather insignificant in ΛCDM [46,50]. Lastly, we operate a second cut in mass and keep only halos with a mass larger than 10% of the free-streaming mass, in order to avoid effects from numerical fragmentation.
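As an illustration of the catalogue cuts just described (not the actual ROCKSTAR post-processing scripts), the sphericity and mass selections could be applied to arrays of halo properties along these lines; the function name and the toy values are placeholders, and the free-streaming mass of each model is assumed to be known.

```python
import numpy as np

def clean_halo_catalogue(mass, b_over_a, c_over_a, m_freestream,
                         m_min=1.0e10, fs_fraction=0.1):
    """Apply the shape and mass cuts used to remove spurious WDM proto-halos.

    mass               : halo masses (M200c) in Msun/h
    b_over_a, c_over_a : axis ratios with a > b > c
    m_freestream       : free-streaming mass of the model in Msun/h (assumed known)
    """
    keep = (mass > m_min)                            # resolution cut: M > 1e10 Msun/h
    keep &= (c_over_a > 0.24) & (b_over_a > 0.34)    # sphericity cuts
    keep &= (mass > fs_fraction * m_freestream)      # drop numerically fragmented halos
    return keep

# toy catalogue
mass = np.array([2e10, 5e11, 3e10, 8e9])
b_a  = np.array([0.6, 0.4, 0.2, 0.7])
c_a  = np.array([0.5, 0.3, 0.1, 0.6])
mask = clean_halo_catalogue(mass, b_a, c_a, m_freestream=1e11)
print(mask)    # [ True  True False False]
```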
In Figure 3 we show a small region (10×10×10 Mpc/h) of three different simulations, each with the same seed. The left one is a pure ΛCDM simulation, the one on the right is a pure WDM simulation with M wdm = 0.5 keV, while the central one is a CWDM simulation also with M wdm = 0.5 keV but f wdm = 0.5. Four different snapshots at z = 0, 1, 2, 3 are shown from top to bottom. The free-streaming length at z = 0 is roughly 0.4 Mpc/h in pure WDM, and 0.5 Mpc/h in CWDM: below this scale structures are less clustered and give rise to a suppression in the matter power spectrum and, correspondingly, on the halo mass function.
Matter power spectrum
We start by analyzing the matter power spectrum. In this Section, we first discuss what the suppression in the matter power spectrum looks like as a function of f_wdm, M_wdm, and redshift (Section 3.1). We compare the results of our simulations to the theoretical prediction from an emulator whose construction and testing is presented in Section 3.2. We also show how these results extend and improve previous works in predicting the non-linear power spectrum [40,41,51]. In Section 3.3, we assess possible dependencies of the suppression on the cosmological parameters. Finally, we show how baryonic processes are independent of the DM model assumed (Section 3.4).
Suppression of power spectrum
We summarize our results for what concerns matter power spectra in Figure 4. Each subplot shows the suppression of power with respect to ΛCDM for a given redshift (different rows) and WDM fraction (different columns). Color-coded are the different WDM masses: red for 0.1 keV, blue for 0.3 keV, green for 0.5 keV, and yellow for 1.5 keV. In particular, the dots represent the results from our simulations, while solid lines display the performance of our emulator (see Section 3.2). As a reference, with the same colors each subplot shows the various scales at which the power spectrum starts to differ from the ΛCDM one, computed as in Ref. [37]. This is referred to as the free-streaming horizon, i.e. the largest scale which is affected by WDM in cosmic history. Finally, the grey shaded area denotes k > 10 h/Mpc, a rough estimate of the scales which will not be probed by future large-scale structure surveys. As we already mentioned, free-streaming is responsible for this suppression 6 . The effect is more pronounced for smaller WDM masses (because of the larger thermal WDM velocities), for larger WDM fractions (as the amount of matter subject to free-streaming increases), and for increasing redshift (as the free-streaming length scales as 1/k_fs ∝ (1 + z)^{1/2} during matter domination, and as the high-redshift regime is closer to linear behaviour, where primordial differences in matter power are more pronounced than in the non-linear regime).
Emulator: building, testing and performance
In order to find a model that best describes observations, we use a Markov chain Monte Carlo (MCMC) framework. This method samples the parameter space, comparing theoretical predictions with data. The typical number of samples drawn in a cosmological MCMC is O(10^5). However, our simulations are computationally expensive (> 5000 CPU-hours per set of parameter values). Therefore we cannot directly explore the entire CWDM parameter space.
In this section, we describe an emulator that can replace our simulation procedure. An emulator can be imagined as a regression model learnt from examples of our simulations, which is known as the training set. To create the training set, we run 74 models (both the "main" and "extra" sets) by randomly sampling the (f_wdm, M_wdm) parameter space. As described in the previous Section, we extract snapshots at 8 redshifts. The parameter space sampling is shown in Figure 1. Instead of directly emulating P_CWDM, we emulate the power spectrum ratio P_CWDM/P_ΛCDM, so that the contribution of cosmic variance at the largest scales of the simulations is removed.

Figure 3 (caption, continued): It is clearly visible that below the free-streaming length (roughly 1/20 of the size of the panel) structures and filaments become less prominent with increasing WDM fraction, especially at high redshift.

Figure 4: Each subplot shows the suppression of the matter power spectrum in CWDM models with respect to ΛCDM for a given redshift (rows) and a given f_wdm (columns). Different colors label different M_wdm: 0.1 keV in red, 0.3 keV in blue, 0.5 keV in green, and 1.5 keV in yellow. Dots represent the measurements from the "main" suite of our simulations, while solid lines display the results obtained with the emulator described in Section 3.2. Vertical colored dotted lines represent the scales at which the suppression starts to kick in, as a function of f_wdm, M_wdm, and z, as computed in Ref. [37]. The grey shaded area on the right marks the region k > 10 h/Mpc, i.e. an estimate of those wavenumbers which will not be probed by upcoming large-scale structure surveys.
Our data set is very large, as we have 592 power spectra, each evaluated at 886 wavenumbers. To make our emulator memory efficient, we pre-process the data set by reducing its dimensionality using principal component analysis (PCA). PCA is performed using the module provided in the scikit-learn package [52]. We keep the first 20 principal components (PCs). With these PCs, we can reconstruct our data set within an error of 2% for k ≲ 10 h/Mpc.
We use Gaussian Process Regression (GPR) [53] to build our emulator. Previous works have shown that GPR is apt to emulate cosmological power spectra, such as for the non-linear matter distribution [54], the Lyman-α forest [55], and the 21cm signal [56]. We use the GPR module provided in the GPy package [57]. This module finds a model that relates the input vector x = (f_wdm, M_wdm, z) to the output vector y. In our work, we will emulate the coefficients of the 20 PCs.
A Gaussian process assumes any finite number of points in a parameter space to be jointly Gaussian distributed, y ∼ N(μ, K), where K is the kernel function. This function models the similarity between the data points in the training set. There exist various choices of kernel functions [see e.g. 53]. In our work, we use the Matern kernel [53,58], which contains two hyper-parameters. These hyper-parameters can be determined from the training set. Once we have learnt the kernel function for our parameter space, we can predict the PCs at any point in the parameter space. We can then reproduce the power spectra suppression from the predicted PCs. We use 90% of our data set, selected randomly from the full set, to train the emulator, while we keep the remaining 10% to test its performance. This test set contains ∼ 60 data points. In Figure 5, we show the percentage difference between the emulated and simulated power spectra suppression for 12 representative data points from our test set. Most of the emulated suppressions are within 0.5% of the simulated ones at all scales. There are a few cases where the difference is larger, but only at high wavenumbers (k ≳ 4 h/Mpc), and it does not exceed ∼ 1.5%. As the emulation process has not seen the test set data points during training, a good prediction capability at these points hints that the emulator has learnt a generic model. Therefore this emulator can be used to interpolate within the parameter space where it is trained.
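As an illustrative sketch of the pipeline just described (PCA compression of the suppression ratios followed by Gaussian Process Regression on the PC coefficients), the code below uses scikit-learn and GPy as mentioned in the text; the random arrays are stand-ins for the actual training data, and the specific kernel choice and variable names are assumptions rather than the paper's exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
import GPy

# Placeholder training data: n_models suppression curves, each sampled at n_k wavenumbers,
# with inputs theta = (f_wdm, M_wdm, z) for each curve.
n_models, n_k = 592, 886
ratios = 0.5 + 0.5 * np.random.rand(n_models, n_k)     # stand-in for P_CWDM / P_LCDM
theta  = np.random.rand(n_models, 3)                   # stand-in for (f_wdm, M_wdm, z)

# 1) Compress each curve into 20 principal-component coefficients.
pca = PCA(n_components=20)
coeffs = pca.fit_transform(ratios)

# 2) Regress the 20 coefficients on the 3-dimensional inputs with a Matern kernel;
#    GPy treats the columns of the output as independent tasks sharing the same kernel.
kernel = GPy.kern.Matern52(input_dim=3)
gp = GPy.models.GPRegression(theta, coeffs, kernel)
gp.optimize()                                          # fit the kernel hyper-parameters

# 3) Emulate a new model: predict the PC coefficients, then invert the PCA.
theta_new = np.array([[0.5, 0.5, 0.0]])                # (f_wdm, M_wdm [keV], z)
coeffs_pred, _ = gp.predict(theta_new)
suppression_pred = pca.inverse_transform(coeffs_pred)  # emulated P_CWDM / P_LCDM curve
```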
To have an overall picture of the performance of our emulator, we turn our attention back to the solid lines of Figure 4. For all masses down to 0.1 keV we recover the correct suppression at the < 2% level for all z ≤ 3.5, for all WDM fractions and masses. Minor problems may arise at high redshift (z ≳ 3), for high fractions and small masses. In these situations the down-turn occurs at the very scale of the box size, so that some numerical errors are expected: these issues are related to the normalization of the suppression, and hence to the simulations themselves rather than to the emulator. While the accuracy of the emulator with respect to the simulations remains at the percent level, we detect a small (∼ 2%) enhancement of the power spectrum due to these effects in the range of scales comparable to the box size itself. This has a little impact on the angular power spectra of cosmic shear and galaxy clustering (see Section 4), where we have a non-physical ∼ 1% enhancement at ℓ ∼ 200 − 500. We remark anyway that this does not represent a problem for the purposes of this paper. This box-size effect only affects f_wdm ≈ 1 and the smallest masses, a region which is already excluded by far by current constraints. Moreover, this imprecision only concerns z ∼ 3, where the typical window functions for weak lensing and angular galaxy clustering are very close to zero for benchmark future large-scale structure experiments.
Previous works have already tried to give a description of the non-linear matter power spectrum in WDM or CWDM scenario. In particular, Refs [41,51] provided fitting formulae for the non-linear WDM suppression along the lines of its linear counterpart [59]. Their claim was a 2% agreement for z ≤ 2 and M wdm ≥ 0.5 keV. We confirm that the agreement with our simulations with f wdm = 1 is good, although with a slightly lower accuracy (5%) at z ≤ 2 and k < 10 h/Mpc for masses M wdm ≥ 0.3 keV. This accuracy however improves to 2% if we limit ourselves to scales k < 3 h/Mpc. We also compared our simulations to the fitting formulae provided by Ref. [40]. While these are in principle valid for both WDM and CWDM, they were obtained by comparison to N -body simulations with a much smaller boxsize (10 Mpc/h). The fit performs similarly to the one in Ref. [41] for WDM scenarios. For CWDM we find an agreement of ∼ 5% for z ≤ 2 and M wdm ≥ 0.3 keV up to scales of 10 h/Mpc, even though the largest deviations occur at low redshift. Both these previous works however fail to reproduce the suppression for M wdm < 0.3 keV already in the mildly non-linear regime.
Dependence on cosmological parameters
We investigate possible dependencies of the CWDM suppression on cosmological parameters. While we do not expect that the suppression depends on parameters such as Ω_b, h, or n_s, there might be a potential difference when we vary the overall spectrum amplitude σ_8 and the total matter content Ω_m. These two parameters are those that are better constrained by weak lensing and photometric galaxy clustering: in particular, the best constraints are typically achieved by combining the latter two into a single parameter S_8 = σ_8 (Ω_m/0.3)^α, where α is often set to 0.5 [60][61][62]. Interestingly, results from the KiDS survey highlighted some tension on this parameter when analyzing the cosmic shear in Fourier space rather than in configuration space [63][64][65]. Moreover, both of these values are in tension with Planck [66], so that extensions to the ΛCDM scenario − modifications of gravity, massive neutrinos, WDM, baryon feedback (see also Section 3.4) − are typically invoked to try to solve this issue. It is essential therefore to examine what happens in the Ω_m − σ_8 plane to the suppression of the matter power spectrum also in the CWDM scenario.
We run a further set of CWDM simulations with larger and smaller Ω_m and σ_8 values, together with the corresponding ΛCDM ones. We choose the differences to be ∆Ω_m = ±0.02 and ∆σ_8 = ±0.045. The Ω_m value is ∼ 3 times the error forecast in Ref. [67] by combining weak lensing and angular galaxy clustering. For σ_8, it corresponds to ∼ 12 times this error, but we made this choice on purpose to take conservatively into account other possible effects that can alter the overall spectrum normalization, like e.g. massive neutrinos. Our 8 different cosmologies therefore have Ω_m^+ = 0.335, Ω_m^− = 0.295, σ_8^+ = 0.856, σ_8^− = 0.766 and the combinations of the two.
Results of this further test are plotted in Figure 6, where we show the ratio between the suppression of the power spectrum, P_CWDM/P_ΛCDM, in the varied cosmologies ("cosmo") and the one in our fiducial cosmology ("fid"). Differences are shown in percent. Red colors correspond to cosmologies where Ω_m is enhanced while light blue lines label cosmologies with a lower Ω_m; analogously, dashed lines refer to cases with an enhanced σ_8 and dotted lines to cases where σ_8 is diminished. As can be noticed from the figure, the effect of varying Ω_m is well below 1% even at the highest redshift we consider (z = 3.5). On the other hand, when we vary σ_8 the suppression becomes slightly more cosmology-dependent at high redshift, helped by the large variation we introduce in the parameter. However, this difference is well within the 2% level at the scales we are interested in (k ≲ 10 h/Mpc) and the redshifts where upcoming surveys will be sensitive (z ≲ 2.5). We are therefore confident in claiming that our emulator can be used to predict the CWDM suppression also in the neighborhood of our fiducial set of cosmological parameters and, in general, in the range of cosmological parameters of interest for future surveys.
The baryonification model in CWDM models
Baryon feedback has been shown to be one of the leading mechanisms capable of modifying the distribution of matter within dark matter halos up to relatively large cosmological scales (see e.g. [68,69]). From a cosmological point of view, it constitutes an important systematic to be taken into account [70][71][72], while completely ignoring its effect on the matter power spectrum can lead to a ∼ 5σ bias in the estimate of Ω m and σ 8 [73,74]. Since the observational constraints are still poor, these phenomena are typically investigated through computationally expensive hydrodynamic simulations. Moreover, the uncertainty caused by different AGN feedback models can reach 50% for scales k ≤ 1 h/Mpc [75]. A novel approach circumventing the computational cost of the problem has been first proposed by Ref. [44] and subsequently improved by Ref. [43]. In this approach, called baryonification, baryon feedback is added on top of DM-only simulations. In particular, the modification of the halo profiles is taken into account through the displacement of DM particles from their positions. Such displacement depends on five parameters directly related to the physics of the gas: two parameters controlling the slope of the gas profile and its dependence on the host-halo mass (µ, log M c ); one parameter setting the maximum radius of gas ejection (θ ej ); two parameters describing the central-galactic and total stellar fractions within the halo (η cga , η tot ).
In this Section, we investigate whether the effects from baryons are separable from the suppression induced by the CWDM model. While this separability has been verified for the case of cosmologies with varying neutrino masses [76], it remains untested for more general CWDM scenarios. We apply the baryonification method to both our CWDM and ΛCDM simulations and compare the results in terms of the relative suppression effects from baryonic feedback. The results of this analysis are illustrated in Figure 7. For the benchmark simulation with f_wdm = 0.75 and M_wdm = 0.5 keV, we show the percent difference of the baryonification model effect on a CWDM simulation with respect to the equivalent performed on top of the corresponding ΛCDM one. Different colors label the 8 different snapshots we took from z = 3.5 to z = 0. Shaded stripes represent the 1% (dark grey) and 2% (light grey) regions, while the scale k = 10 h/Mpc is marked with a dashed black vertical line. The parameters we used for displacing the particles (log(M_c [M_⊙/h]) = 13.8, μ = 0.21, θ_ej = 4.0, η_tot = 0.32, η_cga = 0.6) correspond to a model in broad agreement with both X-ray observations and hydrodynamic simulations (see Ref. [43] for a more detailed discussion). Figure 7 shows that the difference of the baryonic suppression between CDM and our benchmark CWDM scenario remains below the percent level for k ≤ 5 h/Mpc (growing to 2-3% for k ≤ 10 h/Mpc). This is significantly smaller than the expected total baryonic suppression effect (which is of the order of ∼ 10 − 30%) and of similar size to the expected precision of N-body codes in the same range of scales [77]. We can therefore assume that the CWDM suppression is independent of baryonification. As a consequence, we can treat baryonic and CWDM power suppression effects independently, which considerably simplifies the analysis regarding the cosmological inference pipeline.
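Given the separability argued above, a cosmological inference pipeline can simply multiply the two suppression factors. A minimal sketch of that design choice follows; the function names are placeholders for whatever interpolators or emulators the pipeline actually provides.

```python
def total_power_spectrum(k, z, p_lcdm_gravity_only, s_baryons, s_cwdm):
    """Combine the gravity-only LCDM power spectrum with two independent
    suppression factors, exploiting the separability illustrated in Figure 7.

    p_lcdm_gravity_only(k, z) : gravity-only LCDM P(k, z)
    s_baryons(k, z)           : P_baryonified / P_gravity-only, from the baryonification model
    s_cwdm(k, z)              : P_CWDM / P_LCDM, from the emulator of Section 3.2
    """
    return p_lcdm_gravity_only(k, z) * s_baryons(k, z) * s_cwdm(k, z)
```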
Cosmic shear and galaxy clustering
The next observables we focus on in the framework of CWDM models are the angular power spectra of weak lensing and galaxy clustering. Here, we simply want to show the qualitative behaviour (together with some quantitative discussion) of the CWDM suppression on projected spectra.
In this Section we assume the Limber approximation, valid for large multipoles (ℓ ≳ 10) (see e.g. [78]). In this picture, assuming a flat Universe and a single redshift bin for simplicity, the angular power spectra can be written as

$$C_\ell^{XY} = c \int \mathrm{d}z\, \frac{W_X(z)\, W_Y(z)}{H(z)\, \chi^2(z)}\; P_\mathrm{m}\!\left(\frac{\ell + 1/2}{\chi(z)}, z\right),$$

where χ(z) is the comoving distance to redshift z and {X, Y} = {L, G}, so that LL, GG and GL stand for cosmic shear, galaxy clustering and galaxy-galaxy lensing, respectively. The two window functions W_L(z) and W_G(z) are a measure of the lensing efficiency and of the galaxy bias of the sample, respectively, and they are tightly related to the galaxy distribution in redshift n(z). The galaxy clustering window function is given by

$$W_G(z) = b(z)\, n(z)\, \frac{H(z)}{c},$$

where for the bias b(z) we assume the functional form of Ref. [79]. The lensing window function contains, besides the lensing efficiency, the contribution of intrinsic alignments (IA), i.e. correlations of galaxy orientations coming from pairs aligned by the tidal field. For the IA model we follow Ref. [67], where the authors took the luminosity functions of early and late type galaxies separately and joined them assuming a given fraction of ellipticals. The resulting function fairly reproduces fig. C.1 of Ref. [80] in a certain z range and is subsequently extrapolated to match our own redshift range 7 . This model is an extension of the so-called non-linear alignment model [67], first introduced in Ref. [81].
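A direct numerical transcription of the Limber expression above could look like the following sketch; the window functions, H(z), χ(z) and the matter power spectrum are assumed to be supplied by the rest of the pipeline, and the units of the various ingredients must be kept mutually consistent.

```python
import numpy as np

C_LIGHT = 299792.458  # km/s

def limber_cl(ell, w_x, w_y, hubble, chi, p_matter, z_grid):
    """Limber approximation for the angular power spectrum C_ell^{XY}.

    w_x, w_y : window functions W_X(z), W_Y(z) (callables)
    hubble   : H(z) in km/s/Mpc (callable)
    chi      : comoving distance chi(z), in units consistent with p_matter (callable)
    p_matter : matter power spectrum P(k, z) (callable)
    z_grid   : redshift grid over which the line-of-sight integral is taken
    """
    z = z_grid
    k = (ell + 0.5) / chi(z)                                  # Limber wavenumber
    integrand = w_x(z) * w_y(z) / (hubble(z) * chi(z)**2) * p_matter(k, z)
    return C_LIGHT * np.trapz(integrand, z)
```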
In Figure 8, we show the ratio between the angular power spectra in the CWDM scenario and those in ΛCDM for the three different cases of weak lensing (top row), galaxy-galaxy lensing (middle row), and galaxy clustering (bottom row). Different columns label different WDM fractions and different colors of the solid lines refer to different WDM masses: red for 0.1 keV, blue for 0.3 keV, green for 0.5 keV, and yellow for 1.5 keV. The two grey shaded areas represent multipoles not considered for cosmological exploitation in the Euclid forecasts [67]: light grey areas are eliminated when we use a pessimistic range of multipoles; dark shaded areas are excluded even in the most optimistic scenario. In particular, the lower limit is set to ℓ_min = 10 due to the fact that at lower multipoles the Limber approximation is not valid. For the maximum multipole we assume ℓ_max = 1500, 750, 750 for LL, GL, GG in the pessimistic case and ℓ_max = 5000, 3000, 3000 in the optimistic case. These numbers are chosen following the lines of Ref. [67]. The values ℓ_max = 5000 and 3000 are rather optimistic for upcoming surveys, as the signal-to-noise generally saturates quickly above ℓ ∼ 500 − 1000. On the other hand, neglecting non-Gaussian contributions (as they do) in the data covariance matrix results in an unjustified boost in the signal-to-noise. For cosmic shear, for instance, the signal-to-noise with a full covariance matrix (i.e. including non-Gaussian contributions) up to ℓ_max = 5000 is the same as the one obtained with a Gaussian-only covariance matrix cut at ℓ_cut ∼ 1500. The golden shaded area represents the cosmic variance limit for a survey with a sky coverage of f_sky = 0.363. Finally, the vertical lines mark the point where shot/shape noise equals the cosmological signal, depending on how many galaxies are in the sample: we show it for 3, 10, and 30 galaxies per square arcminute, which are reasonable numbers for upcoming surveys. We assume that galaxies follow the distribution n(z) ∝ (z/z_0)^2 exp[−(z/z_0)^{3/2}] with z_0 = 0.636. The non-linear matter power spectra for ΛCDM models are computed with the HMcode2020 halofit version implemented in CAMB [82], while to account for the presence of CWDM we use the emulator we built in Section 3.2 8 .

Figure 8 is organized in such a way that, if the underlying cosmology is ΛCDM, each model whose line falls outside the golden shaded area at multipoles lower than the ones where shape/shot noise becomes dominant can in principle be excluded. In general and as expected, it is easier to exclude lower values of M_wdm and high values of f_wdm, for which the suppression of the matter power spectrum is more pronounced. In the optimistic scenario and accounting for low noise (30 arcmin^{-2}), cosmic shear alone could in principle be able to exclude M_wdm ≲ 0.3 keV for f_wdm > 0.75. Galaxy clustering exhibits a less pronounced suppression, but galaxy bias enhances the signal enough to allow going to higher multipoles before being dominated by shot noise: all in all, for f_wdm > 0.75, masses smaller than ∼ 0.5 keV can already be excluded when sampling 10 galaxies arcmin^{-2}. The suppression in galaxy-galaxy lensing, finally, has an intermediate behaviour between the previous two, but it has the advantage of being noise-free: even for the lowest WDM fraction we may be able to exclude M_wdm < 0.3 keV. One can also increase the signal-to-noise by dividing galaxies into more redshift bins and combining the three observables. Of course, this plot and this analysis have a few caveats. First, we are completely ignoring other sources of uncertainty, like super-sample covariance [83,84] or non-Gaussian contributions that can suppress the signal-to-noise especially at high multipoles. Moreover, here we are fixing our cosmology: we expect a worsening of the posteriors when relaxing this assumption, and in particular when marginalizing over parameters like Ω_m and σ_8. In Ref. [70], the authors focused on the possible degeneracies between baryon feedback and massive neutrinos in cosmic shear spectra. In particular, they found interesting degeneracy patterns between the neutrino mass and both the baryon feedback parameter log M_c and the intrinsic alignment parameter. Since massive neutrinos could be considered WDM, we expect a similar behaviour for the case of CWDM: we leave this study to a companion paper [42], where we run MCMC forecasts for CWDM models in a Euclid-like survey, with a proper marginalization over astrophysical, nuisance and cosmological parameters.

Figure 8: Ratios of the angular power spectra of cosmic shear (LL), galaxy-galaxy lensing (GL) and galaxy clustering (GG) in CWDM models with respect to ΛCDM. Different columns report the suppression for different WDM fractions, while the WDM mass is color-coded: red for 0.1 keV, blue for 0.3 keV, green for 0.5 keV, and yellow for 1.5 keV. For simplicity, a single redshift bin has been used. The golden shaded area represents cosmic variance for a survey with the same specifics as Euclid. The vertical lines represent the multipole at which shot/shape noise equals the cosmological signal, depending on the number of sample galaxies per square arcminute (the number written on the side of the line itself). The vertical grey shaded areas remove the multipole regions that will likely not be used in the cosmological exploitation (see text for details): the light and dark areas represent a pessimistic and an optimistic setting, respectively. In particular, ℓ_min = 10, enough to ensure the validity of the Limber approximation, while ℓ_max = 1500, 750, 750 for LL, GL, GG in the pessimistic case and ℓ_max = 5000, 3000, 3000 for LL, GL, GG in the optimistic case, respectively.
Halo mass function
The last physical quantity that we investigate is the halo mass function. We focus on the mass function since many observable quantities are directly linked to it, for example, the galaxy mass function, the (conditional) mass function on the number of Milky Way satellites, the number of high-redshift galaxies capable of driving reionization processes and even the strong lensing signal. Even if the accurate modelling of the observables would require astrophysical assumptions, the underlying dark matter mass function will always be the fundamental ingredient of any theoretical effort.
The most rigorous way of deriving it is through the excursion set of peaks [85][86][87], which extends the Press & Schechter formalism [88]. In this framework, the number of halos per unit mass per unit volume can be written as

$$\frac{\mathrm{d}n}{\mathrm{d}M} = \frac{\bar{\rho}_\mathrm{m}}{M^2}\, f(\nu)\, \left|\frac{\mathrm{d}\ln\nu}{\mathrm{d}\ln M}\right|. \qquad (5.1)$$

In the equation above, f(ν) is a universal, cosmology-independent function of the peak height ν = δ_c/σ(M), where δ_c ≈ 1.686 is the linearly-extrapolated spherical overdensity for collapse and σ(M) is the root mean square mass fluctuation

$$\sigma^2(M) = \frac{1}{2\pi^2} \int_0^{\infty} \mathrm{d}k\, k^2\, P(k)\, W^2(kR). \qquad (5.2)$$

The mass M is related to the radius R depending on the kind of window function chosen. In the ΛCDM framework, the window function W(kR) that smooths the density field is typically chosen to be a top-hat in configuration space, which in Fourier space translates to

$$W(kR) = \frac{3\left[\sin(kR) - kR\cos(kR)\right]}{(kR)^3}. \qquad (5.3)$$

Moreover, the universal f(ν) function is often chosen to be the Sheth-Tormen one [89,90]:

$$f(\nu) = A\, \sqrt{\frac{2 q \nu^2}{\pi}}\, \left[1 + (q\nu^2)^{-p}\right]\, \exp\!\left(-\frac{q\nu^2}{2}\right), \qquad (5.4)$$

with A = [1 + 2^{−p} Γ(1/2 − p)/√π]^{−1}, p = 0.3, q = 0.707. However, when dealing with free-streaming species or, in general, with models where the power spectrum has a small-scale cut-off, the top-hat filter is not the most suitable choice, as it predicts an excess of low-mass halos [45,46,91]. A sharp-k filter was invoked by Ref. [45] to solve this problem: our findings show that, despite being able to predict the low-mass suppression fairly well, this filter suffers from problems in modelling the absolute mass function. More recently, Ref. [92] proposed the use of a smooth-k filter, namely

$$W(kR) = \left[1 + (kR)^{\beta}\right]^{-1}, \qquad (5.5)$$

where β is a free parameter that can be fitted against N-body simulations. These two new filters have the advantage of being able to alleviate the issues caused by the small-scale cut-off in the linear power spectrum. The downside of using them is that they do not have a well-defined mass associated with the filter scale, i.e. their integral over the volume diverges. What is typically done to restore the scaling M ∝ R^3 is to introduce a second free parameter c, to be fitted against simulations, such that

$$M = \frac{4\pi}{3}\, \bar{\rho}_\mathrm{m}\, (cR)^3. \qquad (5.6)$$

For the smooth-k filter, Ref. [92] found that β = 4.8 and c = 3.3 provide a reasonable fit to N-body simulations, while Ref. [93] obtained comparable results (β = 3.0, c = 3.3) in scenarios where dark matter produces acoustic oscillations at small scales. We show our results in Figure 9. In each subplot, we show the suppression in the halo mass function with respect to the ΛCDM case. The model used is analogous to the one described above, with the only difference that we use q = 1 in Eq. 5.4 [45]. Different rows refer to different redshifts, different columns label different WDM fractions, while WDM masses are color-coded: 0.1 keV in red, 0.3 keV in blue, 0.5 keV in green, and 1.5 keV in yellow. As already mentioned in Section 2, we define the halo mass as the mass enclosed in a radius where ρ > 200 ρ_crit. For reference, vertical dotted lines represent the mass enclosed in a sphere of radius given by the free-streaming horizon [37]. Solid lines represent the theoretical prediction using a smooth-k filter and (β, c) = (4.8, 3.3). We find that this combination of parameters performs slightly better than (β, c) = (3.0, 3.3), especially at low redshift. We also show the results for a sharp-k filter with c = 2.5 (dashed lines) [45]. For the case of CWDM, we conclude that both approaches work similarly well, with the smooth-k mass function performing slightly better than the sharp-k one at low redshift and slightly worse at higher redshifts. Finally, it may be noticed that at small M_wdm and large f_wdm the halo mass function for CWDM models assumes larger values than the ΛCDM one.
This once again comes from the fact that we chose to parametrize the power spectrum amplitude with σ_8 rather than with A_s, and it is connected to the non-physical "bump" we were discussing in Sections 3.2 and 4.
We want to further address the discussion of the halo mass function by linking it to actual observables, focusing on the cluster number counts. The cluster abundance is particularly helpful in breaking the degeneracy between Ω_m and σ_8, thus providing tight constraints on these two parameters (see e.g. [94-96]). We now want to qualitatively investigate which CWDM models will be excluded by upcoming cluster surveys. In the simple model we consider, the cumulative number of galaxy clusters of mass larger than a threshold M_th is given by

N(> M_th) = ∫_0^{z_max} dz (dV/dz) ∫_{M_th}^∞ dM (dn/dM) ,    (5.7)

where dV/dz = 4π f_sky c χ²(z)/H(z), with f_sky = 0.363, and we fix z_max = 2 and M_th = 10^13.8 M_⊙/h [96]. We compute this quantity for a large set of CWDM masses, generating with CLASS linear matter power spectra down to very small WDM masses (even below 0.1 keV) and smoothing them with a smooth-k window function with parameters (β, c) = (4.8, 3.3). In this part of the analysis, for all the models we decide to keep A_s fixed, rather than σ_8: in this way, we are able to push the WDM masses down to values for which the suppression in the power spectrum occurs at scales that significantly influence the value of σ_8 and, in turn, the cluster abundance.
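As a rough illustration of Eq. (5.7), the sketch below integrates the mass function of the previous snippet over mass and redshift, and normalises count differences by the Poisson error used later for the exclusion estimate. It is not the code behind Figure 10: the interpolator Pk_of_z and the reuse of mass_function() are assumptions of the sketch, while the survey parameters follow the values quoted in the text.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u

# Minimal sketch of Eq. (5.7): cumulative cluster counts above M_th.
# Reuses mass_function() from the previous snippet and assumes a user-supplied
# interpolator Pk_of_z(z) returning the linear P(k) at redshift z.

cosmo = FlatLambdaCDM(H0=67.0, Om0=0.31)    # assumed fiducial cosmology
f_sky, z_max, M_th = 0.363, 2.0, 10**13.8   # M_th in M_sun/h

def dV_dz(z):
    """Comoving volume element per unit redshift over the footprint, in (Mpc/h)^3."""
    h = cosmo.H0.value / 100.0
    chi = cosmo.comoving_distance(z).to_value(u.Mpc) * h                  # Mpc/h
    c_over_H = 299792.458 / cosmo.H(z).to_value(u.km / u.s / u.Mpc) * h   # Mpc/h
    return 4.0 * np.pi * f_sky * c_over_H * chi**2

def cluster_counts(k, Pk_of_z):
    """N(> M_th): integrate dn/dM above M_th at each redshift, then over redshift."""
    z_grid = np.linspace(0.01, z_max, 40)
    M_grid = np.logspace(np.log10(M_th), 16.0, 60)
    n_above = [np.trapz(mass_function(M_grid, k, Pk_of_z(z)), M_grid) for z in z_grid]
    return np.trapz(np.array(n_above) * dV_dz(z_grid), z_grid)

def exclusion_significance(N_cwdm, N_lcdm):
    """Count difference normalised by the Poisson error on the ΛCDM prediction."""
    return abs(N_cwdm - N_lcdm) / np.sqrt(N_lcdm)
```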
In Figure 10 we show the difference between the cluster number count in CWDM scenarios and the corresponding ΛCDM value, computed using Eq. 5.7. This difference is then normalized by the theoretical uncertainty on the number count itself, which is assumed to be Poissonian. In an ideal case, models for which the difference exceeds 1σ would be excluded by future surveys. However, a few caveats must be specified, as the cluster number count is subject to a number of systematics. In particular, the complex cluster physics must be taken into account through some effective scaling relations that connect the theoretical mass function to a prediction of the distribution of clusters in the observables of the survey. These scaling relations depend in turn on some unknown nuisance parameters (see e.g. [96,97]). Ref. [98] showed how increasing the accuracy of the scaling relations leads to remarkable improvements on the constraints of the cosmological parameters; on the other hand, it causes the choice of the mass function, which for current data represents a subdominant systematic source, to become relevant. In light of this, we keep two conservative 3σ and 5σ discrepancies as rules of thumb for the exclusion (or detection) of a given CWDM model. We can crudely approximate the allowed regions as:

6 Discussion and conclusions

The ΛCDM model has been shown to provide an extremely accurate description of our Universe at large scales. There are, however, remaining uncertainties at small cosmological scales, which have led to several claims of tensions between theory and observations: these include e.g. the missing satellites, the too-big-to-fail, the profile diversity, and the cusp-core problems. In order to solve or alleviate these tensions, warm dark matter (WDM) is often invoked. WDM introduces a suppression in the power spectrum on scales smaller than the free-streaming length λ_fs or, equivalently, at masses lower than the free-streaming mass (see Eq. 1.1). Current constraints from Milky Way satellites show that M_wdm > 2.02 keV [33], while Lyman-α studies set M_wdm > 5.3 keV [31] at 95% C.L. With these values, the suppression in the matter power spectrum occurs at scales which are smaller than the ones probed by upcoming surveys like Euclid. The current constraints do not forbid the intriguing possibility that dark matter exists in two phases, a cold one and a warm one. This scenario, called mixed dark matter or cold-warm dark matter (CWDM), is the object of study of this paper.
In this work, we ran a large set of cosmological N-body simulations spanning a wide range of parameter values in the plane M_wdm − f_wdm, where M_wdm is the WDM component mass and f_wdm is the WDM fraction with respect to the total DM. We used the outputs to compute the suppression of the matter power spectrum with respect to the ΛCDM case and to build an emulator: this is able to predict the suppression in power with an accuracy of ∼1.5% over the range 0 < f_wdm < 1, M_wdm ≳ 0.1 keV, improving on previously existing fitting formulae [40,41]. We also tested whether the suppression depends on cosmological parameters, in particular those to which weak lensing and angular galaxy clustering are most sensitive, i.e. Ω_m and σ_8. We showed that such dependence is always below 2% at the scales and redshifts of interest, even for the most extreme cases we consider, with σ_8 more than 10σ away from our fiducial value. We also demonstrated that the difference of the baryonic suppression between CWDM and ΛCDM is much smaller than the expected total suppression effect and of the same order of magnitude as the expected precision of N-body codes for scales k ≲ 10 h/Mpc, thus proving that baryonic effects can be treated independently from the DM model assumed. We used the emulated suppression to qualitatively show the impact of CWDM on weak lensing and angular galaxy clustering power spectra, focusing on which combinations of WDM masses and fractions may in principle be detected in upcoming surveys. Finally, we studied the halo mass function. First, we confirmed that the smooth-k filter prescription proposed by Ref. [92] provides a good description both of the overall halo mass function and of its suppression in the CWDM scenario. Then, using the same prescription, we linked the halo mass function to an actual observable, the cluster number counts, performing a semi-quantitative estimate of the CWDM models which could be probed in upcoming surveys.
In a future paper [42] we plan to perform a full Markov chain Monte Carlo analysis on synthetic data to obtain more realistic forecasts on the M_wdm and f_wdm parameters allowed by upcoming surveys.
Single stage transforaminal retrojugular tumor resection: The spinal keyhole for dumbbell tumors in the cervical spine
Background: Dumbbell tumors are defined as having an intradural and extradural component with an intermediate component within an expanded neural foramen. Complete resection of these lesions in the subaxial cervical spine is a challenge, and it has been achieved through a combined posterior/anterior or anterolateral approach. This study describes a single stage transforaminal retrojugular (TFR) approach for dumbbell tumor resection in the cervical spine. Methods: This is a retrospective review of a series of 17 patients treated for cervical benign tumors, 4 of which were “true” cervical dumbbell tumors operated by a simplified retrojugular approach. The TFR approach allows a single stage gross total resection of both the extraspinal and intraspinal/intradural components of the tumor, taking advantage of the expanded neural foramen. All patients were followed clinically and radiologically with magnetic resonance imaging (MRI). Results: Gross total resection was confirmed in all four patients by postoperative MRI. Minimal to no bone resection was performed. No fusion procedure was performed and no delayed instability was seen. At follow-up, one patient had persistent mild hand weakness and Horner's syndrome following resection of a hemangioblastoma of the C8 nerve root. The other three patients were neurologically normal. Conclusions: The TFR approach appears to be a feasible surgical option for single stage resection in selected cases of dumbbell tumors of the cervical spine.
INTRODUCTION
Historically, the term "dumbbell" tumor was used to describe an intra-extraspinal tumor of neural origin growing across a constricting anatomical space confined by the boundaries of the bony neural foramen and investing dural nerve root sleeve. [6,11] The most common forms are benign nerve sheath tumors (schwannomas and neurofibromas), although malignant transformation can rarely occur. [23] However, the descriptive term "dumbbell" has been widened to include heterogeneous tumors of vascular, dural, bone as well as neural origin such as hemangioblastomas, meningiomas, neuroblastomas, giant cell tumors, and gangliogliomas. [1,3,8,10,12,13,16,18,19,21,22] The common feature of "true dumbbell" tumors is an intradural component and an extradural/paraspinal component connected across an expanded neural foramen. The majority of dumbbell tumors occur in the cervical spine. [14,16,18,22] The usual clinical presentation is neck pain, radicular symptoms, and/or symptoms of spinal cord compression. [13-16,18] Nerve sheath tumors at the C1 and C2 root levels do not have a bony foramen, so the entire tumor is accessible through a standard posterior approach. Such tumors pose a different surgical challenge and are not considered here.
For the purpose of this study, we report single stage surgical resection of both intradural and extradural components of dumbbell tumors of the subaxial cervical spine in four patients through a transforaminal approach. As examples, the magnetic resonance imaging (MRI) findings of a patient with a predominantly intradural component and of another patient with a predominantly extradural component are shown in [Figures 1 and 2], respectively.
MATERIALS AND METHODS
Following hospital ethics committee approval, a retrospective chart review was carried out. Between 2007 and 2013, 17 patients underwent retrojugular resection of laterally placed cervical spinal tumors. Among these were four "true" dumbbell tumors resected using a transforaminal retrojugular (TFR) approach. The lead surgeon was the same in all cases (JMD). Clinical and radiological presentation is outlined in Table 1. In all but one case [ Figure 1], the extraspinal component was larger than the intraspinal component. The index neural foramen was expanded in all four patients [ Table 1]. All patients were followed up clinically and radiologically [ Table 2].
Surgical technique
A skin incision is taken along the anterior border of the sternocleidomastoid (SCM) muscle. A subplatysmal flap is raised anteriorly until the midline and posteriorly up to the trapezius. The SCM is mobilized to expose the carotid sheath, and superiorly, the spinal accessory nerve. The inferior belly of the omohyoid is incised as posteriorly as possible and mobilized antero-superiorly up to its insertion on the hyoid bone, for later use as a vascularized muscle flap for dural closure. The internal jugular vein (IJV) is skeletonized and mobilized along with the vagus nerve. The transverse cervical artery, scalene muscles, phrenic nerve, and trunks of the brachial plexus are identified. The V1 segment of the vertebral artery (VA) is identified and controlled with vessel loops, but is not mobilized or transposed.
For the tumor resection, standard microsurgical technique is used to internally debulk the extraspinal tumor component and amputate it at the neural foramen. The distal parent nerve root is divided. The tumor mass is followed through the expanded neural foramen and the intradural portion is resected with division of the proximal parent root/fascicle. The foramen can be surgically enlarged (patient 4). The overlying VA is protected with a small spatula during bone drilling. Final transforaminal tumor resection inevitably results in a gush of cerebrospinal fluid (CSF). Dural closure is performed with local vascularized omohyoid or SCM flaps sutured onto the dural sleeve of the neural foramen, and reinforced with a synthetic dural sealant (Duraseal®).
No fusion was performed in any patient.
RESULTS
Average clinical follow-up was 30 months (range 6-78 months) [Table 2]. Duration of surgery ranged from 185 to 416 min (average 306 min). Perioperative bleeding ranged from 100 to 400 ml (average 250 ml). Patients were discharged home on day 5 (3 patients) and day 12 (patient 2) of their hospitalization. Gross total resection (GTR) was confirmed in all four patients by postoperative MRI. Patient 1 had a delayed pseudomeningocele in the neck several weeks after surgery. In this case, the muscle flap had not been sutured in place, and was found to have displaced with a resultant CSF leak. The dural closure was successfully revised. Patient 2 succumbed to disseminated meningeal metastatic disease from a malignant peripheral nerve sheath tumor (MPNST) with hydrocephalus and multiple radiculopathies 9 months after surgery, despite radiotherapy and chemotherapy. Patient 4 had a persistent Horner's syndrome likely due to dissection and manipulation of the stellate ganglion, lying beneath the C8 tumor. He also had mild hand weakness and transient neuropathic pain treated with pregabalin. None of the three patients with benign tumors showed evidence of recurrence at last follow-up [Table 2].
DISCUSSION
Spinal intradural tumors are rare, with an incidence of 0.3-10/100,000. [7,17,20] However, true dumbbell tumors with an intradural-extradural/extraspinal growth pattern are rarer, accounting for 15% of all cases. [13,16,18,22] The tumor recurrence rate for these lesions has been reported to be 19.1% at 5 years and 43.4% after 10 years. [16] The multi-compartment location of such tumors requires careful surgical planning to achieve GTR, watertight dural reconstruction, and avoidance of destabilizing bone resection. All of these can be achieved by a single TFR approach.
Cervical dumbbell tumors are most commonly treated using a posterior laminectomy approach to access posterior or posterolaterally placed intradural tumors. [13,16] For tumors extending through and beyond the intervertebral foramen, partial or complete facet resection is needed for complete resection, which can lead to segmental instability. [1,2,13-15,19] Proximal or distal control of the VA and its feeding branches is not possible, which may lead to extensive perioperative blood loss and VA sacrifice. [13] The anterolateral retrojugular approach described by George and co-workers allows single stage tumor resection without fusion. [3-5,10,12,18,21] Mobilization and medial transposition of the sympathetic chain and the VA are necessary. [5] Other anterolateral approaches require partial vertebral body resection for tumor exposure, with reconstruction and stabilization. [13,15,19] Combined anterior/posterior approaches have also been described to achieve complete tumor excision. [1,9] The TFR approach is a simplification of the anterolateral retrojugular approach. Neither the sympathetic chain nor the VA is transposed. There is minimal to no bone resection. No fusion is required. Dural reconstruction is achieved using a novel local vascularized muscle flap.
Quality of life of GIST patients with and without current tyrosine kinase inhibitor treatment: Cross-sectional results of a German multicentre observational study (PROSa)
Objective: We investigated the health-related quality of life (HRQoL) of patients with gastrointestinal stromal tumours (GIST). Methods: In the multicentre PROSa study, the HRQoL of adult GIST patients was assessed between 2017 and 2019 using the European Organisation for Research and Treatment of Cancer HRQoL questionnaire (EORTC QLQ-C30). We performed group comparisons and multivariate linear regressions.
predictive prognostic features, but small-intestinal tumours behave more aggressively than gastric tumours with similar parameters (Miettinen & Lasota, 2006). In a Swedish population-based study, 44% of all GIST patients were high-risk or overtly malignant cases, and 14% had residual tumour after surgery (Nilsson et al., 2005). While surgical treatment of many localised GISTs is associated with a very good prognosis, treatment options for advanced GISTs were limited until the end of the 20th century. The discovery that most GISTs express a mutation in a tyrosine kinase, KIT (CD117) or PDGFR (Hirota, 1998; Miettinen & Lasota, 2006), or another mutation susceptible to targeted agents led to the development of and treatment with different tyrosine kinase inhibitors (TKIs) since the early 2000s. TKIs are associated with relatively high response rates in susceptible tumours.
However, because tumours may develop TKI resistance over time, a variety of TKIs are now available for advanced disease. Imatinib serves as the first-line treatment for most advanced tumours, sunitinib as the second-line treatment, and regorafenib as the third-line treatment. In localised GISTs, surgery is still the treatment of choice, followed by adjuvant imatinib for 3 years in patients with high-risk tumours (Casali et al., 2018).
So far, data on the health-related quality of life (HRQoL) of GIST patients have mainly focused on the treatment symptoms of individual TKIs. Imatinib has been described as well tolerated, yet almost all patients experience side effects of some grade (Dematteo et al., 2009). A comprehensive symptom list for patients with GISTs treated with targeted therapies includes 54 entries derived from interviews with patients and health care professionals as well as a literature review (Sodergren et al., 2020). A systematic review of 82 papers with 5,977 total patients compared the side effects of the two most commonly used TKIs: imatinib and sunitinib (Sodergren et al., 2014). Common symptoms occurring with both drugs were diarrhoea (imatinib: 39% and sunitinib: 36%) and fatigue (both: 40%).
Few studies analysed HRQoL issues in GIST patients beside treatment side effects or with regard to the heterogeneity of the disease given the different tumour sites. A qualitative study of 20 patients living with metastatic GIST in long-term clinical remission (median time in systemic treatment: 6 years) identified four major themes for long-term survivors: the adaptation and normalisation of family life, adjustments made to vocational life, limitations to one's social life, and managing negative mental-health issues. Lack of energy was one of the most frequent symptoms (Fauske et al., 2020). An ethnographic investigation on patient experiences and perspectives during the disease course identified five stages of disease management: crisis, hope, adaptation, new normal, and uncertainty (Macdonald et al., 2012).
Eligible adult patients and survivors were primarily asked to take part during visits to the recruiting study centres (for diagnosis, treatment, or follow-up), and some were invited to participate by phone or letter. Participation required written informed consent. The study was approved by the ethics committees of the Technical University of Dresden (EK1790422017) and the participating centres (Eichler, Schmitt, et al., 2019). Data were collected by the study coordination centre at the University Hospital Dresden. HRQoL data and sociodemographic data were sent by the participants to the study coordination centre by mail or online. Clinical information was submitted to the study coordination centre online by the participating study centres using documentation forms. Data collection was performed using REDCap electronic data capture tools (Vanderbilt University, Nashville, United States) hosted at the Technical University Dresden (Harris et al., 2009). For this analysis, we included adult patients and survivors with histologically confirmed GIST from all 13 study centres.
We excluded patients who were mentally or linguistically unable to complete the questionnaires. Only participants with HRQoL data were analysed.
For HRQoL measurement, we used the European Organisation for Research and Treatment of Cancer Quality of Life Core Questionnaire (EORTC QLQ-C30) (Aaronson et al., 1993). This instrument measures global quality of life with a range of values from 0 to 100 in five functioning and nine symptom scales (3 multi-item scales, 6 single-item scales). Higher scores indicate a better quality of life for the functioning scales and a higher symptom burden for the symptom scales. Additionally, we used 11 single items concerning symptoms from the EORTC item library; a list of them is to be found in Table 3 (Kulis et al., 2017). The items from the EORTC item library were chosen with regard to sarcoma patients in general in a two-stage process. First, we tried to identify the most common issues not included in the EORTC QLQ-C30 in an unsystematic literature search. In a second step, the issues found were circulated within, discussed, and approved by our scientific advisory board. The decision to use all 11 items for the purpose of this analysis was made in consultation with the physicians involved in this publication. These items were transformed into single-item scales similar to the symptom scales of the EORTC QLQ-C30. The same applies to the EORTC QLQ-C30 sum score (Giesinger et al., 2016). Here, higher scores indicate a better HRQoL.
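For readers unfamiliar with the scoring convention referenced above, the following minimal sketch shows the standard EORTC linear transformation of raw item responses to 0-100 scale scores; it is illustrative only and is not the study's analysis code.

```python
import numpy as np

# Minimal sketch of the standard EORTC QLQ-C30 linear transformation
# (illustrative, not the study's code). Most items are scored 1-4; the two
# global health items use a 1-7 scale.

def scale_score(item_responses, item_range, functional=True):
    """Transform raw item responses of one respondent to a 0-100 scale score.

    item_responses : item values for one scale (missing answers as np.nan)
    item_range     : max item value minus min item value (3 for 4-point items, 6 for 7-point)
    functional     : True for functioning scales (higher = better),
                     False for symptom scales (higher = more symptoms)
    """
    raw = np.nanmean(item_responses)                 # raw score = mean of answered items
    if functional:
        return (1.0 - (raw - 1.0) / item_range) * 100.0
    return ((raw - 1.0) / item_range) * 100.0

# Example: a three-item symptom scale answered 2, 3, 2 gives a moderate burden
print(scale_score([2, 3, 2], item_range=3, functional=False))   # ~44.4
```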
| Statistical analysis
For the description of the study population, we evaluated the variables from the multivariable model (see below) as well as metastases until baseline, tumour recurrence, and treatment status. For age, time since diagnosis, and socio-economic status, the median and the interquartile range (IQR) were calculated. For HRQoL measures, the mean and standard deviation (SD) were calculated. Categorical variables were presented as absolute numbers and relative frequencies.

Descriptive variables were stratified according to the grouping of the univariate analysis (see below). A nonresponder analysis was performed to assess potential selection bias.
An age- and sex-standardised comparison was performed using reference values from the healthy German population (Nolte et al., 2019). The relevance of the differences was tested using reference values from Cocks et al. and Osoba et al. (Cocks et al., 2011; Osoba et al., 1998). Differences were classified by these publications as "small," "moderate," or "large" (Osoba) or "trivial," "small," "medium," or "large" (Cocks). The latter ones were defined as: "Large: one representing unequivocal clinical relevance. Medium: likely to be clinically relevant but to a lesser extent. Small: subtle but nevertheless clinically relevant. Trivial: circumstances unlikely to have any clinical relevance or there was no difference" (Cocks et al., 2011).

Differences between distinct groups of GIST patients were examined for all domains of the EORTC QLQ-C30, 11 additionally selected single-item scales, and the EORTC QLQ-C30 sum score. Independent samples were tested for significance with t tests.
A p value less than 0.05 was considered to be statistically significant.
All HRQoL domains were analysed by multivariate linear regression to control for potentially confounding variables in the analysis of the number of TKI treatment lines (0-1 line vs. more than 1 line), TKI treatment (none or former vs. current treatment), and treatment intention (curative vs. palliative). The unstandardized regression coefficient (B), 95% confidence intervals (95% CI), p values, and the coefficient of determination (R²) were evaluated in a model that was adjusted for age at baseline, sex, socio-economic status, surgery, disease status (complete remission, partial remission/stable disease, progress, unknown), time since diagnosis (up to 1 year, 1-2 years, 2-5 years, more than 5 years), tumour site (stomach, small bowel, rectum, other/unknown), and tumour size (T1/T2, T3/T4, unknown). Socio-economic status (SES) was assessed using the Winkler Index (Lampert et al., 2013).
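A hypothetical sketch of the kind of adjusted model described above is given below; the variable names, coding, and input file are invented for illustration and do not correspond to the actual PROSa dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sketch of an adjusted linear model of the type described above.
# All column names below are assumptions made for illustration.

df = pd.read_csv("gist_hrqol.csv")   # assumed file with one row per patient

model = smf.ols(
    "eortc_sum_score ~ current_tki + C(treatment_intent) + age + C(sex)"
    " + ses_winkler + C(disease_status) + C(time_since_dx) + C(tumour_site)"
    " + C(tumour_size)",
    data=df,
).fit()

print(model.params["current_tki"])            # unstandardised B for current TKI treatment
print(model.conf_int().loc["current_tki"])    # 95% confidence interval
print(model.rsquared)                         # coefficient of determination R^2
```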
Full results of multivariable linear regression were shown and discussed in the online supplement.
| Sample description
The PROSa study recruited 1,309 sarcoma patients. The analysis included 130 GIST patients with questionnaire data, of whom 54% were female; the median age at diagnosis was 58.6 years (IQR: 49.4-66.8), and the median age at study inclusion was 63.0 years (IQR: 53.3-73.4). Primary tumours were located in the stomach in 44% of the patients and in the small intestine in 28% of the patients.

Metastases developed in 43% of the patients during the course of their disease, and 23% had a local recurrence. With respect to treatment intent, 39% were receiving palliative care, 55% were being treated with curative intent, 39% were undergoing follow-up evaluation, and 5% had a treatment planned. With respect to treatment approach, 85% had at least one surgery, 56.9% received one TKI treatment, 21.5% had two TKI treatments, and 10% had three or more lines of TKI treatment (Table 1). Patients were stratified by number of TKI treatments and treatment intent for analyses. One group consisted of the 28 patients (21.5%) who never received TKI treatment combined with the 33 patients (25.4%) no longer receiving treatment, for a total of 61 patients (46.9%) not currently receiving TKI treatment. Of those patients, 51 (83.6%) were in follow-up. The remaining 69 patients in the study cohort (53.1%) were currently being treated with a TKI: 41 (31.5%) with palliative and 27 (20.8%) with curative intent. Within the palliative group, 20 patients (15.4%) were being treated with a first-line TKI and 21 (16.2%) received multiple lines of TKIs.
| Nonresponder analysis
Of the 159 recruited GIST patients, 29 (18.2%) failed to return their questionnaires and were therefore classified as nonresponders. They were more commonly men (52% of the nonresponders compared to 46% of the responders). Nonresponders also had a longer mean time since their diagnosis than responders (4.5 years vs. 3.2 years) and were more often in complete remission (41% vs. 26%) (Table 1).

In age- and sex-matched comparisons with a healthy German population, most scales showed significant differences; exceptions were general health, physical functioning, pain, and dyspnoea. Large differences were observed in social functioning (20.3 points) and diarrhoea (17.3 points), and moderate differences occurred in financial difficulties (13.0 points), insomnia (15.8 points), and emotional functioning (14.7 points) (Figure 1).
| Stratified univariate analyses
Patients with no or former TKI treatment had a better overall HRQoL (EORTC sum score: 7.4-point difference, small difference) compared to those in treatment. Moderate, significant differences were observed in cognitive functioning (11.2 points), fatigue (16.2 points), diarrhoea (13.7 points), financial difficulties (7.4 points), lack of energy (15.5 points), and burning eyes (16.2 points) (Table 2).

The curative treatment group had a slightly higher overall HRQoL sum score than palliative patients, but the difference was trivial (3.8 points) and not significant. Moderate, significant differences were observed in diarrhoea (18.4 points) and hair loss (17.4 points), with the curative group performing better. On the other hand, the curative group reported less interest in sexuality (18.6 points) than the palliative group (Table 2).

Palliative patients receiving first-line TKI treatment had a higher overall HRQoL sum score than those receiving multiple lines. The difference was moderate (12.7 points) but not significant. In general, patients in first-line TKI treatment had lower symptom loads. Large, significant differences were found in physical function (14.5 points), cognitive function (21.3 points), mouth pain (22.2 points), and headache (28.1 points) (Table 2).
| Multivariate linear regression
The linear regression tended to reduce the differences detected by the stratified analysis between patients with no/former TKI treatment and patients in current TKI treatment. The EORTC sum score differed by 4.0 points (95% CI: −13.1 to 5.6; trivial difference, not significant).

Table 2: Health-related quality of life scores of GIST patients (difference classification following Cocks et al., 2011, and Osoba et al., 1998).
Note: patients in current TKI treatment stratified by treatment intention do not add up to 69 because treatment intention was unknown for one person.
No outcome was significantly associated with treatment intention.
The moderate effects observed in the stratified analysis were similar in strength but not statistically significant. The EORTC sum score differed by 0.3 points (trivial difference, not significant) (Table 3).
| Results in context
As expected, GIST patients had worse HRQoL scores than the general German population. Social functioning and diarrhoea are the most affected domains measured by the EORTC QLQ-C30, while general health, physical functioning, pain, and dyspnoea were in ranges similar to the general population. The small or non-existent observed differences in general health could be due to patient adjustment over time.

Figure 1: Age- and sex-standardised comparison to a German norm population (Nolte et al., 2019); the C30 sum score is not standardised and without comparable data; ↕ indicates the 95% confidence interval. Large differences: social functioning, diarrhoea; medium/moderate differences: financial difficulties, emotional functioning, insomnia; small differences: role functioning, cognitive functioning, fatigue, nausea/vomiting, appetite loss, constipation; trivial differences: global health, physical functioning, pain, dyspnoea.

Table 3: Health-related quality of life scores of GIST patients.
While a variety of significant differences was observed in the stratified univariate analysis, the differences between patients currently receiving TKI treatment and patients with no/former TKI treatment did not remain significant after the multivariate linear regression. This does not mean that differences do not exist, but on the basis of our relatively small sample size we were not able to verify them statistically. Especially in those domains in which moderate/medium differences were found (notably fatigue, lack of energy, burning eyes, and less interest in sexuality), further research is needed.

The observed differences due to the number of TKI treatment lines received during the disease course were remarkable. Patients who received multiple lines of treatment had stronger impairments in all functioning scales (except emotional functioning) than all other patients. We also observed higher symptom loads in a variety of domains, notably fatigue, mouth pain, rash, burning eyes, and headache. There are a variety of potential causes for this observation, which this study could not further disentangle. One possibility is that the specific medications given at later treatment lines have more negative effects. However, it is also possible that the duration of the disease course played a role or that an increase in disease severity precipitated the change to a subsequent line of treatment. We adjusted for disease severity through a variety of variables (treatment intention, tumour size at diagnosis, disease status), but it might be the case that we could not fully measure the impact of disease severity.
We were not able to calculate interaction terms between disease severity and number of treatment lines.
Our observational HRQoL study of GIST patients used different instruments to evaluate HRQoL than previous studies, which almost exclusively focused on evaluating treatment side effects of different TKIs. Side effects are often not measured as patient-reported outcomes but as expert-reported adverse events (Sodergren et al., 2014).

With the caveat that we were not able to collect information on the specific medication patients received, symptom loads in univariate group comparisons were in line with previous studies regarding diarrhoea, fatigue, and nausea/vomiting. In the multivariate regression, those results could not be statistically verified. A similar result is to be found in Poort et al., who analysed the prevalence of fatigue in distinct groups of GIST patients (n = 89) and matched healthy controls (n = 234) (Poort et al., 2016). In that study, 30% of all GIST patients experienced severe fatigue compared to 15% of matched healthy controls. Within the three groups of GIST patients (treatment completed, curative treatment, and palliative treatment), no significant differences were found.

The non-significant but medium-sized differences we found with regard to interest in sexuality between patients currently receiving TKI treatment and patients with no/former TKI treatment should be further investigated. One study of 51 men (49 in TKI treatment) with a variety of cancers reported low to no sexual desire in 29% (no control group) (Tsai et al., 2017). A 2017 review came to the conclusion that the vast majority of clinical trials of TKIs reported no effects on sexual function. Exceptions were reported for pazopanib and sorafenib (Atallah et al., 2018). It should also be noted that lesser interest in sexuality was less pronounced in the palliative group than in the curative group. This difference was not statistically significant, but of medium relevance.
Functioning scales are not usually evaluated as treatment side effects, and therefore comparable data are scarce. An exception is impaired cognitive functioning. An observational study of 30 patients with GIST or metastatic renal cell cancer who were treated with sunitinib or sorafenib found that the group receiving TKIs (20 patients) showed worse performance in a variety of cognitive domains than healthy controls (30 individuals) (Mulder et al., 2014). In an online survey of 485 GIST patients, 63.9% reported cancer-related cognitive impairment, regardless of receiving TKI or not. In this study, patients at least 5 years since their diagnosis had significantly worse perceived cognitive impairment scores than survivors less than 5 years since their diagnosis (Ferguson et al., 2019). Our observations showed the highest symptom loads in patients with multiple lines of TKI treatment. Observational study results therefore seem to indicate that cognitive impairment is a problem within the population of GIST patients, but it remains unclear when this impairment sets in and which factors influence its development.
| Strengths and limitations
This is, to our knowledge, the first evaluation of the HRQoL of GIST patients in a standard clinical-care setting. We identified HRQoL domains with a high symptom or restriction load and groups of GIST patients that are particularly affected, especially those with multiple TKI treatment lines. Because the EORTC QLQ-C30 is a generic cancer questionnaire and the additional questions from the EORTC item library were chosen with respect to sarcoma patients in general, it is possible that relevant GIST- and TKI-specific symptoms were not recorded. This applies, for example, to oedema, hand-foot syndrome, and specific kinds of pain.

Participating patients were recruited in several study centres across Germany. One limitation of the study is that we could not perform a nonparticipant analysis of those GIST patients who did not wish to take part in our study. Furthermore, the observed differences between responders and nonresponders indicate that the responding patients more commonly had severe disease compared to nonresponding patients. This implies that the absolute figures of HRQoL restrictions and symptoms may be overestimated. The present analysis is an exploratory cross-sectional analysis. Causal conclusions are therefore not possible. It is potentially subject to selection bias.

We see this possibility mainly at the level of the study centres. The majority of our patients were recruited in university hospitals and/or specialised centres, and those might not be representative of GIST patients in general.
The comparison between different groups of GIST patients, as well as the multivariate linear regression, sometimes included only a small number of patients.
Because HRQoL analyses in GIST patients are rare, often focused on the symptoms of TKI treatment, and not undertaken in a standard clinical-care setting, this explorative analysis aimed to tackle the following research questions: 1. How does the HRQoL of GIST patients in Germany compare to the general German population? 2. Are GIST patients receiving current TKI therapy, later lines of TKI therapy, or palliative care more affected by HRQoL limitations than other GIST patients; and if so, to what extent?

2 | METHODS

We analysed cross-sectional data from the prospective PROSa cohort study (www.uniklinikum-dresden.de/prosastudie), which was conducted nationwide between September 2017 and February 2019 in 39 German study centres (ClinicalTrials.gov ID: NCT03521531). The PROSa study (Burden and Medical Care of Sarcoma in Germany: Nationwide Cohort Study Focusing on Modifiable Determinants of Patient-Reported Outcome Measures in Sarcoma Patients) aimed to gather information on a variety of patient-reported outcomes (for example, HRQoL and distress), clinical data (diagnosis and treatment), as well as structural data of the participating study centres (certifications and numbers of treated patients). More detailed descriptions of the PROSa study have previously been published (Eichler et al., 2020; Eichler, Richter, et al., 2019; Schoffer et al., 2021).

Three patient-group comparisons were performed: (a) patients who never received or only formerly received TKI treatment vs. patients currently receiving TKI treatment, (b) patients receiving TKI treatment with curative intent vs. patients receiving TKI treatment with palliative intent, and (c) patients being treated with palliative intent with a first-line TKI vs. a multiple-line TKI.
Significant differences were not observed; in three domains the p value was below 0.06. Patients in TKI treatment showed higher fatigue.

Table 1: Description of the study population stratified by TKI treatment. Abbreviations: TKI, tyrosine kinase inhibitor; IQR, interquartile range. Patients in current TKI treatment stratified by treatment intention do not add up to 69 because treatment intention was unknown for one person.
A Survey of NFC Sensors Based on Energy Harvesting for IoT Applications
In this article, an overview of recent advances in the field of battery-less near-field communication (NFC) sensors is provided, along with a brief comparison of other short-range radio-frequency identification (RFID) technologies. After reviewing power transfer using NFC, recommendations are made for the practical design of NFC-based tags and NFC readers. A list of commercial NFC integrated circuits with energy-harvesting capabilities is also provided. Finally, a survey of the state of the art in NFC-based sensors is presented, which demonstrates that a wide range of sensors (both chemical and physical) can be used with this technology. Particular interest arose in wearable sensors and cold-chain traceability applications. The availability of low-cost devices and the incorporation of NFC readers into most current mobile phones make NFC technology key to the development of green Internet of Things (IoT) applications.
Introduction
Near-field communication (NFC) is a radio-frequency identification (RFID) system that enables fast communication between devices over a short range using the 13.56-MHz RFID band [1]. Although near-field communication has existed for over a decade [2], this technology did not become widespread until its extensive use in payment systems. NFC technology enables simple and safe two-way interactions between electronic devices, enabling consumers to perform contactless transactions, access digital content, and connect electronic devices with a single tap. Most current smartphones also incorporate an NFC reader. NFC systems are, therefore, gaining importance in the Internet of Things (IoT) scenario [3,4]. NFC is also interesting for the development of low-cost sensors since it provides a quick and easy way of obtaining data from them, simply by bringing the reader close to the tag, without having to pair the devices. The upcoming fifth generation (5G) of communication technology is expected to unleash a massive IoT ecosystem where networks can serve the communication needs of billions of connected devices, with the right trade-offs between speed, latency, and cost. RFID is one of the most important technologies for the massive deployment of IoT. It can bring IoT to unpowered objects with its ability to connect the unconnected. In addition, NFC can put IoT devices under a user's control and is easy to use with its "tap-and-go" nature. In particular, green NFC sensors based on energy harvesting can help in the design of a new generation of low-cost smart wearables and in the simplification of the man-machine interface, which opens the door to cooperative IoT for smart cities and Industry 4.0 applications.
Batteries in many electronic devices should be managed as hazardous waste because of their toxic contents or reactive properties [5]. In this context, green electronics technology provides solutions that are well suited to the broad needs of an energy-efficient society. Ambient energy harvesting is the process whereby energy is converted from the environment and stored for use in electronic devices. [...] be used, since the identification can be done by other methods and the number of sensors in the read range may be small.
Concerning NFC, the most important NFC IC manufacturers, such as NXP, TI, ST Microelectronics, AMS, and Melexis, have recently introduced advanced integrated circuits (ICs) with energy-harvesting capabilities [28]. These chips collect part of the energy received from the magnetic field generated by the reader to provide an analog voltage output that can be used to power external electronics such as low-power microcontrollers or sensors. The progressive introduction of these ICs into the market enables the development of low-cost batteryless portable sensors [29,30].
A comparison of RFID technologies is shown in Table 1. Bluetooth low energy (BLE) is also included as an example of low-power, short-range wireless technology. The availability of low-cost standardized technology, together with users' familiarity with NFC and wireless power transfer, makes NFC one of the key technologies for the development of a new generation of green sensors for IoT applications.

Table 1. Comparison of radio-frequency identification (RFID) sensor technologies. NFC: near-field communication; UHF: ultra-high frequency; BLE: Bluetooth low energy; UWB: ultra-wideband; IC: integrated circuit; BAP: battery-assisted passive; ISM: industrial, scientific, and medical.

Recent advances in NFC-based sensor technologies are reviewed in this paper. The paper is organized as follows: in Section 2, several practical considerations for the design of NFC-based sensors are provided. Firstly, wireless power transfer between the NFC reader and the IC is described. After that, the factors that limit the read range, such as antenna coupling, the quality factor of the antennas, and detuning due to metallic surfaces, are examined. In this section, a survey of existing NFC ICs with energy-harvesting capability is also conducted. Several green sensors found in the literature are summarized in Section 3. Finally, some conclusions are drawn in Section 4.
NFC Energy Harvesting
In batteryless mode, the tag is fully passive. In this mode, the NFC-enabled sensor harvests energy from incoming RF emissions (from a reader) to power the sensor interface and RF transmissions. In battery-assisted (semi-passive) mode, the NFC-enabled sensor can operate stand-alone in applications requiring autonomous and continuous monitoring, working as a data logger. The life of a sensor tag may include operation in both modes: in semi-passive mode until the battery is exhausted, and thereafter, in passive mode. Data are stored in non-volatile memory and retained when the device is not powered. Figure 1 shows the block diagram of an NFC-based data logger assisted with a complementary energy source (e.g., solar cell).

NFC employs electromagnetic induction between two loop antennas. It operates within the globally available unlicensed radio-frequency industrial, scientific, and medical (ISM) band of 13.56 MHz on the ISO/IEC 18000-3 air interface at rates ranging from 106 to 424 kbit/s. Communication in NFC systems is based on inductive coupling between the reader and the tag antennas. The receiver antenna is connected to the internal tag rectifier, which takes energy from the RF field that is used to power up the tag electronics. The internal logic demodulates the amplitude shift keyed (ASK) message from the reader. The tag transponder (which is assumed to be passive) responds, using the passive load modulation technique, by changing the antenna impedance of the tag [1,31]. The passive load modulation spectrum consists of the RF carrier, two sidebands (at 12.712 MHz and 14.408 MHz), and modulated sidebands on these two subcarrier signals (Figure 2). All the transmitted data are carried in the two sidebands. The 13.56-MHz RF carrier, therefore, does not have to be transmitted by the transponder. The commands from the reader are transmitted in the sidebands of the carrier, and the load modulation is carried in the sidebands of the two subcarriers shown in the blue triangles. Below, the conditions which must be met to enable communication between the reader and the tag are studied.
Figure 2. A typical spectrum of an NFC radio-frequency identification (RFID) system illustrating the reader command around the carrier frequency and the load modulation at the sidebands. The impact of increasing the reader Q factor is also shown.
Forward Link
To establish communication between the reader and the tag (forward link) and to ensure enough power for the RF to direct current (DC) conversion to feed the electronic circuitry, the aim is to maximize power transfer from the reader to the tag. The power delivered to the tag IC (P d ) must, therefore, be above a threshold power (P th ): where P s is the power transmitted at the reader, and G T is the available gain at the center frequency, f c . To this end, efficiency must be maximized. The system is modeled as shown in Figure 3. Maximum efficiency is obtained from the available gain under matching conditions, G Tmax . This efficiency can be computed from the S-parameter measurements and expressed as a function of the parameter, where k is the magnetic coupling between the reader and tag coils (k = M/ √ L 1 L 2 ) and Q 1 and Q 2 are the quality factors of the reader and tag coils, respectively [32].
(2) To establish communication between the reader and the tag (forward link) and to ensure enough power for the RF to direct current (DC) conversion to feed the electronic circuitry, the aim is to maximize power transfer from the reader to the tag. The power delivered to the tag IC (Pd) must, therefore, be above a threshold power (Pth): where Ps is the power transmitted at the reader, and GT is the available gain at the center frequency, fc. To this end, efficiency must be maximized. The system is modeled as shown in Figure 3. Maximum efficiency is obtained from the available gain under matching conditions, GTmax. This efficiency can be computed from the S-parameter measurements and expressed as a function of the parameter, χ = k 2 Q1Q2, where k is the magnetic coupling between the reader and tag coils ( = / ) and Q1 and Q2 are the quality factors of the reader and tag coils, respectively [32].
(2) Power transfer can, therefore, be maximized by increasing the quality factor of the antennas or increasing the coupling that is a function of the distance between the antennas. However, a high Q factor would lead to limited bandwidth (see Figure 2) and long-time constants, causing severe distortion in the modulated signal (see Figure 4).
To ensure that the communication works properly, the maximum value of the Q factor of the initiator antenna must be such that the bandwidth B (at −3 dB), which is equal to fc/Q, is at least capable of channeling all the frequencies contained in the spectrum of the signal modulating the carrier frequency. The bandwidth of the forward link is the bandwidth of the modulation sidebands of the carrier and is dependent on the modulation scheme used by the reader. In the worst-case scenario (corresponding to the maximum value, Q1max, of the initiator antenna circuit), the fundamental bit rate of the square-wave signals (with a cyclic ratio of 50% in the case of non-return-to-zero (NRZ) bit coding) of that digital data stream must, therefore, be at least equal to half of the bandwidth B of the tuned circuit. The result is that Q1max is limited to fc/(2 × bit rate). Unfortunately, for NFC uplink ISO 18092 (or ISO 14443, type A), in order to ensure maximum energy transference, the carrier is modulated using ASK (with 100% of the modulation index) and Power transfer can, therefore, be maximized by increasing the quality factor of the antennas or increasing the coupling that is a function of the distance between the antennas. However, a high Q factor would lead to limited bandwidth (see Figure 2) and long-time constants, causing severe distortion in the modulated signal (see Figure 4).
To ensure that the communication works properly, the maximum value of the Q factor of the initiator antenna must be such that the bandwidth B (at −3 dB), which is equal to f c /Q, is at least capable of channeling all the frequencies contained in the spectrum of the signal modulating the carrier frequency. The bandwidth of the forward link is the bandwidth of the modulation sidebands of the carrier and is dependent on the modulation scheme used by the reader. In the worst-case scenario (corresponding to the maximum value, Q 1max , of the initiator antenna circuit), the fundamental bit rate of the square-wave signals (with a cyclic ratio of 50% in the case of non-return-to-zero (NRZ) bit coding) of that digital data stream must, therefore, be at least equal to half of the bandwidth B of the tuned circuit. The result is that Q 1max is limited to f c /(2 × bit rate). Unfortunately, for NFC uplink ISO 18092 (or ISO 14443, type A), in order to ensure maximum energy transference, the carrier is modulated using ASK (with 100% of the modulation index) and modified Miller coding. In this modified Miller bit coding, a pause on the carrier frequency of duration, T p , is made (see the top of Figure 4). This pause, T p , is equivalent to the transmission of a frequency whose period is 2T p , which is an equivalent bit rate of (1/2T p ). Another interpretation in the time domain can be made. Figure 4 shows the amplitude envelope that decreases exponentially with a time constant, τ = Q/πf c . Assuming that the envelope will vanish after a few time constants, the reader Q factor is limited by Equation (3) [31].
According to NFC forum standard ISO 14443, the quality factor is limited to 40 (35 considering design tolerances) at 106 kbit/s bit-rate transfers [31]. For applications that use NFC IP2-ISO 21481 with the authorized use of ISO 15693 and NFC-V targets, whatever the bit rate, the shortest time present in the uplink communication protocol is a "pause" lasting T p = 9.44 µs. In this case, Q 1max = 128, which generally can be reduced to Q 1max usable = 100 assuming design tolerances. Generally, these values of Q 1max usable are not difficult to obtain and are easy to reduce using serial resistors. Figure 4). This pause, Tp, is equivalent to the transmission of a frequency whose period is 2Tp, which is an equivalent bit rate of (1/2Tp). Another interpretation in the time domain can be made. Figure 4 shows the amplitude envelope that decreases exponentially with a time constant, τ = Q/πfc. Assuming that the envelope will vanish after a few time constants, the reader Q factor is limited by Equation (3) [31].
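As a rough check of these limits, the short sketch below only evaluates the relations quoted above (B = f c /Q, an equivalent bit rate of 1/(2T p ) for a carrier pause of duration T p , and Q 1max = f c /(2 × bit rate)); it is an illustration of the arithmetic rather than part of any standard.

# Rough estimate of the maximum reader-antenna Q factor from the bandwidth
# constraint B = fc/Q >= 2 x bit rate, i.e., Q1max = fc / (2 x bit rate).
# For pause-based coding (modified Miller, ISO 15693), the shortest pause Tp
# is treated as an equivalent bit rate of 1/(2*Tp), as described in the text.

FC = 13.56e6  # carrier frequency (Hz)

def q1max_from_bitrate(bit_rate_hz: float) -> float:
    """Maximum initiator Q for a given fundamental bit rate."""
    return FC / (2.0 * bit_rate_hz)

def q1max_from_pause(t_pause_s: float) -> float:
    """Maximum initiator Q when the limiting feature is a carrier pause Tp."""
    equivalent_bit_rate = 1.0 / (2.0 * t_pause_s)
    return q1max_from_bitrate(equivalent_bit_rate)

if __name__ == "__main__":
    # ISO 15693 / NFC-V: shortest pause Tp = 9.44 us -> Q1max ~ 128
    print(round(q1max_from_pause(9.44e-6)))
    # ISO 14443 type A at 106 kbit/s with the plain bit-rate rule -> ~64;
    # the actual pause timing of the standard tightens this to the ~40 quoted above.
    print(round(q1max_from_bitrate(106e3)))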
Reverse Link
The green line in Figure 2 shows the magnitude of the frequency response for a low Q reader, while the blue line shows the magnitude for a high Q reader. We can see that, as the Q factor increases, bandwidth decreases and attenuation increases at the subcarrier frequencies. The return signal, therefore, becomes smaller due to increased attenuation. In this case, the return signal power (P b ) is smaller than the reader sensitivity (S min,reader ) and the reader cannot decode the load modulation. This condition in the reverse link can be expressed mathematically as Equation (4), where G T (f sub ) is the system transducer gain at the subcarrier frequency, and P m is the modulating power (P m = (m 2 /4)·P d ), which depends on the modulating factor, m. Typically, reader sensitivity is 110 dB below the level of the transmitter carrier signal (S min,reader = −110 dBc) [1]. The read range can be limited by the forward link (Equation (1)) or by the reverse link (Equation (4)). In both cases, transducer gain is a function of the coupling coefficient between the two antennas, k. The coupling depends on the design, shape, area, materials, and distance of the antennas. The reader design may be different for different mobile devices. Differences in the read range are, therefore, expected depending on the coupling of each reader. The loaded quality factor of the tag is also not constant because the input impedance of the IC is nonlinear. For short distances, the IC receives high power, which decreases the load resistance and quality factor. The quality factor of the tag antenna is, therefore, not adjusted with external resistors, because it depends on the distance, which also simplifies the tag.
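As an illustration of this budget (a minimal sketch rather than the article's Equation (4): it only assumes that the sideband power relative to the carrier is P m /P d = m 2 /4 and compares the received sideband level, in dBc, against a −110 dBc sensitivity; the transducer gain and modulation factor used are placeholder values):

import math

# Reverse-link (load modulation) budget sketch: the sideband received by the
# reader must stay above its sensitivity, expressed here relative to the
# transmitted carrier (dBc). Numeric values are illustrative placeholders.

S_MIN_READER_DBC = -110.0  # typical reader sensitivity quoted in the text

def sideband_level_dbc(g_t_sub_db: float, m: float) -> float:
    """Sideband level relative to the transmitted carrier.

    g_t_sub_db: system transducer gain at the subcarrier frequency (dB),
                which depends on the coupling k and therefore on distance.
    m:          load-modulation factor (Pm/Pd = m**2 / 4).
    """
    p_m_dbc = 10.0 * math.log10(m ** 2 / 4.0)
    return g_t_sub_db + p_m_dbc

def reverse_link_ok(g_t_sub_db: float, m: float) -> bool:
    return sideband_level_dbc(g_t_sub_db, m) >= S_MIN_READER_DBC

if __name__ == "__main__":
    # Example: weak coupling (-75 dB transducer gain) and m = 0.1
    print(sideband_level_dbc(-75.0, 0.1))  # about -101 dBc
    print(reverse_link_ok(-75.0, 0.1))     # True: still decodable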
Tag Antenna Design Considerations
In order not to decrease the voltage that reaches the IC, a matching network (which would introduce a voltage divider) is not used in the tag, unlike in the reader. Several studies [32,33] demonstrated that, for example, introducing a series capacitance for matching increases the backscattered modulated level but reduces the energy that reaches the IC. The tag design, therefore, consists of the design of the antenna and the adjustment of the tag resonance frequency with no matching network. From a system perspective, the analog RF performance of a batteryless transponder can be considered using a simplified equivalent circuit (shown in Figure 3b), where the IC chip is modeled as a parallel connection of a resistance (R IC ) and chip capacitances (C IC ). The tag's resonance frequency must be tuned to the central frequency of operation or slightly shifted to a higher frequency to avoid detuning caused by the presence of metallic materials or the reader's own loop. The tag's resonance frequency is approximately calculated using Equation (5).
f r = 1/(2π√(L a (C IC + C p + C tuning ))) (5)

where L a is the tag's antenna inductance, C IC is the internal IC capacitance, C p is the layout parasitic capacitance (which includes the antenna capacitance and the parasitic capacitance due to the interconnections), and C tuning is the capacitor used to adjust the resonance frequency to the operation frequency f c (13.56 MHz). The antenna inductance, L a , can be calculated from compact analytical formulas [34], from numerical methods [35], or using full-wave electromagnetic simulators.
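A minimal numerical sketch of Equation (5) follows; the inductance, chip capacitance, and parasitic capacitance used are illustrative assumptions, not measured values.

import math

FC = 13.56e6  # operation frequency (Hz)

def resonance_frequency(l_a: float, c_total: float) -> float:
    """f_r = 1/(2*pi*sqrt(L_a*C_total)), Equation (5), with C_total = C_IC + C_p + C_tuning."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_a * c_total))

def tuning_capacitance(l_a: float, c_ic: float, c_p: float, f_target: float = FC) -> float:
    """Capacitance to add so that the tag resonates at f_target."""
    c_total = 1.0 / ((2.0 * math.pi * f_target) ** 2 * l_a)
    return c_total - c_ic - c_p

if __name__ == "__main__":
    # Illustrative values: 3.3 uH antenna, 28.5 pF chip capacitance, 3 pF layout parasitics
    l_a, c_ic, c_p = 3.3e-6, 28.5e-12, 3.0e-12
    c_tun = tuning_capacitance(l_a, c_ic, c_p)
    print(f"C_tuning ~ {c_tun * 1e12:.1f} pF")
    print(f"f_r = {resonance_frequency(l_a, c_ic + c_p + c_tun) / 1e6:.2f} MHz")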
The antenna losses are the result of the conductor DC resistance and of alternating current (AC) losses due to the skin effect. Depending on the substrate material, additional (e.g., dielectric) losses may also be significant. The parasitic capacitance of planar loop coils is a function of the conductor area, the gap between turns, and the permittivity of the dielectric substrate. It can generally be extracted from the antenna's resonance frequency. The losses in the dielectric are modeled with a resistance in parallel with the capacitor, R pa . This resistance can be neglected in antennas printed on low-loss substrates or in the air.
A key decision when designing the antenna is the size of the loop. Although this is restricted by the application, this decision plays a key role in the read range. To investigate the importance of the size of the antenna, the coupling between two circular loop antennas is considered as an example. Using Neumann's formula, mutual inductance M can be found as a function of the complete elliptic integrals (e.g., implemented in MATLAB using the ellipke function). Analytical expressions can be derived for this case [36]. Figure 5a depicts the coupling factor k between two loop antennas of radius r 1 and r 2 as a function of the ratio between the two radii for different axis distances x. It can be derived that there is an optimum ratio r 2 /r 1 that depends on the distance (see Figure 5b). The most widespread NFC readers are those integrated into smartphones. As the main application is for making payments, mobile antennas are often optimized to read payment cards (standard size = 85.60 × 53.98 mm). The reader radius r 1 is, therefore, on the order of 2-2.5 cm. The typical read range is on the order of 1 cm. Figure 5 shows that the optimum case is when the two loop antennas have the same size (r 2 ≈ r 1 ). Although this conclusion is derived for the special case of circular loop antennas, the result can be extended to other shapes.
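As a small sketch of this calculation, the snippet below evaluates Maxwell's elliptic-integral formula for the mutual inductance of two coaxial circular loops (a Python/scipy counterpart of the MATLAB ellipke approach mentioned above); the single-turn assumption and the radii and distances used are illustrative and are not the data behind Figure 5.

import numpy as np
from scipy.special import ellipk, ellipe

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def mutual_inductance(r1: float, r2: float, x: float) -> float:
    """Mutual inductance of two coaxial circular loops of radii r1, r2 (m) separated by x (m).
    The coupling factor then follows as k = M / sqrt(L1*L2)."""
    m = 4.0 * r1 * r2 / ((r1 + r2) ** 2 + x ** 2)  # elliptic parameter m = k^2
    k = np.sqrt(m)
    return MU0 * np.sqrt(r1 * r2) * ((2.0 / k - k) * ellipk(m) - (2.0 / k) * ellipe(m))

if __name__ == "__main__":
    # Illustrative case: reader loop r1 = 2.25 cm, tag loop swept, axis distance x = 1 cm
    r1, x = 0.0225, 0.01
    for r2 in (0.005, 0.01, 0.0225, 0.04):
        print(f"r2 = {r2 * 100:.1f} cm -> M = {mutual_inductance(r1, r2, x) * 1e9:.2f} nH")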
Modern mobiles often use metallic cases; thus, a special design is used to avoid any losses introduced by the metallic parts or batteries [37][38][39]. Some mobiles use ferrites to avoid this problem [40,41]. Another factor that can reduce wireless power transfer is the detuning of the tag antenna when the mobile's metallic parts are close to the tag. An example of a tag antenna was presented in order to study the effects of detuning [42]. Here, the antenna was designed using a two-dimensional (2D) full-wave simulator (Keysight-Momentum). To enable reading with a mobile NFC reader, a 50 × 50 mm loop antenna with six turns on a 0.8-mm-thick FR4 was chosen. The trace had a width of 0.7 mm and the gap between traces was 1 mm. By measuring the DC resistance, inductance, and resistance at the resonance frequency, the antenna model shown in Figure 3b could be obtained [42]. Figure 6 shows the inductance and quality factor of the tag antenna for several distances to a ground plane that simulates the mobile case. Agreement between the simulations and measurements is good. The measurements were taken by measuring the parameter S 11 of a test antenna connected by means of a SubMiniature version A (SMA) connector to a vector network analyzer (VNA). The antenna impedance (Z) can be obtained from parameter S 11 as a function of frequency for different antenna-to-metal distances. The antenna quality factor is obtained from Q = Im(Z)/Re(Z) at the operation frequency. An important reduction in inductance due to the induced image currents and an increase in losses due to the metal are observed.

The resonance frequency of the tag can be adjusted using Equation (5) if the antenna inductance, IC capacitance, and parasitic antenna capacitance are known. A practical procedure for adjusting the tuning capacitance (C tuning in Figure 3) can be conducted with a vector network analyzer. A test antenna (another prototype of the same antenna or a simple wire loop soldered to an SMA connector) is connected to port 1, and the S 11 parameter is measured with the tag close to the mobile. The distance between the test antenna and the tag must be large enough (e.g., 1 cm) to avoid coupling between the antennas. After that, the tuning capacitance can be changed to tune the resonance frequency to the frequency of operation. Figure 7 shows the S 11 parameter measured for a tag adjusted in the air with C tuning = 22 pF and detuned due to the proximity of a mobile with a metallic case (model Huawei G8) at several distances between 1 and 12 mm. In order to solve this effect, the tag can be tuned again to 13.56 MHz for the desired distance by increasing the tuning capacitance. Due to the power limitation of a standard VNA (often 20 dBm), the excitation field in this test is smaller than for a real reader, and the IC impedance (especially the equivalent resistance) is, therefore, higher than in a real situation. A modified VNA set-up with an external amplifier and a reflectometer was used in Reference [43] to characterize the tag under similar power conditions to the actual operation with a reader. The resistance of the chip typically decreases from R IC = 5 kΩ to 1 kΩ under high power excitation when the tag is very close to the reader [43]. The quality factor of the whole transponder, Q T , at the resonance frequency (which should not be confused with the antenna's Q factor) can be derived from the parallel equivalent circuit of the antenna (Equation (6)), where R T is the tag's total equivalent resistance, given by the parallel combination of R IC and the equivalent parallel antenna resistance R pa (Equation (7)). R pa , in turn, can be derived from the series antenna resistance (Equation (8)).
Figure 7. Measured S 11 of the test antenna as a function of frequency for different distances between the tag and the mobile [42].
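A numeric sketch of these relations follows (using the standard series-to-parallel conversion of the antenna losses; the inductance and series resistance are illustrative assumptions, while the chip resistance values are the 5 kΩ and 1 kΩ figures quoted above):

import math

FC = 13.56e6

def parallel(r1: float, r2: float) -> float:
    return r1 * r2 / (r1 + r2)

def tag_quality_factor(l_a: float, r_s: float, r_ic: float, f_r: float = FC) -> float:
    """Loaded tag Q from the parallel equivalent circuit:
    R_pa ~ (2*pi*f_r*L_a)**2 / R_s (series-to-parallel conversion of the antenna losses),
    R_T = R_IC || R_pa, and Q_T = R_T / (2*pi*f_r*L_a)."""
    x_l = 2.0 * math.pi * f_r * l_a
    r_pa = x_l ** 2 / r_s
    r_t = parallel(r_ic, r_pa)
    return r_t / x_l

if __name__ == "__main__":
    l_a, r_s = 3.3e-6, 1.8   # illustrative antenna inductance and series loss resistance
    for r_ic in (5e3, 1e3):  # chip resistance drops from ~5 kOhm to ~1 kOhm at high power
        print(f"R_IC = {r_ic / 1e3:.0f} kOhm -> Q_T = {tag_quality_factor(l_a, r_s, r_ic):.1f}")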
The tag quality factor, therefore, decreases and the tag bandwidth (BW) increases with the increase in distance. When the reader-tag range decreases, the input voltage increases, and the rectifier gradually becomes conductive. On the other hand, the voltage-dependent parasitic capacitance increases by about 1.5% [43]. Therefore, the resonance tends to decrease due to the variation of the chip capacitance. Moreover, taking into account the presence of metal (smartphone case), the variation in the resonance frequency is masked since the inductance is significantly reduced due to the proximity to the metal. Consequently, an overall increase in the resonance frequency (see Figure 7) is produced. This result can be interpreted by analyzing Equation (5). However, because R pa >> R IC , the tag Q factor is mainly fixed by the chip resistance. Figure 8 shows the quality factor and the BW as a function of the tag-to-mobile distance. We can see that the BW is higher than that required for ISO15693 (968 kHz) and that shape distortion is due more to the reader than to the tag.
Tag ICs with Energy Harvesting

Table 2 lists several NFC IC representatives with energy-harvesting capabilities. Some of these support ISO14443-3 or ISO15693. The table includes the maximum sink current (usable for external electronic devices such as a microcontroller or sensors) and the nominal voltage (for low current consumption). The level of energy-harvesting voltage at the output is generated by the rectification of an RF signal in a non-regulated DC voltage that is only limited by the RF input clamping circuit. The maximum sink current is a function of the magnetic field present at the input. This value is obtained for the highest magnetic field; however, in most ICs, the voltage at this point decreases. Typically, currents around 5 mA for output voltages between 2 and 3 V and magnetic fields of the order of 3.5-5 A/m are obtained (e.g., 6 mA current and 1.7 V at 3.5 A/m is obtained for the M24LR-E-R or 4 mA and 3 V at 5 A/m for the ST25DV). Silicon Craft recently reported an IC with up to 10 mA (for the maximum field of 7.5 A/m) integrating an analog-to-digital converter (ADC) oriented to chemical sensors. Each series has a different memory size from 4 to 64 kbit and can be connected to other devices or microcontrollers using the I 2 C bus or serial peripheral interface (SPI). Although most of them are designed to be connected to a microcontroller, the MLX90129 from Melexis, the SL13 from AMS, and the SIC43x from Silicon Craft integrate an analog/digital (A/D) interface for autonomous sensor acquisition. Another special case is model RF430FRL152H from TI, which integrates a low-power microcontroller MSP430 and a 14-bit digital signal A/D interface. For a well-designed batteryless tag, the main restriction in the read range is the power-up condition (Equation (1)) compared with the load modulation sideband amplitude (Equation (4)). Equation (1) can be expressed in terms of the magnetic field. For correct RF to DC conversion, the average magnetic field (H av ) (Equation (9)) received by the NFC IC must be above a threshold H-field (H min ). If the magnetic field is above that threshold, the harvesting voltage output can be below the desired value for the required current load. H av depends on both the reader and tag antennas and the coupling, and therefore, on the distance between the reader and the tag. H av is measured by the magnetic antenna factor, AF (Equation (10)), where A is the loop area, N is the number of loops, Z 0 is the reference impedance (50 Ω), and Z in is the input impedance of the antenna measured with the VNA. The root-mean-square voltage (V RMS ) is obtained from the power P measured with a spectrum analyzer (V RMS = (Z 0 ·P) 1/2 ).

A procedure for calibrating the antenna factor of the tag antenna is described in Reference [44].
The minimum H-field as a function of tag resonance frequency can be described using Equation (11) [43].
where f r is the resonance frequency of the tag, Q T is the total quality factor of the tag given by Equation (6), and U min is the minimum voltage required for the tag operation, which depends on the chip IC design and technology used. Equation (11) shows the importance of tuning the tag resonance frequency to the operation frequency (13.56 MHz) and of achieving the highest possible tag quality factor in order to obtain a larger read range. In energy-harvesting tags, the maximum sink current for the sensor depends on the magnetic field. In order to increase this value, the tag is located at a shorter distance than a conventional NFC tag, because extra input power is required for the external devices. Therefore, the loading effect between the tag and reader coils becomes significant when the distance decreases [43,45], resulting in detuning. Moreover, as shown earlier (e.g., Figure 6), the presence of metal under the tag can decrease the antenna inductance and detune the tag, thus increasing H min . Other strategies for reducing H min involve increasing the tag antenna area or the number of turns. However, an increase in tag area forces an increase in reader antenna area in order not to degrade the coupling factor. In order to reduce the loading effect, it is important to choose the inductance value carefully. Small inductance values, for instance, yield lower voltages across the inductance and require a larger Q, while large inductance values require a lower effective Q (a larger number of turns reduces the current in the coil, so the resulting loading effect is lower). However, from Equation (5), a higher inductance results in smaller capacitances; therefore, the tolerances due to the layout fabrication and tuning capacitance must be reduced. In fact, a lower inductance (and a larger capacitance, to give the same resonance frequency) allows higher chip currents to be achieved, at the expense of increased loading and detuning of the reader [43].
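A small numeric illustration of this trade-off follows; the inductance values and the +1 pF error are assumptions chosen only to show the scaling implied by Equation (5).

import math

FC = 13.56e6

def required_capacitance(l_a: float, f_target: float = FC) -> float:
    """Total capacitance needed to resonate L_a at f_target (Equation (5))."""
    return 1.0 / ((2.0 * math.pi * f_target) ** 2 * l_a)

def detuning_from_error(l_a: float, delta_c: float) -> float:
    """Fractional shift of the resonance frequency caused by a capacitance error delta_c."""
    c_nom = required_capacitance(l_a)
    f_shifted = 1.0 / (2.0 * math.pi * math.sqrt(l_a * (c_nom + delta_c)))
    return (f_shifted - FC) / FC

if __name__ == "__main__":
    # The same +1 pF layout/assembly error detunes a high-inductance tag far more,
    # because its nominal capacitance is smaller.
    for l_a in (1e-6, 4e-6):
        c = required_capacitance(l_a)
        shift = detuning_from_error(l_a, 1e-12)
        print(f"L_a = {l_a * 1e6:.0f} uH -> C = {c * 1e12:.0f} pF, +1 pF shifts f_r by {shift * 100:.2f}%")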
Unfortunately, H min depends on parameters that are often not provided by the IC manufacturer, such as U min , harvesting power consumption, or other parameters that depend on the antenna and chip impedance, such as Q T . H min is independent of the reader used.
The experiment carried out in Reference [42] showed how the minimum H-field can be obtained for a tag design with the target current consumption. Figure 9 compares the measured H av and the harvesting output voltage generated by two mobiles (Huawei G8 and Xiaomi Mi Note 2) as a function of the tag-to-mobile distance. Although mobile Model 1 generated higher power and the read range was wider, the threshold value was approximately the same, H min = 1.1 A RMS /m. In this experiment, the NFC IC (M24LR04E-R) was loaded with the microcontroller (an ATtiny85 AVR) and the sensors, in order to take into account the nominal current consumption (about 900 µA) under normal operation. It can be seen that the harvested output remained almost constant throughout the read range before the IC deactivated this output.

Figure 9. Measured average magnetic field (top) and harvesting output voltage (bottom) as a function of the tag-to-reader distance for two mobile models. We can see that, once the magnetic field goes below the threshold, the voltage output falls to zero.
If the application requires an NFC antenna to be very close to a metal plate or printed circuit board (PCB) electronics, a thin ferrite foil can help isolate the antenna from the metal [46][47][48]. In the case of the reader, it can also reduce interference from the mobile circuits. Ferrite material can conduct the magnetic flux multiple times better than free air. The effect of the ferrite increases the antenna inductance by a factor µ ref (µ ref is defined analogously to the relative effective permittivity used to account for the increase of capacitance on an inhomogeneous transmission line). The analysis performed earlier is valid if µ 0 is replaced by µ 0 µ ref in Equation (11). The change in inductance leads to detuning of the tag in comparison with the case of air; therefore, the tuning capacitance must be adjusted. Two types of ferrite foils are available on the market: polymer absorber sheets and sintered ferrite sheets. The former has higher losses, and its effective permeability Re(µ r ) is on the order of 20-60. The latter achieves a higher Re(µ r ), on the order of 100-190, and lower Im(µ r ) losses, typically 5-10 at 13.56 MHz (e.g., MHLL12060-000 from Laird). Ferrite foils can have a tolerance of ±15-20% in µ r , which translates into a tolerance in the antenna inductance. From Equation (11), after taking into account the correction in the effective magnetic permeability, we should expect H min at the resonance frequency to remain unaltered compared with the antenna in the air. However, the ferrite losses slightly reduce the total Q factor Q T , and H min (f r ) with ferrite is slightly higher (roughly 15%) than in the case of air without ferrites [48]. Ferrite magnetic permeability is a function of temperature, and its specific conductance has a significant temperature gradient. Inductance, quality factor, and resonance frequency are, therefore, temperature-dependent, and H min changes with temperature. This temperature dependence must be considered in industrial or automotive applications.
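A quick numeric check of the retuning described above (a sketch that only assumes the inductance scales by µ ref ; the starting inductance, capacitance, and µ ref value are illustrative):

import math

FC = 13.56e6

def resonance(l: float, c: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(l * c))

if __name__ == "__main__":
    l_air, c_total = 2.0e-6, 68.9e-12  # tuned to ~13.56 MHz in air (illustrative values)
    mu_ref = 1.4                       # assumed effective increase due to the ferrite foil
    l_ferrite = mu_ref * l_air
    print(f"air:     f_r = {resonance(l_air, c_total) / 1e6:.2f} MHz")
    print(f"ferrite: f_r = {resonance(l_ferrite, c_total) / 1e6:.2f} MHz (detuned)")
    # Retune by reducing the total capacitance by the same factor
    print(f"retuned: f_r = {resonance(l_ferrite, c_total / mu_ref) / 1e6:.2f} MHz")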
Since the direction of the magnetic field is almost parallel to the metal surface, a tag must be specially designed to obtain enough flux through the tag antenna coil surface. Figure 10a,c show that the magnetic field is parallel to the metal surface and that the magnetic flux is concentrated in the proximity of the coil [46], whereas the magnetic flux is zero in the center of the coil because of the cancelation of the field due to the image currents with the opposite sign. The field boundary conditions imposed by the ferrite make the magnetic field almost perpendicular, which is similar to the situation in the case of free space (Figure 10b,d). Reference [48] compared the operating distance between a sticker with ferrite foil composite and air coil in the presence of metal and reported that communication was not achievable with the air coil when the metal distance was less than 1 mm. On the other hand, a communication distance of 30 mm was obtained with a ferrite composite when the EMVCo test bench was used. Communication distance increased as the distance to the metal plate increased. When the metal distance was 10 mm, the air and ferrite composite reached the same communication distance.
NFC for Wearable Applications
One of the most interesting applications for NFC sensors is wearable applications in which the tag is on the body. With wearable devices, the effects of the body on the antenna must be taken into account. At this point, it is important to note that the inductance is not affected by the dielectric substrate and is, therefore, unaltered by the body in applications where the tag is attached to the skin. However, the parasitic capacitance (C p ) increases due to the high permittivity of bodily matter. The tag's resonance frequency must, therefore, take into account the body's presence. The coupling coefficient is also essentially unaltered by the body's presence. The situation is dramatically different at UHF or microwave frequencies, where the high losses and the detuning of the antennas due to the body reduce the efficiency of the antennas, thus noticeably reducing the read range. Therefore, as we describe below, NFC technology is highly compatible with wearable applications. The dielectric losses can be modeled by adding a resistance in parallel to the antenna (R pa in Figure 3b). To quantify the effect of the body on the antenna, several simulations were performed. The same antenna as in Figure 6, printed on a 0.8-mm-thick FR4 substrate, was simulated in the air and on the body. It was assumed that the tag was on the arm, which was simulated using a planar stack of different dielectrics, as shown in Table 3. For the sake of simplicity, the curvature of the arm was ignored. The data for relative permittivity and conductivity were taken from Reference [49]. However, there is a large variation between individuals depending on the water content of their tissues. The results are shown in Table 4. Apart from the increase in capacitance due to the high permittivity of the body, a large deterioration in the quality factor was observed due to the losses. One solution is to isolate the body with a ferrite foil, as in the case of metal tags. A ferrite foil with a thickness of 100 µm (µ r = 120 − j5) and an adhesive layer with a thickness of 100 µm (ε r = 2.0, tanδ = 0.002) was inserted below the antenna. An increase in antenna inductance was observed due to the permeability of the ferrite foil, together with a smooth increase in losses compared to the air. Another improvement when the ferrite foil was inserted was that the design became insensitive to changes between individuals or parts of the body. One drawback, however, is that sintered ferrite sheets are expensive. Another simple solution to mitigate the effects of the body consists of introducing a spacer made with a low-permittivity material (such as a plastic, ε r ≈ 2, or foam, ε r ≈ 1) between the skin and the tag substrate. The separation of the antenna from the body significantly reduces the effects of the latter. In Table 4, simulated results for a spacer of foam (thickness: 1 mm) and plastic (thickness: 1 mm and 2 mm) are shown. In the case of plastic, a double thickness is needed to obtain results close to the foam (or air) case. The low-permittivity spacer reduces the effective permittivity; consequently, the parasitic capacitance decreases and the antenna resonance frequency increases, approaching the values expected for the air case. This solution is often implemented in wristbands [50,51], where the spacer is integrated into the strap, usually made with a biocompatible material such as silicone. In other cases, such as body patches or tattoo tags, the required thickness cannot be accommodated.
The reduction of the tag quality factor Q T due to the higher antenna losses caused by the presence of the body, together with the detuning due to the change of the parasitic antenna capacitance, increases the value of H min . In general, the first consequence is a reduction in the communication range; however, for NFC sensors with energy harvesting, the increase in H min also reduces the maximum output current and the harvested output voltage. Consequently, the sensor may not be powered up correctly. Thus, it is important to correctly adjust the tuning capacitance in energy-harvesting NFC tags (see Table 4). Tags implemented with low inductance values (with the tuning capacitance adjusted for resonance at the operation frequency) present a lower sensitivity to variations in permittivity between persons or body parts. In addition, the use of a spacer helps further increase this tolerance. However, the tag antenna area must remain similar to that of the reader antenna to keep the coupling coefficient as high as possible.
NFC Reader Design Considerations
NFC readers have recently been incorporated into most current mobiles, and NFC-based sensors are, therefore, normally read with these devices. However, certain applications (e.g., industrial ones) require a specific reader. Manufacturers of integrated circuits offer solutions based on a low-cost single-chip reader. Such cases require the reader antenna and the control software of the microcontroller connected to the reader to be designed. To achieve the maximum read distance, maximum power must be transmitted to the antenna; thus, the impedance presented to the transmitter must be the conjugate of its output impedance. For this, a matching network must be designed. In addition, to reduce electromagnetic (EM) interference with other systems, a low-pass filter is inserted after the transmitter to attenuate the harmonics of the transmitted signal. This filter introduces some extra distortion into the signal and increases the bandwidth (or reduces the quality factor). A simple L-matching network is used as the matching network. In practice, the Tx output is usually differential (to enable double the output voltage swing from a single supply voltage). Figure 11a shows a model of a differential reader with an EM interference (EMI) filter (capacitance C 0 , inductor L 0 , and inductor losses R 0 ), the matching network, which consists only of capacitances (C s and C p ), and the antenna model. The antenna is assumed to be an inductive load (the antenna resonance frequency is higher than the operation band). This load impedance often falls within the region on the Smith chart that can be matched with an L-matching network consisting of two capacitors. The output transmitter resistance R out depends on the transmitter's current consumption and, therefore, on the transmitted power, and is generally given by the IC manufacturer. HF capacitors (with C0G or NP0 dielectric) have negligible losses and tight tolerances. The input impedance of the receiver is typically capacitive and is modeled as a capacitance C in . The resistance R x is inserted in order to attenuate the transmitter signal and to avoid receiver saturation. The antenna model can be obtained from electromagnetic simulations or from S 11 measurements with a VNA [27]. The C a capacitance is derived from the antenna's unloaded resonance frequency. The design procedure is described below.
1. Design of the EMI filter: a filter cut-off frequency is chosen between 15 and 20 MHz, given by f cut-off = 1/(2π√(L 0 C 0 )).
2. Adjustment of the maximum quality factor: the second step is to adjust the quality factor to make it equal to Q 1max . In both this step and the design of the matching network, it is assumed as a design criterion that the tag is far enough away for the coupling to be very weak. Tag proximity, therefore, has no influence on the load impedance, and the reader is matched for small couplings, where efficient wireless power transfer is more important. If the tag is close to the reader antenna, the main effect is a reduction in the reader quality factor [52]; this reduction in quality factor leads to an increase in the matching bandwidth. The unloaded quality factor (Q 1u ) is adjusted by adding two series resistances, R s , chosen to set the quality factor to the target value.
3. Design of the matching network: at this point, it is useful to use the single-ended equivalent circuit in Figure 11b to design the L-matching network. The values of C s and C p can be found with the help of the Smith chart or from the matching equations expressed in terms of the load admittance Y L = 1/Z L = G L + jB L and the impedance Z g = R g + jX g at the input of the matching network (see Figure 11b); a numerical sketch of this step is given after this list.
4. Adjustment of the attenuation of the receiver path: the resistance R x controls the attenuation of the signal to the receiver. Usually, this resistance is high and its effect on the design of the matching network is small. It is recommended that this adjustment be made after checking, with an oscilloscope and a low-capacitance probe, that the voltage at the receiver input (R x1 or R x2 ) does not exceed the limit given by the reader manufacturer.
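As a rough numerical sketch of step 3 (a standard shunt-C/series-C L-network calculation; the load impedance and the target resistance used below are illustrative assumptions, not values from any particular reader design):

import math

F0 = 13.56e6
W0 = 2.0 * math.pi * F0

def l_match(z_load: complex, r_target: float):
    """Shunt C_p across the load and series C_s toward the source so that the
    input impedance seen by the transmitter equals the real value r_target.
    Works for inductive loads (Im(z_load) > 0); returns (C_s, C_p) in farads."""
    y_l = 1.0 / z_load
    g_l, b_l = y_l.real, y_l.imag
    # Choose C_p so that Re(1/(Y_L + jwC_p)) = r_target (requires r_target < 1/G_L)
    root = math.sqrt(g_l * (1.0 / r_target - g_l))
    for b_p in (-b_l + root, -b_l - root):
        c_p = b_p / W0
        if c_p <= 0:
            continue
        x1 = (1.0 / complex(g_l, b_l + b_p)).imag
        if x1 > 0:                 # remaining reactance must be inductive
            c_s = 1.0 / (W0 * x1)  # so that a series capacitor can cancel it
            return c_s, c_p
    raise ValueError("no capacitive-only match exists for this load")

if __name__ == "__main__":
    # Illustrative antenna plus series resistor at 13.56 MHz: 5 + j85 ohm, matched to 20 ohm
    c_s, c_p = l_match(complex(5.0, 85.0), 20.0)
    print(f"C_s = {c_s * 1e12:.0f} pF, C_p = {c_p * 1e12:.0f} pF")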
NFC Sensors
The progressive introduction of NFC ICs into the market enables the development of low-cost batteryless portable sensors. Figure 12 shows the number of NFC-enabled mobile devices worldwide between 2012 and 2018 [53,54]. Between 2013 and the end of 2018, worldwide shipments of NFC-enabled cellphones rose by 325%. Market estimations expect that, by 2020, 85% of smartphones will be equipped with NFC. In this section, we review several NFC-based sensors in the literature. Some of these sensors are listed in Table 5. The second column in this table describes the target application or sensor type. The third column shows the NFC IC used in commercial devices or custom IC designs. Although we focused on passive devices, we also included some interesting semi-passive (battery-assisted passive tags) or data logger implementations. Several comments are also included. The references were sorted based on application.
Table 5. NFC-based sensors reported in the literature (reference, target application or sensor type, NFC IC, passive operation, and comments).
One pioneering work is the NFC-WISP platform [29]. In this case, rectification is done externally using a full-wave rectifier with discrete diodes, and the ISO-14443 protocol is completely implemented in a low-power microcontroller (TI MSP430). Optionally, an E-ink screen can be used to show the measurements. In this case, a thin-film battery or supercapacitor must be used in order to provide the peak current for the E-ink screen. Temperature measurement for cold-chain data logging is shown as an RFID data logger, which provides temperature history for personnel without post-processing via the E-ink screen. Reference [55] reports a system inspired by the NFC-WISP design for monitoring the temperature of newborns in an incubator. The tag was positioned on the mattress inside the incubator and the reader (TI TRF7970A) was placed below the mattress tray. Temperature was recorded periodically.
Another implementation for cold-chain temperature monitoring and quality is found in Reference [56], which developed a critical temperature indicator (CTI) based on a solvent melting point. The smart sensor combines irreversible visual color changes and RFID. A Melexis MLX90129 was used to measure the change in resistance of multi-walled carbon nanotubes (MWCNTs) connected to two copper wires. The proposed CTI smart sensor integrates the microfluidic CTI to an RFID tag in order to remotely detect the melting of the solvent once the critical temperature is reached. The CTI smart sensor has a fast response to the critical temperature of 18-19 °C.
The medical market is especially poised to take advantage of NFC thanks to smart sensors that can measure the physical conditions of patients and wirelessly transmit the data to a nearby smartphone [57]. The measuring of vital signs for personalized healthcare is generating substantial interest from ambient assisted living solutions [58,59]. NFCs provide an intuitive user interface that is easy for patients to use. The latency between touching the device and displaying the result is typically less than one second. The main properties of these sensors are that they are wearable, low-cost, and green. Moreover, the tags can be disposed of in order to avoid contamination between patients. Smartphones enabled with NFC technology facilitate integration with cloud services because the same app that is used for sensor data reading can upload the data to the cloud using a mobile or WiFi internet connection. The continuous monitoring of medical parameters can help improve the diagnosis and follow-up of several diseases, while also reducing personal attention. In the long term, incorporating these sensors can help reduce the cost of healthcare in societies with aging populations. Another potential application is the development of devices for fast screening before deciding whether more expensive analysis is required.
One example of the above is the design of biopatches for body temperature monitoring (see Reference [60]). Here, the sensor was based on a thermistor and a Wheatstone bridge, where the internal ADC of the RF430FRL152H NFC IC from Texas Instruments was used. The biopatch can be used as a data logger if a small 1.5-V battery (with 30 days of autonomy) is used or in passive mode for instant temperature measurement. Analog inputs are used to read the temperature sensor, and the values read by the ADC are stored in the ferroelectric random-access memory (FRAM) to be downloaded when required. The timer is responsible for managing the time intervals. Another example of a biopatch for measuring temperature or light intensity, implemented in the form of an adhesive E-tattoo, was presented in Reference [61]. An RF430FRL152H NFC IC drives a light-emitting diode (LED) and a phototransistor that is able to detect backscattered or ambient light. Analog signals from sensors such as the thermistor and phototransistor are digitized with the ADC inside the NFC chip. These data are then transmitted by NFC. A Cu foil is laminated on thermal release tape (TRT). The circuit is made with a mechanical cutter plotter and is transferred onto water-soluble tape (WST) backed by Kapton tape. The NFC chip and the components are attached with solder paste. By dissolving the WST with water droplets, the whole circuit is transferred to the target substrate, which is water-vapor-permeable Tegaderm adhesive. Finally, another Tegaderm layer is used to provide protection from the skin.
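As a generic illustration of this kind of thermistor readout (a sketch only: the beta-model constants, divider resistor, and ADC scaling below are hypothetical and are not the parameters of the referenced biopatch, which uses a Wheatstone bridge):

import math

# Hypothetical NTC thermistor read through a resistive divider and an ADC.
BETA = 3950.0            # beta constant (K), assumed
R0, T0 = 10e3, 298.15    # 10 kOhm at 25 degrees C, assumed
R_SERIES = 10e3          # divider resistor, assumed
ADC_FULL_SCALE = 16383   # 14-bit converter, as in the RF430FRL152H-based design

def adc_to_resistance(code: int) -> float:
    """Thermistor resistance from the ADC code of a simple divider (Vout = Vref*Rt/(Rt+Rs))."""
    ratio = code / ADC_FULL_SCALE
    return R_SERIES * ratio / (1.0 - ratio)

def resistance_to_celsius(r_t: float) -> float:
    """Beta-parameter model: 1/T = 1/T0 + (1/BETA)*ln(R/R0)."""
    inv_t = 1.0 / T0 + math.log(r_t / R0) / BETA
    return 1.0 / inv_t - 273.15

if __name__ == "__main__":
    code = 7500  # example raw reading
    r_t = adc_to_resistance(code)
    print(f"R_t = {r_t:.0f} Ohm -> T = {resistance_to_celsius(r_t):.1f} C")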
A biopatch for continuously monitoring hydration was reported in Reference [62]. The sensor, which measures the concentration of NaCl in sweat, was based on Melexis MLX90129, which is used for the potentiometric sensing of electrolytes in sweat, reading surface temperature, and sensing the potential difference between two electrodes. The flexible printed circuit board (PCB) was built from Dupont Pyralux. Double-sided medical adhesive tape is used below the patch, while, above the patch, a medical textile covering is added to protect it and improve visual aesthetics.
A non-invasive, flexible, and wireless pH-sensing system for monitoring wound healing and identifying the possibility of early-stage infection was reported in Reference [63]. Low pH is beneficial since it helps counteract microbial colonization from many human-pathogenic microorganisms that require a more alkaline environment for growth. The sensors consist of a working electrode and a reference electrode. The electrical potential across the two electrodes is a function of the concentration of H + ions in the solution. The sensor is interfaced to an NFC SL13 chip from AMS with a buffer amplifier (AD8603, Analog Devices Inc., Norwood, MA, USA). The pH sensor exhibits a linear sensitivity of −55 mV/pH and stable performance under mechanical bending in a pH range of 4 to 10.
A low-power complementary metal-oxide semiconductor (CMOS) ion-sensitive field-effect transistor (ISFET) array for pH sensing was inductively powered using NFC in Reference [64]. Each pixel in a 3 × 3 array contains an ISFET operating in weak inversion that detects changes in pH as a current. The output for all pixels is then averaged, and the resulting signal modulates the frequency of a ring oscillator. This provides simple analog-to-digital conversion suitable for reading and transmitting. The application-specific integrated circuit (ASIC) power consumption was 6 µW (at a 1.2 V supply). The SIC4310 from Silicon Craft NFC IC was used in the study.
The literature contains specialized ASIC designs that integrate sensing, signal processing, energy harvesting, and NFC communication. A batteryless wearable electrocardiogram (ECG) monitoring system-in-a-patch assembled by biocompatible and pliable silicon-in-parylene technology was reported in Reference [65]. The system is able to process the acquired ECG signal and detect arrhythmia using a built-in digital signal processor (DSP). An NFC communication system is used to interface with the external reader. The system requires an additional power source: the energy harvested from a 5 × 5 cm 2 thermoelectric generator (TEG) module (60 W of output power) can be used to power the system and can be stored in a supercapacitor.
An integrated system-on-chip (SoC) for long-term implantable continuous glucose monitoring was reported in Reference [66]. This integrates an amperometric glucose sensor interface, an NFC wireless front-end, and a fully digital switched-mode power management unit for supply regulation and on-board battery charging. It uses the 13.56-MHz (ISM) band to harvest energy and backscatter data to an NFC reader. However, it does not use a standardized protocol, and custom ASK demodulator circuits extract the modulating frequency that encodes the glucose concentration.
Another ASIC for a wireless fully implantable glucose sensor was reported in Reference [67]. In this case, the NFC was based on ISO15693 for passive wireless readout through an NFC interface. The IC is used as the core interface to a fluorescent, glucose transducer to enable a fully implantable sensor-based continuous glucose monitoring system. The whole system (photodiodes, transimpedance amplifier (TIA), ADC, electrically erasable programmable read-only memory (EEPROM), and NFC), except for an external LED, is integrated into the IC.
Chemical gas sensors based on NFC technology were recently reported in the literature [68,69]. Portable gas sensors are used for point-of-care disease diagnosis, detecting explosives and dangerous chemical agents, indicating food ripening, and monitoring environmental pollution [69]. Reference [68] presents a fully passive flexible multigas-sensing tag for determining oxygen, carbon dioxide, ammonia, and relative humidity, readable by a smartphone. The tag is based on NFC technology for energy harvesting and data transmission to a smartphone. The gas sensors show an optical response that is read through high-resolution digital color detectors. A white LED is used as the common optical excitation source for all sensors. The responses of the sensors were calibrated and fitted to simple functions, thus allowing fast prediction of the gas concentration. Another gas sensor detection system was presented in Reference [69]. Sensitized single-walled carbon nanotubes (SWCNTs), whose resistance changes with gas concentration (NH 3 ), were inserted in series with the NFC IC. The effect of the gas causes the tag to detune. When the gas concentration is high, power transfer is insufficient for effective smartphone-tag communication and the tag is unreadable.
An NFC bicycle tire-pressure measurement system (BTPMS) was presented in Reference [70]. The sensor comprises an ASIC that integrates an on-chip capacitive pressure and temperature sensor, an RFID interface for HF/NFC, and EEPROM. The IC is soldered with wire-bonding to a FR4 PCB with the antenna. The tag is incorporated into the bicycle tire. A marker on the tire's exterior indicates the position of the NFC BTPMS, and therefore, the NFC readable area. The pressure can be read using an ISO 14443 RFID-compatible reader. As the sensor presents linear dependence, a two-point calibration technique is sufficient for sensor calibration.
Low-cost monitoring systems are in demand for irrigation control at home, in greenhouses, or at garden centers. A low-cost, batteryless, NFC-powered device capable of measuring volumetric water content (soil moisture), temperature, and relative humidity was recently presented in Reference [71] (see Figure 13). The tag was based on an ST M24LR04E-R NFC IC connected to a low-cost microcontroller (ATtiny85 from Atmel). The data are shown in a smartphone application or uploaded to the cloud for sharing or storage. The temperature is measured using an I²C temperature sensor (LM75A), while air humidity is detected by reading the analog output of the HIH-5030 humidity sensor from Honeywell. The probe designed for measuring soil volumetric water content relies on a capacitance measurement based on a low-power 555 timer working as an oscillator and a diode detector whose output is measured by the ADC of the microcontroller. The external circuitry requires less than 1 mA at 3 V to operate. A procedure was presented for calibrating the sensor based on a simple expression whose coefficients can be experimentally obtained. Figure 14 shows a measurement taken with the system. This reference shows that conventional low-power sensors can be integrated within the NFC tag for the new generation of IoT devices.

Figure 13. Soil moisture NFC tag being powered and read by a smartphone, which retrieves the sensed data and the tag unique identifier (UID) to identify the species [71].
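The capacitance-to-moisture chain described above can be sketched as follows. The 555 astable relation is the standard textbook expression; the resistor values and the linear moisture fit are placeholders rather than the component values or calibration coefficients of Reference [71]:

```python
def probe_capacitance(freq_hz: float, ra_ohm: float, rb_ohm: float) -> float:
    """Capacitance of the soil probe from the 555 astable frequency.

    Uses the standard 555 astable relation f = 1.44 / ((Ra + 2*Rb) * C).
    The resistor values are placeholders, not those of Reference [71].
    """
    return 1.44 / ((ra_ohm + 2.0 * rb_ohm) * freq_hz)

def volumetric_water_content(c_farad: float, a: float, b: float) -> float:
    """Map capacitance to volumetric water content with a simple fitted law.

    A linear expression vwc = a*C + b stands in for the experimentally
    calibrated expression mentioned in the text; a and b must be obtained
    from a calibration against samples of known moisture.
    """
    return a * c_farad + b

c = probe_capacitance(freq_hz=25e3, ra_ohm=10e3, rb_ohm=47e3)  # example values
print(volumetric_water_content(c, a=4.0e8, b=-0.05))           # placeholder fit
```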
Conclusions
We recently witnessed the rapid deployment of NFC technology driven by contactless payment applications. Although NFC technology was developed over a decade ago, it was not until its massive incorporation into mobile phones that it became popular. This expansion led to the emergence of passive NFC sensors that use the energy-harvesting possibilities provided by this technology. In this paper, we reviewed recent studies found in the literature. We also addressed the design of labels based on energy harvesting, as well as several aspects that can limit the transfer of power between the tag and the reader. The inductance (and the corresponding capacitance) chosen in the tag design plays an important role in the energy harvesting and in the loading effects between the tag and the reader. This interest in passive NFC sensors led manufacturers of integrated circuits to present several ICs with energy harvesting, thus demonstrating the potential market for this technology. We also reviewed some of these ICs and highlighted their main characteristics. A review of the state of the art in batteryless NFC sensors revealed great interest in these sensors for food monitoring and wearable biomedical applications. In these applications, it is essential to eliminate potentially dangerous batteries due to their toxicity and high costs. Compared with other types of wireless sensor technology, such as UHF RFID, an inductive link in the 13.56-MHz band is less sensitive to the presence of the body, which at UHF introduces high losses that limit the read range. Another advantage of NFC devices over UHF devices is the fast return on investment, because a dedicated reader is not required: a smartphone can often be used as a reader. The data can then be uploaded to cloud database services. The ease with which NFC technology is used makes it ideal for use by elderly people in telemedicine and electronic health applications. The greater privacy and security of NFC communications compared to UHF RFID is another point to consider in biomedical and telemedicine applications. If a specific sensor must be designed because one is not commercially available, integrating the NFC electronics and the sensor signal conditioning within an ASIC is justified. In other cases, standard commercially available NFC ICs were used in the NFC-based designs found in the literature. These ICs are often based on the ISO 15693 standard because higher communication distances are obtained compared with ICs based on the ISO 14443 standard.
Author Contributions: Investigation and writing-original draft preparation, A.L. Review and editing, R.V. and D.G.
|
2018-11-15T17:45:15.084Z
|
2018-11-01T00:00:00.000
|
{
"year": 2018,
"sha1": "b291b9d20da8b73e7bdb691560b2931c4a41d4c0",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/sensors/sensors-18-03746/article_deploy/sensors-18-03746.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "098cd9b761b7df004d221324b79c13e89046012d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering",
"Medicine"
]
}
|
267751903
|
pes2o/s2orc
|
v3-fos-license
|
Routine Use of Neck Drains Following Thyroid Operations to Prevent Complications Is No Longer Advisable
Background: The use of cervical drains to prevent cervical hematoma or seroma after thyroidectomy remains a controversial issue. Objective: Identify clinical and surgical risk factors for hematoma or seroma and evaluate the usefulness of routine use of drains following thyroid surgery. Material and methods: The authors conducted a retrospective multicentric study related to consecutive patients submitted to thyroid surgery in seven Portuguese hospitals between January 2018 and December 2020 (n=945). The data collected included the following parameters: age and gender of the patients, anticoagulation or anti-aggregating therapy, histological diagnoses, type of surgery, the presence or absence of postoperative drains, thyroid weight, length of hospital stay, postoperative complications, and reinterventions. In this study, the surgical complications evaluated were limited to the presence of hematoma or seroma. Results: A total of 945 patients who underwent thyroid surgery were included in the study. Twenty-seven patients (2.9%, n=27) experienced complications classified as hematomas or seromas. In the series, significant differences were observed between the two groups according to hypocoagulation or anti-aggregation status (OR=3.62; 95% CI 1.14-11.4) (p=0.001) and the nature of histological diagnosis (toxic vs. non-toxic benign disease) (OR=6.59; 95% CI 1.83-23.7). Hypocoagulation or anti-aggregation status was independently associated with a higher risk of complications. The presence of drains was associated with longer hospitalization periods (p<0.001) and not with a decreased need for reintervention. Conclusion: Cervical hematoma or seroma are rare complications associated with both hypocoagulation and anti-aggregation therapy and with the presence of benign toxic pathology. The use of drains does not decrease the need for reintervention and is even associated with a longer length of hospital stay; therefore, their routine use should not be advised.
Introduction
Operations on the thyroid are the most common surgical procedures performed in the neck [1]. However, the routine use of drains following thyroid surgery is still controversial [2][3][4][5]. Supporters of routine drainage argue that drains will reduce post-operative collections and reduce the likelihood of hematomas or seromas that may cause compression of the airway or become infected [6,7].

On the other hand, authors who do not use drains argue that drains often become blocked, which limits their usefulness [2], and that drains could increase the risk of infection, the length of hospital stay, treatment costs, and discomfort for the patient. Moreover, the routine use of drains should not be a substitute for meticulous surgical technique with careful hemostasis [1,[8][9][10].
The objective of the present study is to identify clinical and surgical risk factors for hematoma or seroma and to evaluate the usefulness of routine use of drains following thyroid surgery.
Materials And Methods
A retrospective multicentric study aimed at identifying clinical and surgical risk factors for hematoma or seroma and at evaluating the usefulness of routine use of cervical drains was undertaken. Patients whose required information was missing were excluded from the study. No minimal age was used to exclude patients. Before data collection, all hospitals included in this study were granted approval from their ethical boards. A total of 945 patients undergoing thyroid surgery (hemithyroidectomy, total thyroidectomy, and totalization of thyroidectomy), with or without lymph node dissection, between January 2018 and December 2020 were included in the study. Clinical and outcome data were collected, including the age and gender of the patients, anticoagulation or anti-aggregating therapy, histologic diagnoses, type of surgery, postoperative drains, thyroid weight, length of hospital stay, postoperative complications, and the need for reintervention.
Inclusion and exclusion criteria
The complications evaluated in this study were the presence of hematoma or seroma. The diagnoses considered were documented on the final pathology report of the surgical specimen, and accordingly, cases were divided into toxic benign pathology (toxic nodule, toxic multinodular goiter, and Graves' disease), non-toxic benign pathology, and malignant pathology. The cases were cataloged in two groups according to the presence of complications, which were compared according to the parameters evaluated.
Data analysis
The data were analyzed with the STATA 15.1 statistical package (StataCorp LLC, Texas, USA). Continuous variables are expressed as the median value. Categorical variables are presented as percentages. Continuous variables were compared with the Mann-Whitney U test and categorical variables with Fisher's exact test. A multivariable logistic regression model was created to identify factors associated with the occurrence of complications in patients undergoing thyroid surgery. The results were expressed as an odds ratio (OR) value and its 95% confidence interval (CI). The significance level was set at 0.05.
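For illustration, the sketch below reproduces the same sequence of tests in Python (the original analysis was performed in STATA 15.1). The data frame, column names, and coefficients are synthetic placeholders, not the study data:

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu, fisher_exact
import statsmodels.api as sm

# Synthetic stand-in for the study dataset: one row per patient.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "age": rng.integers(20, 90, n),
    "antithrombotic": rng.binomial(1, 0.10, n),
    "toxic_benign": rng.binomial(1, 0.10, n),
})
# Synthetic outcome with a higher complication risk under antithrombotic therapy.
logit = -3.5 + 1.2 * df.antithrombotic + 0.8 * df.toxic_benign + 0.01 * (df.age - 60)
df["complication"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Continuous variable compared between groups with the Mann-Whitney U test.
print(mannwhitneyu(df.loc[df.complication == 0, "age"],
                   df.loc[df.complication == 1, "age"]))

# Categorical variable compared with Fisher's exact test on the 2x2 table.
print(fisher_exact(pd.crosstab(df.antithrombotic, df.complication).to_numpy()))

# Multivariable logistic regression; exponentiated coefficients are the ORs.
X = sm.add_constant(df[["age", "antithrombotic", "toxic_benign"]])
fit = sm.Logit(df.complication, X).fit(disp=0)
print(np.exp(fit.params))        # odds ratios
print(np.exp(fit.conf_int()))    # 95% confidence intervals
```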
Results
A total of 945 patients undergoing thyroid surgery in the seven hospitals that conducted the study were included: 303 patients in 2018, 298 in 2019, and 344 in 2020.
Cases were cataloged in two groups according to the presence of complications related to thyroid surgery. The distribution of cases between the two groups considered according to the several parameters evaluated in the study is summarized in Table 1. Significant differences between the groups were observed according to the age of the patients (p=0.002), the status of antithrombotic therapy (p=0.001), and the need for reintervention (p<0.001). The age of the patients with complications (68.0 years old) was higher compared to that of those without complications (57.0 years old) (p=0.002). The percentage of patients who were under antithrombotic therapy was higher in cases with complications (25.9%, n=7) compared to that of cases without complications (10.1%, n=93) (p=0.001). The percentage of patients submitted to reintervention was higher in the complications group (33.3%, n=9) compared to that observed in cases without complications (1.6%, n=15) (p<0.001). No significant differences were observed in the comparison between the two groups according to the other parameters evaluated. Although there was no statistically significant difference, the relative frequency of cases with complications whose diagnosis was benign toxic pathology was higher when compared to the proportion of patients with the same diagnosis and without complications (22.2%, n=6 vs. 9.3%, n=85) (p=0.083).
The results of the multivariate logistic regression to identify independent factors associated with the occurrence of complications are summarized in Table 2. As identified in Table 2, antithrombotic therapy had a significant association (OR=3.62; 95% CI: 1.14-11.4). The presence of toxic benign pathology documented in the surgical specimen was also identified as an independent factor in the occurrence of complications.
TABLE 2: Logistic regression analysis to identify the parameters associated with complications in patients undergoing thyroid surgery
The frequency of cervical drainage in the several hospitals ranged between 6.25% and 100%. In the global series, cases were also cataloged in two groups according to the use of cervical drains: with or without drains (35.3%, n=334 vs. 64.7%, n=611, respectively). Table 3 summarizes the distribution of cases between the groups considered based on the evaluated parameters. Significant differences were observed according to histologic diagnosis (p<0.001), the type of surgery performed (p<0.001), thyroid weight (p<0.001), the regimen of admission (p<0.001), and the need for cervical lymph node dissection (p=0.002). A trend toward significance was observed in the comparison between the groups according to the need for reintervention (p=0.053). No differences were observed in the distribution of cases according to the other evaluated parameters. The percentage of cases with the use of drains was higher in non-toxic benign pathology (75.1%, n=251) compared to that observed in cases without drains (65.3%, n=399) (p<0.001). In the series, the percentage of cases with drains in total thyroidectomy was higher than that of cases without drains (64.1%, n=214 vs. 48.3%, n=295, respectively). In cases of hemithyroidectomy, the reverse was observed (34.4%, n=115 vs. 47.3%, n=289, respectively). In cases where cervical drains were used, the median weight of thyroid specimens was higher (37 g vs. 26 g, respectively). There was a significant difference in the length of hospital stays: 56.6% (n=346) of patients without a drain were hospitalized for 0-1 days, while 94.3% (n=315) of patients with a drain were hospitalized for two days or more. The percentage of cases in which a drain was used in cervical lymphadenectomy (3.9%, n=13) was higher than that in cases without lymphadenectomy (1.0%, n=6). Regarding the need for reintervention, a trend toward a significant difference was observed between patients who had a drain placed and those who did not (1.2%, n=4 vs. 3.3%, n=20; p=0.054).
Discussion
Patients undergoing thyroid surgery may experience complications ranging from minor to life-threatening, namely hematoma or seroma leading to neck compression and respiratory failure. The use of cervical drains has been a matter of debate in the literature, and some controversy remains. Some authors advise the routine use of cervical drains to prevent the occurrence of such complications [11], but others claim that drains are not needed or may even be deleterious for the patient [9,12].

This study aimed to identify the utility of cervical drains in the population submitted to thyroid surgery in seven Portuguese hospitals. The study is multicentric, which may bias the interpretation of the results due to some inevitable heterogeneity of the procedures, surgical team experience, indications or options for the use of drains (ranging between 6.25% and 100%), and the identification of complications.
First, the study found a low percentage of the complications theoretically preventable by cervical drains, which were, by definition, hematomas or seromas. Although the frequency of complications in the study is low (2.9%, n=27), it is slightly higher than the data described in other literature reports (incidence of 0.1% to 1.1%) [13].
In the series, complications were mainly associated with the age of patients and the use of antithrombotic therapies. Indeed, in this series, despite the use of SPA (Portuguese Society of Anesthesiology) guidelines for the management of these patients, antithrombotic therapy was an independent factor in the occurrence of hematomas or seromas following thyroid surgery. These results are in agreement with the literature, where patients on anticoagulation are at a greater risk for bleeding [14].
The results of this study point to a higher level of complications in patients with benign toxic pathology as opposed to those observed in cases of non-toxic benign pathology and even malignancies. Indeed, enhanced vascularization verified in such cases can explain the complications observed and the increased use of drains in these cases. The literature in these cases is ambiguous. While some advocate that hyperthyroidism is a risk factor for hematoma [14][15][16], others failed to identify the same relationship [17,18].
The presence of a cervical drain did not seem to be associated with the occurrence of complications or with the need for reintervention. As observed, no significant association between the presence of cervical drains and the occurrence of complications was found, which agrees with other reports in the literature [5,[19][20][21][22][23]. The results of this study also showed that of the patients who had complications, 12 (44.4%) had had a drain placed. Indeed, no statistically significant differences were observed between patients who had a drain placed and those who did not (1.2%, n=4 vs. 3.3%, n=20; p=0.054) regarding the need for reintervention. These results agree with those of others [24].
The study identified a longer length of hospital stay in those who had drains, with 56.6% (n=346) of patients without a drain being hospitalized for 0 to 1 day, while 94.3% (n=315) of patients with a drain were hospitalized for two days or more (p<0.001). These results agree with those observed in other studies [1,4] and can be partially explained by a more liberal use of drains in thyroid operations performed under an admission regimen, in anticipation of more complications from the surgical procedures. As the use of drains did not decrease the need for reintervention and was even associated with a longer hospital stay, the routine use of cervical drainage after thyroidectomy should not be advised. This conclusion, based on the present data, is consistent with prospective studies reported in the literature [4,5,12,[19][20][21][22][23]25,26]. The results of this study also identified a higher frequency of drain placement in more extensive thyroid surgery and in cases with cervical lymph node dissection; however, this option did not influence the need for reoperation compared to cases without drain placement [27]. Therefore, based on this and other studies, we should not recommend the routine use of cervical drains [4,27]. This study includes a significant number of patients who underwent surgery but has several limitations. It is a retrospective and multicentric study, which may be associated with a heterogeneity of procedures and diverse criteria for the use of cervical drains. Due to the low number of complications, it is possible that some associations were not found.
Conclusions
Summing up, the percentage of patients who develop complications in this type of surgery is low, which should be considered a limitation in the interpretation of the results. However, it is recommended to limit the routine use of drains to patients under antithrombotic therapy or those with a benign toxic pathology.
As the use of drains did not decrease the need for reintervention and was even associated with a longer hospital stay, the routine use of cervical drainage after thyroidectomy, from our point of view, should not be advised.
We hope this article can be the basis for a multicentric prospective study centered on patients with complications, with unified surgical protocols to evaluate this matter.
|
2024-02-20T16:03:04.952Z
|
2024-02-01T00:00:00.000
|
{
"year": 2024,
"sha1": "ce5b04955fb93cbfd061e89d545a766021e8002c",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/198103/20240218-24167-ho6gmh.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "95d265a8c2bfc09394aae2c24e2e1701a31bfbbd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
251018606
|
pes2o/s2orc
|
v3-fos-license
|
Development of a novel, windowless, amorphous selenium based photodetector for use in liquid noble detectors
Detection of the vacuum ultraviolet (VUV) scintillation light produced by liquid noble elements is a central challenge in order to fully exploit the available timing, topological, and calorimetric information in detectors leveraging these media. In this paper, we characterize a novel, windowless amorphous selenium based photodetector with direct sensitivity to VUV light. We present here the manufacturing and experimental setup used to operate this detector at low transport electric fields (2.7-5.2 V/$\mu$m) and across a wide range of temperatures (77 K-290 K). This work shows that this first proof-of-principle windowless amorphous selenium device is robust under cryogenic conditions, responsive to VUV light at cryogenic temperatures, and preserves argon purity. These findings motivate a continued exploration of amorphous selenium devices for simultaneous detection of scintillation light and ionization charge in noble element detectors.
Introduction
The application and ubiquity of noble liquid detectors in the fields of high energy physics [1][2][3][4][5][6][7][8], medical imaging [9][10][11][12], and rare event searches [13][14][15][16][17][18][19] is due to the many attractive properties these media provide. Charged particles traversing noble liquids deposit energy in the form of scintillation light and ionization charge. Depending on the application, an experiment may choose to apply an external electric field and collect ionization electrons. Given the anti-correlation between the collected ionization charge and the light yield, this comes with a loss in the overall detected scintillation light.
The collection of the scintillation light is a central tool in noble element detectors as it provides a number of useful experimental handles. Firstly, the scintillation light provides a prompt signal (commonly referred to as $t_0$) which allows an accurate time to be recorded for the activity observed. This plays a central role in Time Projection Chambers (TPCs) [20] which collect both charge and light, as it allows the inference of the position of the event along the drift dimension from the difference between $t_0$ and the time the charge signal is registered. Secondly, the combination of the amount of light and charge collected provides a robust estimate of the energy deposited in the noble element detector [21]. Thirdly, techniques in pulse shape discrimination based on the scintillation light make it possible to distinguish recoils due to electrons from recoils due to nuclear interactions [22,23]. This provides a powerful tool in rare event searches (such as dark matter applications) to separate signal from background.
Two of the most common liquid nobles, argon (Ar) and xenon (Xe), have very good scintillation light yields with excellent optical transmission properties. Thus, even detectors as large as several cubic meters preserve a high flux of photons observed at the photosensor. One key challenge is that both elements scintillate in the vacuum ultraviolet (VUV). The typical wavelengths are 128 nm for liquid argon (LAr) and 178 nm for liquid xenon (LXe). Common photosensors used to detect low levels of light, e.g. Multi-Pixel Photon Counters (MPPCs), silicon photomultipliers (SiPMs), and photomultiplier tubes (PMTs), are largely insensitive to this wavelength of light due to their construction and fabrication. More recently, devices custom made to be more sensitive to VUV wavelengths have started to emerge [24][25][26], albeit with relatively low efficiencies, reaching at most 15-20%. A standard solution to the mismatch between photosensors' readout and VUV scintillation light is to deploy a wavelength shifting (WLS) material that absorbs the VUV light and re-emits it via fluorescence at a much longer wavelength (typically in the 'blue' region). The past years have seen substantial R&D in the field of wavelength shifters and their application to liquid noble detectors [27]. Two of the most common WLS materials include 1,1,4,4-tetraphenyl-1,3-butadiene (TPB) and polyethylene naphthalate (PEN). Despite their ubiquity in application, these WLS materials have a number of drawbacks including their deterioration due to environmental effects [28,29], a complicated delayed emission time [30,31], and a relatively low efficiency for the observation of the re-emitted photon [32].
The difficulties associated with the detection of VUV photons have inspired research into alternative materials which could potentially be directly sensitive to VUV light. In this paper, we explore an amorphous selenium (aSe) based detector. Ample literature on aSe based direct conversion active matrix flat panel imagers (AMFPI) [33] and digital breast tomosynthesis [34] exists in the field of X-ray imaging. The recently developed ability to perform single-photon-counting (SPC) X-ray experiments using CMOS technology [35] makes this material an attractive candidate to explore for different applications. The optical absorption properties of aSe [36] suggest that the material has excellent efficiency for converting VUV photons into electron/hole pairs at shallow depths (nm), thus overcoming potential depth-dependent effects observed for X-rays [37]. Moreover, the transport properties of aSe suggest that with sufficiently small distances between the electrodes, the overall mobilities and lifetimes of the charge carriers should be sufficiently high to be viable for low photon flux applications. This paper explores the viability of a windowless aSe based device for collecting UV light in liquid noble element detectors. As such, the response of the device is characterized as a function of temperature in the range relevant for noble element detectors using UV light. The initial exploration is done at relatively low applied electric fields (≤ 5 V/µm), where SPC is not expected because of the limited charge yield. Future work is planned to explore significantly higher electric fields where the holes in aSe undergo impact ionization and thus liberate additional electron-hole pairs. This process has been shown to cause amplification of the initial signal and can result in avalanche gain [38]. The first commercial devices utilizing impact ionization in aSe, referred to as high-gain avalanche rushing photoconductor (HARP) tubes [39], were initially commercialized in the late 1980s for the broadcast industry. More recently, novel designs of the electrodes have shown that avalanche multiplication with sensitivity down to SPC levels is possible [40]. Thus, the thrust of this work is to perform a characterization of a simple, but novel, aSe based photon detector deployed in a cryogenic environment to understand the feasibility and limitations of this device. Section 2 describes the aSe device and the testing apparatus used to characterize the board's behavior as a function of temperature. Section 3 describes the observations and behavior of the aSe based detector. Finally, Section 4 offers some closing thoughts and conclusions.
Experimental Setup
In this section, we describe the aSe device under test. Section 2.1 describes the device's fabrication and characterization. Section 2.2 presents the experimental apparatus used to test the aSe boards at cryogenic temperatures and under UV light exposure. Finally, Section 2.3 describes the custom readout electronics and high voltage supply needed to collect both holes and electrons at various electric fields.
Amorphous Selenium Boards
A typical aSe device, as has been used for x-ray and gamma-ray detection [37], uses a geometric layout which can be described as a "vertical geometry". This geometry has the amorphous selenium sandwiched between two horizontal electrodes, as shown schematically on the left of Figure 1. The electrodes provide an electric field needed to achieve transport of the charge carriers created when a photon interacts with the selenium. The vertical geometry can be used in x-ray and gamma-ray applications because the electrodes are largely transparent to these photons, and it thus provides a simplified fabrication process. However, for use with UV light this configuration is unfavorable since even a thin amount of material typically used as an upper electrode (e.g. ITO, gold, copper, etc.) will result in a large fraction of all the UV light being absorbed. To circumvent this problem and allow for feasibility testing of aSe, we consider a "horizontal geometry" such as that shown on the right of Figure 1.
This geometry consists of a bare printed circuit board (PCB) constructed with interdigitated electrodes to provide the electric field needed to achieve transport of the charge carriers. This configuration thus creates a "windowless" device where the selenium is thermally evaporated directly onto the board. The selenium is thus exposed directly to the UV source. This device, as will be shown in this paper, represents a low-cost, simple-to-manufacture, and scalable solution for a large-area VUV-sensitive photosensor. The PCB manufacturing process for areas as large as 2000 cm² is commercially ready and low cost [41], the process of uniform and repeatable thermal evaporation over these areas is well demonstrated [42], and the ability to scale together large-area tiles into one uniform collection plane is commonly done in experiments [43,44]. Moreover, the ability of the device to respond to VUV light using this windowless approach simplifies its characterization and testing. A study of the electric field present within the amorphous selenium, given the interdigitated electrodes used in the device tested here, is presented in Appendix B. This study shows that for the device tested here, the electric field is uniform both across the electrodes as well as throughout the selenium and follows the geometric properties one would intuit.
Figure 1. Schematic of the various geometries in which an aSe device may be used. Left: "Vertical Geometry", which utilizes an electrode on the top-most layer which is transparent to the radiation to be detected. Right: "Horizontal Geometry", which uses interdigitated electrodes to achieve a horizontal electric field in the aSe and resolves the problem of most electrodes being non-transparent to VUV light.
The horizontal geometry does present some design challenges. The most readily available commercial spacing between PCB-produced interdigitated electrodes is limited by the PCB manufacturing process. This results in a limit to the electric field (in units of volts per micrometer) which can be applied in such a configuration. For the experiment presented here, a small commercial board of 20 mm × 22.5 mm was produced with the smallest electrode spacing of ∼127 µm from a low-cost commercial vendor [41]. The board is shown in Figure 2 before the addition of selenium. In order to obtain avalanche gain in the aSe, it is necessary to apply higher fields to the prototypes than in the case presented here. Follow-up work on these results will utilize a high-density PCB manufacturing process to explore trace separations down to 25 µm. However, as the applied field increases and the sensitivity to lower incident photon flux improves, the corresponding increase in dark current will need to be addressed via the application of electron/hole blocking layers. The details of which materials provide the best performance have been extensively studied for x-ray based aSe devices [37], and will need to be explored in this application. No charge blocking layers were applied for the device under consideration.
For the boards used in this experiment, the characteristic spacing was confirmed by obtaining high-resolution images using a Nikon Eclipse ME600 microscope paired with a Nikon DXM 1200 digital camera and the image editing software Paint.NET. The typical trace widths and spaces were found to be 105.04 ± 1.94 µm and 146.57 ± 1.94 µm, respectively. These values are used when evaluating the applied electric field in the subsequent measurements described in Section 3. The trace heights are set by the manufacturing process and are 35 ± 5 µm.
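As a quick cross-check (under the simplifying assumption that the field in the gap is well approximated by the bias voltage divided by the measured trace spacing), the lowest transport field quoted in the abstract follows directly from these numbers:

```python
# Back-of-the-envelope field estimate, assuming a uniform field between
# neighboring traces: E = V / d. This is only an approximation of the full
# field map discussed in Appendix B of the paper.
gap_um = 146.57        # measured trace spacing (micrometers)
bias_v = 400.0         # bias applied for the ghosting runs (Section 3.1)

print(f"E = {bias_v / gap_um:.2f} V/um")   # ~2.73 V/um, the lowest field quoted
```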
Thermal evaporation deposition of selenium onto the boards was performed at Oak Ridge National Laboratory. An NRC/Varian 3117 E-Beam Vacuum Evaporator was retrofitted with a molybdenum boat to hold 722 mg of selenium pellets. The selenium pellets are purchased from Sigma-Aldrich [45] and have a particle size < 5 mm with a purity of selenium rated for ≥ 99.999%. The PCBs were placed in a 3D printed mask 10 cm above the boat and the selenium was heated under high vacuum. The selenium coating was actively measured using a quartz monitor crystal with an Inficon XTM/2 deposition monitor to ±1 nm precision to produce a 1.2 µm aSe layer.
Cryogenic Temperature Test Stand
Testing the viability of the devices for noble element detectors requires bringing the boards to liquid noble temperatures (∼80 K). We achieve this with the cryogenic test stand shown in Figure 3. The test stand is housed in a standard 8 in Conflat Flange (CF) cross (Lesker C-0800). The inner volume is evacuated via a turbo-molecular vacuum pump (Pfeiffer HiCube 80). A custom heat exchanger is fabricated from two 0.5 inch 304 stainless steel tubes with 0.125 inch walls which penetrate the top of the flange to allow the sample under test to be cooled. A block of 304 stainless steel allows the cryogenic fluid to circulate between the two tubes. The heat exchanger is cooled via a low-pressure liquid nitrogen dewar, where the liquid is allowed to flow through the heat exchanger. In order to maintain flexibility with the setup, the sample holder is independent of the heat exchanger and mounts to the bottom of the heat exchanger. The sample holder used during the tests described here is machined from 101 copper and is bolted to the heat exchanger with a sheet of indium (McMaster 8898N18) placed in between the heat exchanger and sample holder to aid in the thermal transfer.
The test samples are mounted to the copper sample holder via a custom carrier PCB which is outfitted with a standard M.2 connector, shown on the right of Figure 3. This connector is used for ease of testing various samples without interfering with or moving the electronics. The sample PCB plugs into the M.2 connector on the carrier PCB, which manages the connections for the readout electronics (shown schematically in Figure 3 in green).
Two PT-100 thin film resistive thermal devices (RTDs) (P0K1.232.6W.Y.010 [46]) are mounted to monitor the temperature of the heat exchanger and the device under test. The rate of cooldown is determined by the flow of cryogenic fluid through the heat exchanger. For the tests performed here, a manual valve was adjusted to maintain a cool-down rate between 1.1 and 1.7 Kelvin/minute using liquid nitrogen. Figure 4 shows the typical cool-down curves over eight data runs compared to an uncontrolled cool-down where liquid nitrogen was allowed to flow at its maximum rate (resulting in achieving < 80 K in under 30 mins). The samples can be kept at ∼80 K for extended periods of time by continuously flowing liquid nitrogen at a slow, but fixed rate.
The typical warm-up time varies slightly depending on the conditions in the lab, and is largely driven by the ambient temperature. Samples regularly reach room temperature within ∼10 hours after the nitrogen is shut off, with the total data-taking period per experiment lasting ∼40 hours.
Figure 4. The data recorded from the PT-100 thin film resistive thermal devices during the cooldown of the experiment for the eight main data-taking campaigns described below, as well as one "uncontrolled" cooldown where the device was allowed to cool as fast as possible. The rate of temperature change was targeted to be between 1-2 Kelvin/minute during experimental operations. The relevant temperatures for various liquid cryogens (xenon, argon, and nitrogen) are noted on the plot for reference.
Data Acquisition and Readout Electronics
The data acquisition system and readout electronics are shown schematically in Figure 5. The system is driven by a Raspberry Pi 3 Model B [47] running a Python script which controls two Arduinos [48]. Two (P0K1.232.3K.B.010.M.U) RTD platinum sensors inside the cryogenic test stand are read out using the Arduino UNO coupled with an ARD-LTC2499 24-bit ADC data acquisition shield, allowing the measured resistance to be converted to a temperature with milli-Kelvin precision. The temperatures are recorded to an external solid state drive via a USB port on the Raspberry Pi. The RTDs have a dedicated Rigol DP832 power supply set at 2.048 V to match the ADC threshold. The Arduino Nano serves as a trigger for a 5 Watt Hamamatsu L11316-11 xenon flashlamp [49] and a LeCroy 6050 WaveRunner oscilloscope by providing a 5 V signal with a rate configurable by the Raspberry Pi. The flashlamp has a dedicated PS-305D power supply set to 24 VDC. The oscilloscope triggers on the input from the flashlamp signal. Data files from the oscilloscope are stored on an external solid state drive. The temperature data and recorded waveforms are merged offline via the file number and the timestamp. In order to provide a high voltage (HV) bias between ±750 volts to the aSe board and read out the subsequent signals generated from its exposure to UV light, a custom HV/readout setup was implemented. The HV is generated via an EMCO DC-DC converter which is powered by a 12 V lead acid battery. The HV is filtered and applied to the sample. The charge is read out off the HV line via a decoupling capacitor and the signal is amplified with an Amptek A250 charge sensitive amplifier with an InterFET IFN152 as the input JFET [50]. The output of the A250 is then sent to the oscilloscope for data collection. All electronics are housed in a shielded enclosure to further reduce noise.
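The resistance-to-temperature conversion performed on the Arduino side can be sketched as follows. The Callendar-Van Dusen coefficients below are the standard IEC 60751 values for a PT-100; this quadratic form is strictly valid only above 0 °C, so readings near 80 K would in practice rely on the sensor's calibration table rather than this expression. The helper function is an illustration, not the firmware actually used:

```python
# Minimal sketch of converting a PT-100 resistance reading to temperature.
R0 = 100.0          # resistance at 0 degC (ohm)
A = 3.9083e-3       # IEC 60751 Callendar-Van Dusen coefficient
B = -5.775e-7       # IEC 60751 Callendar-Van Dusen coefficient

def pt100_temperature_c(r_ohm: float) -> float:
    """Invert R = R0*(1 + A*T + B*T^2) for T (degC); valid for T >= 0 degC."""
    disc = A * A - 4.0 * B * (1.0 - r_ohm / R0)
    return (-A + disc ** 0.5) / (2.0 * B)      # physical root of the quadratic

print(pt100_temperature_c(109.73))             # ~25 degC for a room-temperature reading
```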
The high voltage supply was tested for stability over a 12-hour period and found to be stable to less than 0.1% of the target value. The batteries on the readout box were regularly recharged to ensure no unexpected variation in the applied voltage. The xenon flashlamp output power was also tested for stability and repeatability by directly coupling the fiber optic to a THORLABS DET10A2 photodiode [51] and found to have a 'shot-to-shot' variation of ∼3% (consistent with the lamp's design document) and a consistent light output over a period of 12+ hours to within 1%. Table 1 summarizes the nine data-taking campaigns. The first campaign quantifies the effect of bulk trapping at various temperatures, referred to as ghosting. This phenomenon and the data used to understand its impact are described in Section 3.1. Six of the data campaigns were designed to test the response of the aSe device as a function of temperature at different applied electric fields. These results are discussed in Section 3.3 and detailed numerical results are summarized in Appendix A. Two data campaigns were taken to verify the repeatability of the results as a function of temperature and are described in Appendix C. Variability in the results found during the repeat measurements is treated as a systematic uncertainty on the results. The results in Section 3.4.1 report the robustness tests against cryo-cycling.
Exposure Dependent Signal Reduction in aSe
The phenomenon of a change in the sensitivity of aSe based x-ray imaging detectors as a result of previous exposure to radiation is referred to as 'ghosting' [52]. This phenomenon, which typically results in a decrease in sensitivity with subsequent exposures, has been determined to have as its dominant mechanism the bulk trapping of electrons, which subsequently recombine with x-ray generated holes [53]. Holes may also become trapped in the aSe, affecting the response in either charge collection polarity. The typical lifetime, $\tau$, for a charge carrier to be released from a trap has the form [54]

$$\tau = \nu^{-1} \exp\left(\frac{E_t}{k_B T}\right), \qquad (3.1)$$

where $E_t$ is the energy depth of the trap (estimated to be 0.9 eV above the valence band for holes and 1.2 eV below the conduction band edge for electrons [55]), $k_B$ is Boltzmann's constant, $T$ is the absolute temperature, and $\nu$ is the phonon (attempt-to-escape) frequency (taken as $10^{11}$ s$^{-1}$ [53]). For room temperature operation, $\tau$ has been found to be on the order of minutes for holes and hours for electrons.
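A short numerical sketch of Eq. (3.1) is given below; the trap depth and phonon frequency are the nominal values quoted above, and the absolute lifetimes should be read only as an illustration of how steeply the release time grows as the sample is cooled:

```python
import math

K_B = 8.617e-5        # Boltzmann constant in eV/K
NU = 1e11             # attempt-to-escape frequency used in the text (1/s)

def trap_release_time(depth_ev: float, temp_k: float) -> float:
    """Detrapping time tau = (1/nu) * exp(E_t / (k_B * T)) from Eq. (3.1)."""
    return math.exp(depth_ev / (K_B * temp_k)) / NU

# Illustrative evaluation only: the absolute values are extremely sensitive to
# the assumed trap depth and nu, but the steep growth of tau as T drops is the
# point relevant to the cryogenic ghosting behaviour discussed below.
for temp in (290.0, 80.0):
    print(temp, trap_release_time(0.9, temp))
```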
The overall impact that ghosting has on the performance of the aSe based detector is found to depend on both the applied electric field (with the effect of ghosting reduced at increased field) and the time interval between exposures (with the effect of ghosting decreasing with longer time between exposures). This phenomenon has been observed in aSe detectors when exposed to x-rays and when the detector is in a vertical geometry [56].
We observe a reduction of the signal peak amplitude when exposing our windowless horizontal geometry detector to repeated pulses of light from the xenon lamp. This phenomenon suggests the effect is likely due to ghosting. The top of Figure 6 shows an example of the pulse amplitude recorded when the board is exposed to the xenon flashlamp at a pulse rate of 0.1 Hz over a period of 12 hours. After a period of ∼6 hours at room temperature, the system reaches an equilibrium state where the pulse amplitude no longer noticeably changes. We attribute this to reaching a balance between the clearing of electron/hole traps at the given field and the trapping of new electron/hole pairs created by the VUV light.
Given the application for the device under test, we explore this behavior in the cold. Two dedicated runs were taken, the first at room temperature (∼270 K) and the second at cryogenic temperature (∼80 K). For both these runs the board was allowed to sit at the designated temperature for a period of hours before being exposed to the xenon flash lamp. The pulse amplitude, defined in Section 3.2, was then recorded over a six-hour period. This data is shown in the bottom of Figure 6. As anticipated, the time it takes to reach equilibrium at room temperature is longer than in the cold. This is because the lifetime of the traps depends on the temperature of the sample and becomes larger at lower temperatures, as shown in equation 3.1. The relative equilibrium pulse amplitude is different between the "warm" and "cold" data, but the stability of this equilibrium is similar. This is consistent with the model that thermal motion is what leads to the clearing of traps, and thus the trapping becomes more pronounced at cryogenic temperatures.
To mitigate the impact of the ghosting effect across the measurements described below, we expose the system to UV light at a fixed frequency of 2 Hz for ∼6 hours before beginning the cool-down process. Moreover, we make the cool-down process as slow as possible, typically ranging between two and three hours, to allow the system to transition stably. Finally, we remain at the lowest temperature we can achieve (∼80 K) for a period of ∼7 hours before allowing the system to warm up. We continue to take data until the system returns to the previous peak-amplitude equilibrium state observed at room temperature before the cool-down. The various stages described above are shown in Figure 7.
With the mitigation strategy in place to account for the effects of ghosting, Section 3.2 describes the data quality cleanup and analysis procedures used.
Data Quality and Analysis Procedure
Each run recorded with the setup described in Section 2.3 produces two files. The first file is the waveform captured from the oscilloscope and the second is the temperature data recorded from the RTDs and saved on the Raspberry Pi. These two files are indexed such that they are matched in time, and thus the data files are combined to provide a single file with both the recorded waveform and temperature.
Data quality selection was performed for all campaigns following the procedure described in this section. Applying positive polarity voltage resulted in "positive waves", while applying negative polarity voltage resulted in "negative waves". Figure 8 shows typical example waveforms in both cases. The illustrations are annotated to highlight the important features used to calculate the peak amplitude and area.
Figure 6. Top: The peak amplitude, defined in Section 3.2, at room temperature in the cryogenic temperature stand under vacuum and biased to +400 volts over a 12-hour data-taking period when exposed to the xenon flashlamp every 10.8 seconds. The response of the aSe board can be seen to degrade over time until eventually reaching an equilibrium state after ∼6 hours. Bottom: The recorded peak amplitude over a 6-hour period when the board is held at ambient temperature (red, ∼290 K) and when held at cryogenic temperature (blue, ∼80 K) in the cryogenic temperature stand under vacuum, biased to +400 volts, and exposed to the xenon flash lamp every 10.8 seconds. The pulse amplitude drops much more quickly to the equilibrium state at the cryogenic temperature, which can be interpreted as the longer lifetime of the charge traps associated with the ghosting effect.
To define the start time of each wave ($t_0$), we account for the known delay between the trigger pulse sent to the flashlamp and the actual formation of a light pulse. According to the flashlamp data sheet [49], the delay relative to the input pulse is ∼4.0-4.5 µs. The data show a characteristic "pick-up" due to inductive coupling between the flashlamp and the signal line for the aSe board. This peak (which is labeled 'Amigo', as it provides a friendly reference point) appears reliably at 4.36 µs after the trigger signal, thus defining our $t_0$.
Once the start of the waveform is defined, the waves are fitted using the LOcally WEighted Scatterplot Smoothing (LOWESS) [57,58] statistical package. The fit ranges between $t_0$ and $t = 600$ µs, providing a smoothed function of the waveform. The peak amplitude of the wave is found by sampling between $100 < t < 600$ µs and locating the minimum for negative waves or the maximum for positive waves. The area under the fit (integrated voltage) was calculated using the trapezoidal rule. Bounds of the integral were set by fixing the lower limit to 237 ns after the start of the fit and the upper limit to the intersection of the fit with the abscissa following the peak amplitude. The lower limit was chosen to ensure integration occurs after any inductive noise seen in "the Amigo" has died out, and it accounts for a documented delay and jitter time in the xenon flash lamp.
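A minimal sketch of this smoothing, peak-finding, and integration procedure is given below (in Python, using the statsmodels LOWESS implementation). The smoothing span and the exact handling of the baseline crossing are illustrative choices, not the authors' exact analysis configuration:

```python
import numpy as np
import statsmodels.api as sm

def analyze_waveform(t_us: np.ndarray, v_mv: np.ndarray, positive: bool):
    """Return (peak amplitude, integrated area) for one waveform.

    t_us, v_mv: waveform samples (time in microseconds, voltage in mV),
    with t = 0 taken at t0 and the record extending to ~1100 us.
    """
    # Locally weighted smoothing (LOWESS) of the raw trace.
    smooth = sm.nonparametric.lowess(v_mv, t_us, frac=0.05, return_sorted=False)

    sign = 1.0 if positive else -1.0
    window = (t_us > 100.0) & (t_us < 600.0)                 # peak search window
    peak_idx = np.flatnonzero(window)[np.argmax(sign * smooth[window])]
    peak_amplitude = smooth[peak_idx]

    # Integrate from 237 ns after the fit start to the first baseline crossing
    # that follows the peak, using the trapezoidal rule.
    start = np.searchsorted(t_us, 0.237)
    after = np.flatnonzero(sign * smooth[peak_idx:] <= 0.0)
    end = peak_idx + (after[0] if after.size else len(smooth) - 1 - peak_idx)
    area = np.trapz(smooth[start:end + 1], t_us[start:end + 1])
    return peak_amplitude, area
```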
Waveforms of particularly small negative amplitudes fail to return to baseline, and thus no area can be calculated. An alternate baseline correction is attempted in these cases by taking the baseline mean sufficiently far from where an intersection would occur ($700 < t < 1100$ µs) and correcting that baseline. When the alternate baseline correction failed, the wave was removed from the data set. When the correction succeeded, the peak amplitude was recalculated and the area found. Additionally, waves that produced integrated areas with an incorrect sign were removed from the data set. An illustration of the methods described above is given in Figure 8.
Values for peak amplitude and integrated area are accumulated and averaged, and the standard deviation of the set is calculated. If an individual waveform is found to have a peak amplitude or integrated area more than one standard deviation from the mean, this waveform is removed, as analysis shows these waveforms are typically saturated with external noise and thus should not be considered. A new list index is then created keeping only waveforms which pass this filter.
With the data cleanup completed, we extract the relevant physics from the remaining waveforms.
Figure 8. Example waveforms when collecting holes at ambient temperature (top), electrons at ambient temperature (middle), and electrons at cryogenic temperature (bottom). The figure highlights the methods described above: locating the start of the waveform via the characteristic inductive noise pickup (labeled "Amigo") from the start of the flashlamp; the effect of the smoothing algorithm, shown as the solid red line; the end of the pulse, identified as the "intersection"; the peak amplitude; and the integrated area. The bottom plot also illustrates the region of the waveform used when the alternative baseline correction is needed.
Characterization across temperatures
To characterize the response of the aSe device, the peak amplitude (mV) and integrated area of the pulse (mV·µs) are calculated for different temperature ranges. Within a given temperature range, 20 independent waveforms are averaged. The same procedure described above is used to calculate the peak amplitude, area, and the standard deviation. The results are summarized in Tables 2, 3, and 4 in the appendix. A few general trends are observed from this data:
1. While the magnitude of the peak amplitude is noticeably reduced at the lowest temperatures, it is definitively non-zero and has a pulse shape consistent with a response to the signal from the flashlamp.
2. The magnitude of the peak amplitude scales approximately with the size of the applied field, as would be expected. As an example, the ratio of the fields 3.62 V/µm / 2.73 V/µm = 1.32, and the ratio of the peak amplitudes at 265 K-285 K for those fields is 1.96, while between 75 K-85 K the ratio of the peak amplitudes is 1.42. A similar trend can be seen for the fields 5.16 V/µm / 3.62 V/µm = 1.43, where the ratio of the peak amplitudes at 265 K-285 K is 1.42 and between 75 K-85 K is 1.1. The data are summarized in Tables 2, 3, and 4.
3. The peak amplitude at the lowest temperatures is consistently higher when collecting electrons rather than holes. This trend holds true within the uncertainties of the measurement as the samples were warmed up.
It is worth noting that as the sample warms up, the effects due to ghosting dominate near 280-290 K. As the exposure to light in warm conditions continues, the samples return to an equilibrium state similar to that at the start of data taking, prior to cooling. Figure 12 shows the integrated pulse area as a function of temperature and applied voltage. These results are also summarized in Tables 2, 3, and 4 in the appendix. A similar set of observations can be made for the pulse areas as was made for the peak amplitudes. This consistency gives confidence that the same physics driving the pulse amplitude is present in the overall shape of the pulse, thus confirming that the signal is due to the response of the aSe detector.
Usability in liquid noble element detectors
This section describes the additional tests performed to ensure usability of amorphous selenium based devices in liquid noble elements time projection chambers. First, tests described in Section 3.4.1 demonstrate robustness of the prototypes against cryogenic cycling, checking if the deposited selenium remains on the board even after the exposure to extreme temperatures. Second, tests described in Section 3.4.2 explore whether the introduction of aSe degrades argon purity.
Robustness against cryogenic cycling
In addition to the repeated thermal cycling of these boards during the data taking campaigns -after which no noticeable damage was observed -the aSe coated boards were imaged using Scanning Electron Microscope (SEM) before and after submersion in a liquid nitrogen (LN 2 ) bath. The boards were lowered into the LN 2 bath from room temperature over a period of 10 mins with care taken to ensure no condensation formed on the board during the submersion. The LN 2 bath was then allowed to evaporate over a period of > 8 hours and then imaged again afterwards. Examples of the before and after SEM images can be seen in Figure 13. No apparent damage or cracking of the aSe layer can be seen in the SEM images and none could be seen during visual inspection. Taken together with the repeatable behavior of the aSe setup after multiple thermocycles provides confidence that such a windowless aSe detector is robust at the cryogenic temperatures of a liquid noble environment.
Figure 13. SEM images taken at 30x and 100x of the same region of an aSe board before and after cryocycling in LN₂. A scan of the board showed no noticeable defects of the aSe layer following cryocycling.
Electronegative contaminants test
A key feature of noble elements for particle detectors is the dual response to the passage of charged particles in the active volume in the form of correlated ionization charge and scintillation light. When developing new concepts for detectors intending to use both mechanisms, it is important to test that the light detection system does not suppress charge collection. The presence of electronegative contaminants in the liquid element, such as oxygen and water, is particularly pernicious since these molecules quench the charge produced by the ionizing radiation. While noble element TPCs use hermetically sealed and leak-checked vessels to abate the leakage of external contaminants into the system, a sizable source of impurities can be introduced by the outgassing of internal surfaces. We tested whether the outgassing of the aSe boards reaches levels harmful to charge collection by performing a measurement of the electron lifetime and water content at the Fermilab Material Test Stand (MTS) [59] located at the Liquid Noble Test Facility (PAB). The MTS is a 250 l liquid argon cryostat which makes it possible to monitor the level of electronegative contaminants introduced by the material under examination by positioning the material in the argon gas vapor (ullage) or submerging it in the liquid. The MTS is equipped with an internal filtration system for oxygen and water contamination, which can be turned on or off as needed, and with a purity monitor to directly measure the effect of any material on the electron lifetime in the liquid argon.
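The quantity the purity monitor reports, the electron lifetime, enters the charge measurement through a simple exponential attenuation of the drifting electrons. The sketch below illustrates this relation with generic numbers; the values are not measurements from the MTS runs:

```python
import math

def surviving_charge_fraction(drift_time_us: float, lifetime_us: float) -> float:
    """Fraction of ionization electrons surviving attachment to electronegative
    impurities after drifting for drift_time_us, given electron lifetime tau:
    Q(t) = Q0 * exp(-t / tau)."""
    return math.exp(-drift_time_us / lifetime_us)

# Example: a 1 ms drift with a 3 ms electron lifetime keeps ~72% of the charge;
# the numbers are generic placeholders, not results from the MTS measurement.
print(surviving_charge_fraction(1000.0, 3000.0))
```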
We examined a PCB board coated with 35 μm of selenium over a surface area of 2×2 cm². The board was cleaned by wiping the uncoated surface with alcohol. After insertion of the board in the MTS, the sample chamber was purged with argon gas and evacuated several times to eliminate the contaminants acquired during insertion before introduction into the active volume. Three runs were performed in the MTS. The first had no sample in the vessel and served as a control to understand the behavior of the system when the filters are on and off. The second had the sample suspended in the ullage, where the effects of purity degradation due to outgassing should be most pronounced. Finally, the sample was lowered into liquid argon and left submerged.
To allow for comparison across runs, the same testing procedure is repeated for each run. First, the filtration system is activated, the electron lifetime is allowed to stabilize, and data is collected for several hours. Next, the filter system is switched off, and the decay of electron lifetime is observed. Figure 14 shows the results of the testing procedure for the three runs. The average lifetime during active filtration and the shape of the decay following the shut off of the filters are consistent across all runs. Thus, we conclude that the presence of the coated board does not suppress charge collection and does not negatively impact electron lifetime. Figure 14. Left: average electron lifetime during active filtration period as read directly from the purity monitor (raw reading). The data is shown for the three run conditions: no sample (blue), sample in the ullage (orange), sample in the liquid (green). Right: calibrated lifetime as a function of time during the period when the filtration system is inactive.
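As a rough illustration of how purity monitor readings translate into an electron lifetime, the sketch below implements the standard charge-attenuation relation Q_A = Q_C exp(-t_drift/τ) between the cathode and anode charges of a purity monitor. This is a minimal sketch for orientation only; the function name and numerical values are illustrative and are not taken from the MTS data.

```python
import numpy as np

def electron_lifetime(q_cathode, q_anode, drift_time_us):
    """Estimate the electron lifetime from the charge attenuation between the
    cathode and anode of a purity monitor: Q_A = Q_C * exp(-t_drift / tau)."""
    ratio = q_anode / q_cathode
    if ratio >= 1.0:
        # No measurable attenuation within the resolution of the monitor.
        return np.inf
    return -drift_time_us / np.log(ratio)

# Illustrative numbers: 20% attenuation over a 50 us drift gives tau ~ 224 us.
print(electron_lifetime(q_cathode=1.0, q_anode=0.8, drift_time_us=50.0))
```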
Conclusions
In this paper, we have presented the response of a novel, windowless amorphous selenium based photon detector to UV light as a function of applied electric field and temperature. The device is constructed from low-cost commercially available printed circuit boards and simple thermal evaporation of selenium onto the board.
This initial exploration shows that such a device: i) is robust under cryogenic conditions, with the selenium remaining undamaged under cryogenic cycling and demonstrating the same performance after repeated thermal cycles, ii) is responsive at cryogenic temperatures consistent with common liquid noble detectors (e.g. LAr and LXe), iii) behaves consistently with similar results shown for x-rays and gamma-rays (e.g. the observation of ghosting effects and the strength of the electron signal compared to the hole signal), and iv) preserves noble element purity.
Our finding that the device continues to respond at temperatures relevant to liquid noble detectors commonly used in high energy physics is particularly relevant for setting future R&D directions. While the flux of photons used in this experiment is quite high, we have provided a proof-of-principle demonstration that such a device could be sensitive to a lower photon flux in a cryogenic environment, provided that a higher electric field is applied. Exploration into the response of such an aSe based device is ongoing, with additional results expected to follow this work shortly.
A device based on the concept tested here opens the door to the possibility of making an integrated charge (Q) plus light (L) sensor, referred to as a "Q+L sensor". Such a sensor could simultaneously be sensitive to both the VUV photons produced in a liquid noble detector as well as the ionization charge created during the interaction of a charged particle with the noble element medium. A conceptual sketch of such a device using the Q-Pix [60] charge readout architecture is shown in Figure 15. Such a device, using amorphous selenium as the photoconductor, could have a large effective surface area, provide increased sensitivity to low energy physics and greater fidelity in energy reconstruction. Figure 15. Conceptual sketch of an integrated charge and light (Q+L) sensor utilizing a windowless photoconductor, such as the device tested in this work, to directly detect the VUV photons produced in a noble element TPC. The conceptual device depicted here would use the same readout architecture used for the detection of ionization charge to detect the charge from the photoconductor. In this schematic, this is shown as the Q-Pix charge readout solution described in Reference [60].
The quantification of the improvement such a device will offer, as well as the realization of such a device in an experimental setup, is the subject of ongoing work.
A Summary of pulse characterization across temperatures
Here we provide the corresponding data to the plots in Figures 9-12 associated with the pulse height and area as a function of temperature for the different applied fields. The errors quoted in these tables reflect the standard deviation from the averaging techniques described in Section 3.2.
B Modeling of the electric field
Using the online 3D computer-aided design (CAD) software Fusion360 [61], a model of the interdigitated electrode found in the 'cookie' board was created with the appropriate spacings and materials. Traces are modeled as 107 μm (W) × 1123 μm (L) × 35 μm (H) with the gaps evenly distributed at 147 μm. The electrodes are assumed to be silver. This model was then exported to an online electric field modeling tool, QuickField [62], which was used to model the behavior of the electric field and electric potential in the presence of the aSe coating. The aSe layer is assumed to be 1.2 μm thick and is present between the electrodes as well as on top of the electrodes. For simplicity, no aSe is placed on the vertical walls of the electrodes. A permittivity of 5.8 and 6.9 is assumed for the aSe and Ag respectively. An example output of the simulation for an applied potential of 400 Volts can be seen in Figure 16. Overall, the electric field (and the corresponding gradient in the electric potential) across the board and within the selenium is uniform and consistent with the estimated field calculated using the geometry of the board. This can be better seen in Figure 17, which shows how the simulation predicts the electric potential and field vary as one traverses the gap between electrodes (left), and at a single point in the middle of the gap moving vertically through the simulated layer of aSe (right). In both cases, the field and potential are found to be as expected: the electric field is uniform as a function of the thickness of the selenium and the potential is constant.
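For orientation, the "estimated field calculated using the geometry of the board" can be reproduced with a simple parallel-field approximation, E ≈ V/d, across the electrode gap. The snippet below is a minimal sketch of that back-of-the-envelope check; it ignores edge effects and uses only the numbers quoted in this appendix.

```python
# Geometric estimate of the transport field in the inter-electrode gap, E ~ V/d.
applied_voltage = 400.0   # V, example potential used in the simulation
gap = 147e-6              # m, electrode gap from the board geometry

field = applied_voltage / gap          # V/m
print(f"{field * 1e-6:.2f} V/um")      # ~2.72 V/um, close to the 2.73 V/um quoted in the text
```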
The areas with the largest non-uniformity in the electric field occur in the regions where one electrode terminates in the gap of the opposite pair of interdigitated electrodes. In this region the edges of the electrodes cause the field to be more non-linear. Figure 18 shows how the electric potential and field vary as one traverses the gap between electrodes (left), and at a single point in the middle of the gap moving vertically through the simulated layer of aSe (right). The point chosen here is an area where the variations seen in Figure 16 are the largest. Due to the geometric effects of the edges of the electrodes, the electric field can be 2-3 times larger than in the uniform region between the electrodes. While this is a large variation in the field, the effect is ultimately determined to be of little significance to the main analysis presented here. Since the non-uniform gap region represents a small fraction of the overall surface area of the board (< 1.5% of the total surface area), the effect on the reconstructed signal due to photons creating electron/hole pairs in this region is expected to be quite small.

Figure 18. Left: The simulated electric potential (top) and electric field (bottom) for three different transport fields across the 147 μm gap between electrodes, in a region where the field is evidently non-uniform due to the edge effects of the electrode. The field can be seen to vary significantly from the geometrically calculated value due to edge and corner effects. Right: The simulated electric potential (top) and field (bottom) for a single position in the middle of the gap between the electrodes (∼73.5 μm from the electrodes), traversing the thickness of the aSe layer (1.2 μm). Here too the electric field is found to vary as a function of the thickness of the selenium.

Table 5 summarizes the values of the peak amplitude and integrated pulse area taken during the main data taking campaign as well as during the repeated measurements made multiple days later at ±2.73 V/μm. The same analysis technique described in Section 3.2 was used for the repeated measurement data set. The results are seen to vary between ∼10-100% when the charge carriers are electrons and ∼10-300% when the charge carriers are holes. The largest variation is seen in the integrated pulse area and, upon inspection of the various waveforms, can be primarily attributed to a shift in the baseline and the subsequent calculation of the integrated area. The room temperature measurements are generally consistent with one another, with the variation between tests being less than 25% in both the peak amplitude and integrated area and the largest variations being seen in the temperature bins between 205 K and 105 K. Table 5. Summary of the peak amplitude and integrated pulse area found during the repeat measurement across the temperature range probed for an electric field of ±2.73 V/μm.
Research and Application Validation of a Feature Wavelength Selection Method Based on Acousto-Optic Tunable Filter (AOTF) and Automatic Machine Learning (AutoML)

Near-infrared spectroscopy has been widely applied in fields such as food analysis and agricultural testing. However, the conventional approach of scanning the full spectrum of a sample and then invoking a model to analyze and predict results suffers from a large volume of collected data, redundant information, slow acquisition, and high model complexity. This paper proposes a feature wavelength selection approach based on an acousto-optic tunable filter (AOTF) spectrometer and automatic machine learning (AutoML). Building on the programmable, sub-nm selection of center wavelengths achieved by the AOTF, the system can rapidly acquire the combinations of feature wavelengths selected by AutoML algorithms, enabling fast output of target substance detection results in the field. An experimental setup was designed and application validation experiments were carried out to verify that the method can significantly reduce the number of NIR sampling points, increase the sampling speed, and improve the accuracy and predictive power of NIR data models, while simplifying the modelling process and broadening the application scenarios.
Near-infrared spectroscopy has been widely applied in various fields such as food analysis and agricultural testing. However, the conventional method of scanning the full spectrum of the sample and then invoking the model to analyze and predict results has a large amount of collected data, redundant information, slow acquisition speed, and high model complexity. This paper proposes a feature wavelength selection approach based on acousto-optical tunable filter (AOTF) spectroscopy and automatic machine learning (AutoML). Based on the programmable selection of sub nm center wavelengths achieved by the AOTF, it is capable of rapid acquisition of combinations of feature wavelengths of samples selected using AutoML algorithms, enabling the rapid output of target substance detection results in the field. The experimental setup was designed and application validation experiments were carried out to verify that the method could significantly reduce the number of NIR sampling points, increase the sampling speed, and improve the accuracy and predictability of NIR data models while simplifying the modelling process and broadening the application scenarios.
Introduction
NIR spectroscopy has many advantages, such as being nondestructive and accurate, and has been widely applied in areas such as food safety [1,2], drug analysis [3], agricultural testing [4], and basic chemistry [5]. The current common approach to NIR spectroscopy is to obtain the full continuous spectral data of the sample in the spectral range and to use the corresponding algorithms to model the correlation between the sample and the spectral data. However, the NIR spectral data of a sample has a relatively high dimensionality and also suffers from inter-spectral overlap, covariance, and noise, which negatively affects the performance of the NIR spectral model [6]. The selection of the effective feature wavelengths of the sample is extremely important at this point. Feature wave-length selection extracts spectrally valid variables and removes useless or interfering wavelength data, improving the accuracy and predictiveness of the data model. Only spectral data from a specific band or specific wavelength points are required to build a well-performing detection model, requiring significantly fewer wavelength sampling points.
Numerous algorithms exist to select the characteristic wavelengths from the collected NIR spectra of samples and build reliable models [7], such as competitive adaptive reweighted sampling (CARS) [8], the random frog algorithm (RF) [9], and the PLS-genetic algorithm (PLS-GA) [10]. These algorithms perform differently on different data and problems, and in practice modelling usually relies on human experience to select the most suitable model, which increases the complexity of the modelling process. In contrast, the recently proposed automatic machine learning (AutoML) [11] allows for automated model selection without human intervention. The framework automatically generates the network structure that is most efficient for the task at hand and automatically searches for the best sequence of operations under the different structures it produces. This approach can effectively reduce the complexity of modelling while ensuring the robustness of the model.
After modelling the characteristic wavelengths of the sample, the current instruments still need to collect the full spectrum first and then select the characteristic spectra to input into the model to obtain the final results [12,13]. However, this full-spectrum acquisition method results in slow data acquisition and processing. Although common grating-based spectroscopy instruments can acquire data quickly, the cost is significantly higher due to the use of array detectors, and problems such as non-uniformity between detectors can also affect the signal-to-noise ratio of the acquisition. In some applications where fast real-time processing is required, such as industrial on-line analysis, faster spectral acquisition and higher data quality are often required. Therefore, an NIR spectrometer that can be coupled with a feature wavelength filtering algorithm to achieve variable wavelength acquisition is a solution that can guarantee both speed and data quality and model robustness. An AOTF spectrometer can change the diffraction wavelength by changing the frequency of the RF power signal added to it, which can achieve sub-nm-level central wavelength picking in the full spectrum and use a unit detector to obtain data, which can effectively improve the signal-to-noise ratio and is an ideal device that can be used with the feature wavelength selection algorithm.
AOTF-based NIR spectroscopy has been extensively studied in food inspection and agricultural applications. Several studies have been conducted to implement commercial AOTF-NIR spectrometers for nondestructive detection of dried apple and olive fruit [14][15][16][17]. Diffuse reflectance spectra were acquired in the wavelength range 1100-2300 nm at 2 nm intervals using the Luminar 5030 AOTF-NIR Miniature 'Hand-held' Analyzer (Brimrose Corporation, Baltimore, MD, USA). The feasibility of using AOTF-NIR spectroscopy in an intelligent drying system for nondestructive detection and monitoring of physicochemical changes in organic apple wedges during the drying process was investigated. Partial least squares (PLS) regression models were developed to monitor changes in water activity, moisture content, soluble solids content, and chroma during drying. The classification models were computed using K-means and Partial Least Squares Discriminant Analysis (PLS-DA) algorithms in sequence [14]. AOTF-NIR was also satisfactorily applied to predict phenolic compounds [15] and monitor the ripening of olives [16]. Water content, fat content, and free acidity in olive fruit were predicted by online NIR spectroscopy combined with chemometrics techniques [17]. In addition to the PLS regression algorithm, a sensor software based on an artificial neural network (SS-ANN) was designed by Allouche et al. [18] for monitoring olive malaxation. In hyperspectral imaging, a prototype on-line AOTF-based hyperspectral image acquisition system (450-900 nm) has also been developed for tenderness assessment of beef carcasses [19]. However, in these previous studies, it is usually necessary to collect all spectral data before making predictions, and for AOTF spectroscopy more research is needed to optimize the sampling strategy according to specific applications. Therefore, this paper utilizes the features of AOTF programming to acquire specific wavelengths and combines an AOTF-based spectrometer with an automatic machine learning-based feature wavelength selection algorithm to achieve rapid output of target substance detection results in the field. This form of application will effectively improve the efficiency of online NIR spectroscopy systems and bring broader prospects for industry applications, as it can significantly reduce the amount of data to be collected and reduce the time required for the cyclic data collection process during online inspection.
AutoML-Based Feature Wavelength Selection
The efficiency of current machine learning algorithms often relies on human guidance, such as data preprocessing, feature selection, choice of algorithmic models, and the determination of hyperparameters. As machine learning algorithms grow more complex, the number of available algorithms and pipelines keeps increasing. AutoML is a spatial search optimization method that can find the optimal solution within a finite search space in the shortest possible time, reducing time and labor costs while improving operational accuracy [11]. AutoML generally consists of two main components: an evaluator and a tuner. The evaluator measures the performance of the learning tool under the configurations provided by the tuner and feeds the results back, while the tuner uses this feedback to update the configuration of the learning tool. In addition to algorithm selection, hyperparameter optimization [20], and neural network architecture search, AutoML can also cover automatic data preparation, automatic feature selection, automatic pipeline construction, automatic model selection, and ensemble learning [21]. One AutoML solution is the AutoGluon-Tabular architecture [22], which is well suited to structured-data regression problems and provides robust, accurate models. The analytical modeling of NIR spectral data is a typical structured-data regression problem, and the screening of feature wavelengths is a variable selection problem in machine learning, so both fit the AutoGluon-Tabular architecture well; it is therefore used in this paper for the analytical modeling of NIR spectral data and for model-based feature wavelength screening. This architecture differs from the common hyperparameter-search-based architectures in that it relies on fusing multiple models that do not require a hyperparameter search, thereby avoiding the search and increasing the number of models trained in a prescribed time. The operational steps and the flow chart for feature wavelength selection from NIR spectral data using the AutoGluon-Tabular architecture are shown in Figure 1.
(1) Data pre-processing: Firstly, the acquired transmission/reflection spectral data are converted into absorbance data, and the data are loaded into AutoGluon-Tabular. AutoGluon-Tabular will automatically check the label columns, determine the type of problem, and distinguish whether it is a classification problem or a regression problem; then the data are pre-processed, the feature data are classified, and the useless feature data are discarded.
(2) Model training: After the data pre-processing, the data are trained by a series of machine learning models, using ensembling and stacking techniques to combine multiple models. AutoGluon-Tabular selects models for training in a specific order, first selecting the models with reliable performance and then gradually selecting the more computationally intensive but less reliable models. The current AutoGluon-Tabular architecture supports algorithms such as Random Forest [23], extremely randomized trees, k nearest neighbors, the LightGBM boosted tree, the CatBoost boosted tree, the AutoGluon-Tabular deep neural network, etc. In this paper, we train all of the above algorithms. AutoGluon-Tabular ensembles multiple models and stacks them in multiple layers, which makes better use of the allocated training time than seeking out a single best model. We set 'root_mean_squared_error' as the "eval_metric". AutoGluon-Tabular tunes factors such as hyperparameters, early stopping, and ensemble weights in order to improve this metric on validation data.
(3) Wavelength importance ranking: We calculated the permutation importance, which measures the contribution of each feature, and ranked the wavelength variables according to it. The higher the permutation importance, the larger the contribution of the wavelength to the model and the more representative the wavelength is of the characteristics of the detection target.
(4) Feature wavelength combination screening: After ranking the wavelengths by importance, it is still unclear which wavelength combination achieves the best model performance. To screen for the best combination of feature wavelengths, we select the wavelengths with the top X permutation importance, retrain the model, record the evaluation metric of the model trained on those X wavelengths, reduce X by 1, and repeat the retraining process until X equals 0, at which point training stops. Based on the different values of X and the corresponding evaluation metrics, we introduce a precision criterion to determine the optimal number of wavelength variables: given a precision value, the model suggested by the criterion is the one with the lowest number of variables among all models whose evaluation metric differs from the minimum evaluation metric by no more than the precision.
Finally, we determined the optimal wavelength combination to achieve the minimum number of measurements for a more effective model.
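The sketch below illustrates steps (2)-(4) in code, assuming the AutoGluon TabularPredictor interface (fit, feature_importance, leaderboard). The function name, the time limit, and the 20% tolerance (taken from the precision criterion used later in this paper) are illustrative, and exact argument names may differ between AutoGluon versions.

```python
import pandas as pd
from autogluon.tabular import TabularPredictor

def select_wavelengths(train_df, label, precision=0.20, time_limit=600):
    """Sketch of the AutoML-based screening described above: (2) train an
    ensemble, (3) rank wavelengths by permutation importance, (4) retrain on
    the top-X wavelengths and keep the smallest X whose validation RMSE stays
    within `precision` of the best RMSE observed."""
    predictor = TabularPredictor(
        label=label, eval_metric="root_mean_squared_error"
    ).fit(train_df, time_limit=time_limit)

    # Permutation importance of every wavelength column (higher = more useful).
    importance = predictor.feature_importance(train_df)["importance"]
    ranked = importance.sort_values(ascending=False).index.tolist()

    results = {}  # number of wavelengths -> (validation RMSE, wavelength list)
    for x in range(len(ranked), 0, -1):
        cols = ranked[:x] + [label]
        sub = TabularPredictor(
            label=label, eval_metric="root_mean_squared_error"
        ).fit(train_df[cols], time_limit=time_limit)
        # AutoGluon reports validation scores as negative RMSE ("score_val").
        rmsev = -sub.leaderboard(silent=True)["score_val"].max()
        results[x] = (rmsev, ranked[:x])

    best = min(rmse for rmse, _ in results.values())
    # Smallest wavelength count whose RMSEV stays within the tolerance.
    x_opt = min(x for x, (rmse, _) in results.items()
                if rmse <= best * (1 + precision))
    return results[x_opt][1]
```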
Instrument Design and Working Principle
After the feature wavelengths of the spectral data have been screened by AutoML, a well-performing model and accurate predictions can be obtained by collecting spectral data only at the selected wavelengths. However, extracting feature wavelengths with a conventional spectrometer still requires scanning the full spectral band in advance, which wastes sampling time and resources. These problems can be effectively solved if data can be sampled in a targeted and fine-grained manner according to the wavelength screening results; the AOTF near-infrared spectrometer, with its freely adjustable wavelength, can perform exactly this kind of discrete, targeted spectral sampling for a given set of selected wavelengths.
An AOTF is an electrically tunable filter based on the acousto-optic effect. It is mainly composed of a birefringent acousto-optic crystal (the most widely used material being TeO2), a piezoelectric transducer, and an acoustic wave absorber, as shown in Figure 2a. In practical applications, the RF drive signal is converted into ultrasonic waves inside the crystal by a piezoelectric transducer fixed on the crystal surface, and the diffraction wavelength is selected by changing the frequency of the RF signal, giving a wide tuning range and fast scanning speed. Spectrometers based on AOTFs have the advantages of small size, light weight, all-solid-state construction, and strong environmental adaptability, and have been widely used in many fields such as food inspection [24], environmental monitoring [25,26], and deep space exploration [27,28]. According to the momentum matching condition, the tuning relationship between the driving frequency f_a(λ) of the AOTF and the output diffraction wavelength is given by Equation (1) [29,30], where λ is the incident light wavelength, V_a is the ultrasonic propagation velocity in the crystal, f_a(λ) is the corresponding ultrasonic frequency, θ_i is the incident angle for the device, θ_d is the output beam angle, n_i is the refractive index for the incident light, and n_d is the refractive index for the diffracted light. The relationship curve between the diffraction wavelength of the crystal and the driving frequency is shown in Figure 2b. The curve data were obtained by measurement: a HORIBA iHR 320 spectrometer was used to generate monochromatic light; the wavelength of the monochromatic light was first set, and then a scan was performed by changing the driving frequency in fine steps. The frequency at which the most intense diffracted light was measured is the driving frequency corresponding to that wavelength. This procedure was repeated over the entire operating spectral range to obtain all the data. Given this good correspondence between RF driving frequency and diffraction wavelength, coded control of the RF driving frequency makes it possible both to scan wide spectral bands point by point in time and to pick specific diffraction center wavelengths with high precision, with simple wavelength selection and good repeatability.
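The expression referred to as Equation (1) did not survive extraction here. For reference, one commonly quoted form of the momentum-matching tuning relation, written in the variables defined above, is the following; this is an assumed standard form and may differ in detail from the exact equation used in the original paper:

```latex
f_a(\lambda) \;=\; \frac{V_a}{\lambda}\,
\sqrt{\,n_i^{2} + n_d^{2} - 2\,n_i\, n_d \cos\!\left(\theta_i - \theta_d\right)\,}
```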
Combining the AutoML feature wavelength screening algorithm with AOTF-specific band sampling, and taking advantage of the characteristics of the AOTF spectrometer, the optical path structure of this system is shown in Figure 3a. The system uses a dual optical path and dual detector design, with an adjustable beam splitter introducing a reference optical path; comprehensive data processing compensates for light intensity fluctuations caused by instability of the light source and for errors caused by environmental interference, improving the accuracy of the instrument. The actual spectra are sampled by a short-wave infrared AOTF, using both high-frequency and low-frequency drivers to achieve a wide spectral sampling range of 900-2400 nm with a spectral resolution of 3.75-9.6 nm. The main performance parameters of the AOTF are shown in Table 1. The flow chart of the system is shown in Figure 3b. Firstly, the full spectrum of the substance is obtained with the AOTF spectrometer, and the feature wavelength screening model for the target is built using the AutoML algorithm to extract the feature wavelengths of the sample. After that, the RF drive function for the feature wavelengths is generated from the selected feature wavelength combinations together with the drive frequency function of the AOTF, and the drive function is stored in the RF controller.
When the target is measured again, the system can take advantage of the flexible adjustment of the AOTF central wavelength at the subnanometer level to accurately locate the desired band and achieve the adjustable, high-precision, and fast acquisition of the feature spectrum. By transferring the obtained data to the model established by the combination of feature wavelengths, the detection and analysis of the characteristic components or contents of the sample can be realized.
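As an illustration of how the stored drive function could be produced, the sketch below maps a set of selected feature wavelengths to RF drive frequencies by interpolating the measured calibration curve of Figure 2b. This is a minimal sketch: the calibration points and the example wavelengths are placeholders, not the actual calibration data or the wavelengths in Table 6.

```python
import numpy as np

# Placeholder calibration points for the curve of Figure 2b
# (diffraction wavelength vs. RF drive frequency); not the real calibration.
calib_wavelength_nm = np.array([900.0, 1200.0, 1600.0, 2000.0, 2400.0])
calib_frequency_mhz = np.array([120.0, 90.0, 65.0, 50.0, 40.0])

def drive_frequencies(selected_wavelengths_nm):
    """Map AutoML-selected feature wavelengths to AOTF RF drive frequencies
    by interpolating the measured calibration curve."""
    return np.interp(selected_wavelengths_nm,
                     calib_wavelength_nm, calib_frequency_mhz)

# Example: frequencies to store in the RF controller for an 8-wavelength scan.
print(drive_frequencies([1450, 1700, 1900, 2050, 2150, 2200, 2250, 2290]))
```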
Application Results
To verify the effectiveness of the system, two different types of liquid samples (milk and baijiu) were selected, and validation experiments were performed according to the detection system and testing process shown in Figure 3. The experimental results of the two measurement methods for fat content (%) in milk samples and alcohol by volume (ABV, %) in baijiu samples are shown.
Sample Preparation
For the milk samples, we chose nine different brands of liquid pure milk, and five samples of each milk brand were prepared for a total of 45 samples. The nine milk samples had different fat contents, as shown in Table 2, to establish the relationship between spectral data and fat content in the milk model using the nominal value of fat content. For the liquor samples, we selected two brands and five types of liquor divided into 209 samples marked and sealed for storage. The determination of alcoholic content was entrusted to the Shenzhen Institute of Measurement and Quality Inspection using the national standard (GB 5009.225-2016) alcoholometer method, and the measurement result was the volume fraction of alcohol, i.e., alcoholic content (%vol). The range of alcoholic content of the samples was 42.0-56.2, with an accuracy error of 0.1, as shown in Table 3. The NIR transmittance spectra of all samples were collected using this system with a spectral sampling range of 900-2400 nm and a sampling interval of 5 nm, and the NIR spectral data for each sample was 301 wavelength points. For the milk samples, a 0.5 mm optical path length quartz cuvette was used to hold the samples, and the transmittance spectra of 45 samples were obtained. For the baijiu samples, the transmittance spectra of 209 samples were obtained using a quartz cuvette with a 2 mm optical path length.
The absorbance spectra of all samples were obtained by taking log(1/T) of the transmittance spectra of all samples. Savitzky-Golay (SG) [31,32] smoothing was used to reduce the random noise. The absorbance spectra were processed using Multiplicative Scatter Correction (MSC) [33] to remove the influence of non-concentration factors such as baseline shift and granularity on the spectra. The spectral data with low signal-to-noise ratio at the head and tail were removed, and the final data were in the wavelength range of 1250-2300 nm with a 5 nm sampling interval, for a total of 211 data points. The absorbance spectral data obtained by processing are shown in Figure 4.
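The preprocessing chain just described (absorbance conversion, Savitzky-Golay smoothing, MSC) can be sketched as below. The smoothing window and polynomial order are assumptions for illustration; the paper does not state the SG parameters used.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(transmittance, window=11, polyorder=2, reference=None):
    """Absorbance A = log10(1/T), Savitzky-Golay smoothing, then Multiplicative
    Scatter Correction against a reference spectrum (mean spectrum by default).
    `transmittance` has shape (n_samples, n_wavelength_points)."""
    absorbance = np.log10(1.0 / transmittance)
    smoothed = savgol_filter(absorbance, window, polyorder, axis=1)

    # MSC: regress each spectrum on the reference and remove slope and offset.
    ref = smoothed.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(smoothed)
    for i, spectrum in enumerate(smoothed):
        slope, offset = np.polyfit(ref, spectrum, deg=1)
        corrected[i] = (spectrum - offset) / slope
    return corrected, ref
```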
Modeling and Feature Wavelength Selection
The spectra of milk and alcohol samples obtained by AOTF spectrometer were converted into spectral absorbance values as input features, and fat content and alcohol content were used as model target values. The data sets were randomly divided, with 70% (31 samples) of milk used as training data and the remaining 30% (14 samples) used as test data; 70% (146 samples) of baijiu was used as training data and the remaining 30% (63 samples) was used as test data, as shown in Tables 4 and 5. Only the training data was provided to AutoML framework at training time, while the test data was only provided at prediction time. The fit application programming interface (API) of AutoGluon-Tabular was used to train these models. Within the call to fit, AutoGluon automatically preprocesses the raw data, identifies what type of prediction problem this is (binary, multi-class classification, or regression), partitions the data into various folds for model-training vs. validation, individually fits various models, and finally creates an optimized model ensemble that outperforms any of the individual trained models.
The performance of these models was adjudged based on the root mean square errors (RMSE) of validation (RMSEV) and prediction (RMSEP). RMSEV is obtained from AutoGluon, where it is named "score val", and RMSEP is calculated from the prediction results on the test set. This is shown in Figure 5a,c.

Figure 5. Results of the "score val" (RMSEV) and permutation importance for the two samples. (a) "score val" (RMSEV) of the milk samples; we also calculated RMSEP as "score test". (b) Permutation importance of the milk samples. (c) "score val" (RMSEV) of the alcohol samples; we also calculated RMSEP as "score test". (d) Permutation importance of the alcohol samples.
The importance ranking [34] of the data variables was performed based on permutation importance for the milk and baijiu samples, as shown in Figure 5b,d. Higher importance indicates that the feature variable has more influence on the model performance.
Based on the accuracy criteria, we calculated the difference between the RMSEV and the minimum RMSEV for all models, where the model with a difference not higher than 20% and with the least number of variables was selected. According to our precision criterion, eight characteristic wavelengths were selected for milk, and eight characteristic wavelengths were selected for baijiu. The actual results of the selected wavelengths are shown in Table 6.
Table 6. Samples and their selected wavelengths (nm).
Experimental Results of the System
The AOTF-NIR spectroscopy system combining the model and the corresponding sampling strategy was deployed to a real application scenario to perform experiments on the measurement of samples with unknown target value content. Using this instrumentation system with models, we were able to perform 211-point and 8-point measurements on the samples, respectively. The data obtained from the measurements could then be used to invoke the corresponding model for component content prediction. The analytical performance of the system was evaluated when the full-spectrum model and the characteristic wavelength model were applied separately. The performance of the prediction could be evaluated by calculating new RMSEPs.
The RMSEV shown in Table 7 is the "score val" obtained during the training phase. We also performed 211-point and 8-point spectral acquisitions for the above test set of samples, respectively. After data acquisition, the average spectrum calculated in the model building phase is used as the true spectrum for MSC processing. No smoothing operation is performed at this stage. The data are then input to the model to derive predicted values, which are displayed on the software interface. The RMSEP metric is obtained by calculating the error between the predicted and true values obtained from the inference of the input model after the spectral measurement; the inference time is automatically recorded by the program in the host computer; the sampling time is the estimated time taken by the instrument to acquire data at the desired wavelength point. As shown in Table 7 above, the use of the characteristic wavelength model and the corresponding sampling strategy achieves a slight improvement in prediction performance, while the sampling time consumption is~22-fold smaller. The prediction performance improvement indicates that there are many irrelevant wavelengths throughout the spectrum that do not contribute to effective prediction of the target values and negatively affect the performance of the prediction model.
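For illustration, the field measurement step (acquire the eight-point spectrum, apply MSC against the stored mean spectrum without smoothing, then query the trained model) might look like the following sketch. The saved-model path, the column names, and the reference-spectrum file are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
from autogluon.tabular import TabularPredictor

# Hypothetical artifacts saved at model-building time.
predictor = TabularPredictor.load("milk_fat_8wl_model/")
ref_spectrum = np.load("milk_mean_spectrum_8wl.npy")   # mean absorbance at the 8 wavelengths
columns = [f"wl_{i}" for i in range(8)]                # placeholder wavelength column names

def predict_from_measurement(transmittance_8pt):
    """Convert an 8-point transmittance measurement to absorbance, apply MSC
    against the stored reference spectrum (no smoothing at this stage, as in
    the text), and query the trained model for the target content."""
    absorbance = np.log10(1.0 / np.asarray(transmittance_8pt, dtype=float))
    slope, offset = np.polyfit(ref_spectrum, absorbance, deg=1)
    corrected = (absorbance - offset) / slope
    sample = pd.DataFrame([corrected], columns=columns)
    return float(predictor.predict(sample).iloc[0])
```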
Combined with the above experimental results, the automatic machine learning framework can accurately and robustly filter the wavelength combinations with the best prediction performance for the target values; combined with the eight-wavelength models and the corresponding new sampling mechanism that only collects data at the eight wavelength points, higher prediction performance and considerable improvement in detection efficiency can be obtained compared with the full-spectrum model and the full-spectrum sampling mechanism. This advantage is particularly critical for the application of AOTF-NIR analysis systems for online inspection. The combination of AOTF's flexibility and automatic machine learning provides a good reference case for the application of intelligent optical inspection in automated production by applying an automatic machine learning framework to easily build a well-performing model for different application objectives and adaptively change the sampling strategy.
Conclusions
In this study, an NIR detection system based on an AOTF with AutoML feature wavelength screening was proposed and validated. Combining various chemometric algorithms, spectral detection based on the characteristic wavelengths instead of the full spectrum improves detection efficiency. At the same time, the model is simplified, interpretability is improved, and storage and computational resource usage are reduced. Taking milk and baijiu as examples, the validation experiments achieved a reduction in the number of sampling wavelengths while providing accurate, nondestructive, and rapid determination of fat and alcohol content.
Main Report
Background: States vary widely in their use of newborn screening tests, with some mandating screening for as few as three conditions and others mandating as many as 43 conditions, including varying numbers of the 40+ conditions that can be detected by tandem mass spectrometry (MS/MS). There has been no national guidance on the best candidate conditions for newborn screening since the National Academy of Sciences report of 1975 [1] and the United States Congress Office of Technology Assessment report of 1988 [2], despite rapid developments since then in genetics, in screening technologies, and in some treatments.

Objectives: In 2002, the Maternal and Child Health Bureau (MCHB) of the Health Resources and Services Administration (HRSA) of the United States Department of Health and Human Services (DHHS) commissioned the American College of Medical Genetics (ACMG) to: Conduct an analysis of the scientific literature on the effectiveness of newborn screening. Gather expert opinion to delineate the best evidence for screening for specified conditions and develop recommendations focused on newborn screening, including but not limited to the development of a uniform condition panel. Consider other components of the newborn screening system that are critical to achieving the expected outcomes in those screened.

Methods: A group of experts in various areas of subspecialty medicine and primary care, health policy, law, public health, and consumers worked with a steering committee and several expert work groups, using a two-tiered approach to assess and rank conditions. A first step was developing a set of principles to guide the analysis. This was followed by developing criteria by which conditions could be evaluated, and then identifying the conditions to be evaluated. A large and broadly representative group of experts was asked to provide their opinions on the extent to which particular conditions met the selected criteria, relying on supporting evidence and references from the scientific literature. The criteria were distributed among three main categories for each condition: The availability and characteristics of the screening test; The availability and complexity of diagnostic services; and The availability and efficacy of treatments related to the conditions. A survey process utilizing a data collection instrument was used to gather expert opinion on the conditions in the first tier of the assessment. The data collection format and survey provided the opportunity to quantify expert opinion and to obtain the views of a diverse set of interest groups (necessary due to the subjective nature of some of the criteria). Statistical analysis of data produced a score for each condition, which determined its ranking and initial placement in one of three categories (high scoring, moderately scoring, or low scoring/absence of a newborn screening test). In the second tier of these analyses, the evidence base related to each condition was assessed in depth (e.g., via systematic reviews of reference lists including MedLine, PubMed and others; books; Internet searches; professional guidelines; clinical evidence; and cost/economic evidence and modeling). The fact sheets reflecting these analyses were evaluated by at least two acknowledged experts for each condition.
These experts assessed the data and the associated references related to each criterion and provided corrections where appropriate, assigned a value to the level of evidence and the quality of the studies that established the evidence base, and determined whether there were significant variances from the survey data. Survey results were subsequently realigned with the evidence obtained from the scientific literature during the second-tier analysis for all objective criteria, based on input from at least three acknowledged experts in each condition. The information from these two tiers of assessment was then considered with regard to the overriding principles and other technology or condition-specific recommendations. On the basis of this information, conditions were assigned to one of three categories as described above: 1. Core Panel; 2. Secondary Targets (conditions that are part of the differential diagnosis of a core panel condition); and 3. Not Appropriate for Newborn Screening (either no newborn screening test is available or there is poor performance with regard to multiple other evaluation criteria).
ACMG also considered features of optimal newborn screening programs beyond the tests themselves by assessing the degree to which programs met certain goals (e.g., availability of educational programs, proportions of newborns screened and followed up). Assessments were based on the input of experts serving in various capacities in newborn screening programs and on 2002 data provided by the programs of the National Newborn Screening and Genetics Resource Center (NNSGRC). In addition, a brief cost-effectiveness assessment of newborn screening was conducted.
Results:
Uniform panel - A total of 292 individuals determined to be generally representative of the regional distribution of the United States population and of areas of expertise or involvement in newborn screening provided a total of 3,949 evaluations of 84 conditions. For each condition, the responses of at least three experts in that condition were compared with those of all respondents for that condition and found to be consistent. A score of 1,200 on the data collection instrument provided a logical separation point between high scoring conditions (1,200–1,799 of a possible 2,100) and low scoring (<1,000) conditions. A group of conditions with intermediate scores (1,000–1,199) was identified, all of which were part of the differential diagnosis of a high scoring condition or apparent in the result of the multiplex assay. Some are identified by screening laboratories and others by diagnostic laboratories. This group was designated as a "secondary target" category for which the program must report the diagnostic result.
Using the validated evidence base and expert opinion, each condition that had previously been assigned to a category based on scores gathered through the data collection instrument was reconsidered. Again, the factors taken into consideration were: 1) available scientific evidence; 2) availability of a screening test; 3) presence of an efficacious treatment; 4) adequate understanding of the natural history of the condition; and 5) whether the condition was either part of the differential diagnosis of another condition or whether the screening test results related to a clinically significant condition. The conditions were then assigned to one of three categories as previously described (core panel, secondary targets, or not appropriate for newborn screening).
Among the 29 conditions assigned to the core panel are three hemoglobinopathies associated with a Hb/S allele, six amino acidurias, five disorders of fatty acid oxidation, nine organic acidurias, and six unrelated conditions (congenital hypothyroidism (CH), biotinidase deficiency (BIOT), congenital adrenal hyperplasia (CAH), classical galactosemia (GALT), hearing loss (HEAR) and cystic fibrosis (CF)). Twenty-three of the 29 conditions in the core panel are identified with multiplex technologies such as tandem mass spectrometry (MS/MS) or high pressure liquid chromatography (HPLC). On the basis of the evidence, six of the 35 conditions initially placed in the core panel were moved into the secondary target category, which expanded to 25 conditions. Test results not associated with potential disease in the infant (e.g., carriers) were also placed in the secondary target category. When newborn screening laboratory results definitively establish carrier status, the result should be made available to the health care professional community and families.
Twenty-seven conditions were determined to be inappropriate for newborn screening at this time. Conditions with limited evidence reported in the scientific literature were more difficult to evaluate, quantify and place in one of the three categories. In addition, many conditions were found to occur in multiple forms distinguished by age-of-onset, severity, or other features. Further, unless a condition was already included in newborn screening programs, there was a potential for bias in the information related to some criteria. In such circumstances, the quality of the studies underlying the data, such as expert opinion that considered case reports and reasoning from first principles, determined the placement of the conditions into particular categories.
Newborn screening program optimization - Assessment of the activities of newborn screening programs, based on program reports, was done for the six program components: education; screening; follow-up; diagnostic confirmation; management; and program evaluation. Considerable variation was found between programs with regard to whether particular aspects (e.g., prenatal education program availability, tracking of specimen collection and delivery) were included and the degree to which they are provided. Newborn screening program evaluation systems also were assessed in order to determine their adequacy and uniformity with the goal being to improve interprogram evaluation and comparison to ensure that the expected outcomes from having been identified in screening are realized.
Conclusions:
The state of the published evidence in the fast-moving worlds of newborn screening and medical genetics has not kept up with the implementation of new technologies, thus requiring the considerable use of expert opinion to develop recommendations about a core panel of conditions for newborn screening. Twenty-nine conditions were identified as primary targets for screening from which all components of the newborn screening system should be maximized. An additional 25 conditions were listed that could be identified in the course of screening for core panel conditions. Programs are obligated to establish a diagnosis and communicate the result to the health care provider and family. It is recognized that screening may not have been maximized for the detection of these secondary conditions but that some proportion of such cases may be found among those screened for core panel conditions.
With additional screening, greater training of primary care health care professionals and subspecialists will be needed, as will the development of an infrastructure for appropriate follow-up and management throughout the lives of children who have been identified as having one of these rare conditions. Recommended actions to overcome barriers to an optimal newborn screening system include: the establishment of a national role in the scientific evaluation of conditions and the technologies by which they are screened; standardization of case definitions and reporting procedures; enhanced oversight of hospital-based screening activities; long-term data collection and surveillance; and consideration of the financial needs of programs to allow them to deliver the appropriate services to the screened population.
INTRODUCTION
The work reported here is pursuant to the HRSA/MCHB Contract No. 240-01-0038, Standardization of Outcomes and Guidelines for State Newborn Screening Programs. In 1999, the American Academy of Pediatrics (AAP) Newborn Screening Task Force recommended that, "HRSA should engage in a national process involving government, professionals, and consumers to advance the recommendations of this Task Force and assist in the development and implementation of nationally recognized newborn screening system standards and policies." The Task Force was concerned about the lack of uniformity among states, particularly with regard to their newborn screening condition panels.
In 2001, in response to that recommendation, HRSA/MCHB requested that ACMG outline a process of standardization of outcomes and guidelines for State newborn screening programs and define responsibilities for collecting and evaluating outcome data, including a recommended uniform panel of conditions to include in State newborn screening programs. It was expected that the analytical endeavor and subsequent recommendations be definitive and that the recommendations be based on the best scientific evidence and analysis of that evidence. ACMG was specifically asked to develop recommendations to address: 1. A uniform condition panel (including implementation methodology); 2. Model policies and procedures for State newborn screening programs (with consideration of a national model); 3. Model minimum standards for State newborn screening programs (with consideration of national oversight); 4. A model decision matrix for consideration of State newborn screening program expansion; and 5. Consideration of the value of a national process for quality assurance and oversight.
This report is a product of the work undertaken by ACMG for HRSA. A methods section begins by providing the broad context for the newborn screening system and the overarching principles for developing newborn screening guidelines. It then provides the criteria that were used in the analyses of conditions under consideration for newborn screening programs. This is followed by a description of the development and use of tools to collect data that would complement evidence gathered from a review of the scientific literature, and also by a description of the process for obtaining additional expert information and opinion. The results of these analyses are provided, as well as recommendations for moving forward.
Although the criteria by which the conditions are evaluated and the results of those evaluations are the primary goals of this effort, associated and supporting goals also are described because of their relevance to the newborn screening system. In order to realize the expected outcomes for newborns and their families, the full system must be operating efficiently and effectively. 3-6 Efforts have been made to assess the newborn screening system based on its component parts, which allows for the development of specific standards for program performance and for an assessment of status of the programs. This assessment also provides the opportunity to determine the extent to which a systematic national approach to quality assessment and assurance is possible.
SECTION I: DEVELOPING A UNIFORM SCREENING PANEL A. Background
In the United States, newborn screening is a highly visible and important State-based public health program 2,7-10 that began over 40 years ago. Since the early 1960s, when Robert Guthrie 11,12 devised a screening test for phenylketonuria (PKU) using a newborn bloodspot dried onto a filter paper card, more than 150 million infants have been screened for a number of genetic and congenital disorders. States and territories mandate newborn screening of all infants born within their jurisdiction for certain treatable disorders that may not otherwise be detected before developmental disability or death occurs. Newborns with these disorders typically appear normal at birth. The testing and follow-up services of newborn screening programs are designed to provide early diagnosis and treatment before significant, irreversible damage occurs. Appropriate compliance with the medical management prescribed can allow most affected newborns to develop normally. The generally acknowledged components of a newborn screening system 4,6,13 include the following: 1. Education of professionals and parents; 2. Screening (specimen collection, submission, and testing); 3. Follow-up of abnormal and unsatisfactory test results; 4. Confirmatory testing and diagnosis; 5. Medical management and periodic outcome evaluation; and 6. System quality assurance, including program evaluation, validity of testing systems, efficiency of follow-up and intervention, and assessments of long-term benefits to individuals, families, and society.
Based on cumulative data from newborn screening programs, reported annually to the HRSA-funded NNSGRC, it is estimated that about 1 in every 800 newborns in the United States-or 5,000 of 4.1 million newborns each year-is born with a potentially severe or lethal condition for which screening and the treatment for the prevention of many or all of the complications of the condition are available. As the model for public health-based population genetic screening, newborn screening is nationally recognized as an essential program that aims to ensure the best outcome for the nation's newborn population.
NEWBORN SCREENING PROGRAMS: THE CHANGING LANDSCAPE
The infrastructure landscape.
In the United States, every State (hereafter, the term "State" will include both States and territorial jurisdictions) presently has a statute or regulation mandating or allowing public health newborn screening. As such, newborn screening is universally available in varying forms to all infants born in the United States, regardless of ability to pay or other familial factors (e.g., ethnicity, area of residence, literacy level, or language). It is important that universal access to this screening and its central public health focus are maintained, while efforts move forward to bring uniformity and equity to State screening efforts.
Since the inception of newborn screening, the conditions screened for and the systems developed for follow-up have varied among States. Due to a dearth of national newborn screening standards (aside from the National Committee for Clinical Laboratory Standards (NCCLS) "Standard on Blood Collection on Filter Paper"), guidance from the HRSA-funded Council of Regional Networks for Genetic Services (CORN) and limited advice from national advisory committees and national medical or public health professional organizations regarding newborn screening policies and conditions to be included in screening mandates, each State independently determines the conditions and screening procedures for its program.
Many States utilize advisory committees and seek input from experts and other State newborn screening laboratories and private companies in addition to independently reviewing the available scientific evidence before making recommendations for test panels. In some States, decisions about newborn screening are in the hands of the State legislature, which controls the State public health system and its finances. Every State has a statute or regulation that allows or mandates universal newborn screening-sometimes specifying the conditions to be screened, the consent/dissent process, the laboratory, and the laboratory testing procedure to be used. In most cases, decisions about the newborn screening panel are delegated to State health officials, a State board of health, or a genetics or newborn screening advisory committee. Sometimes the decision-making process might involve a combination of agencies, advisory bodies, and policy makers.
Pilot studies usually precede the formal implementation of changes to the newborn screening panels. In addition, the mechanism to expand testing panels, change testing protocols, and fund newborn screening varies among the States, with the basic criteria from the inception of newborn screening being used by many. 14 Due to these factors and a lack of national consensus or guidelines, there is presently a large disparity in screening services available to newborns. For example, at the present time, eight States mandate screening for as few as four conditions, while a number of States screen for as many as 30 conditions (information taken from NNSGRC website www.genes-r-us.uthscsa.edu/ nbsdisorders.pdf July 20, 2004). This divergence among States regarding which conditions should be mandated for screening has resulted from several factors, including differences in: 1) the level of resources available (personnel, equipment and service capacity); and 2) interpretations of the available data concerning given conditions (incidence, treatability, impact) and new screening methodologies. 15 Approaches to calculating the number of conditions included in screening also are variable, with some programs counting hemoglobinopathy screening as a single test and others including it as one of several tests (given the simultaneous ability to detect over 700 variant conditions including SS-disease, SC disease, Sϩ-thalassemia, etc.). The expert group concluded that there should be standardization of what constitutes a screened condition. (This issue is discussed in greater detail in the section describing the conditions evaluated.) It is clear that States must retain strong oversight of mandated screening programs in order to ensure the appropriate delivery of quality screening and ancillary services to the screened population. However, how local ancillary services are to be directly provided within programs is less clear, particularly given the nationwide lack of the specialized medical expertise and laboratory testing that is needed to definitively diagnose many of these rarer inherited genetic conditions. One suggestion to address the maldistribution of needed medical expertise has been through the organization of that expertise at the regional level, as with the newly funded HRSA/MCHB Regional Genetics and Newborn Screening Collaboratives. This effort is supported by the history of regionalization (geographically close) and consolidation (geographically dispersed) of newborn screening laboratory testing services, which has been advantageous for States with low numbers of births. Regional programs have higher numbers of laboratory tests, which results in cost savings and decreased analytical variability.
Another challenge raised by the expansion of newborn screening is the lack of interconnecting relationships between child health professionals and subspecialists, particularly in rural areas-a problem complicated by the diversity of very rare conditions identified by the programs. There are limitations in the local availability of specific expertise for many conditions, and considerable needs exist in the areas of training and education throughout the health care system. Furthermore, improvements in the newborn screening system and the expansion of the number of conditions for which screening is offered have costs, and these costs and the associated benefits seem to accrue independently of the public and private health care delivery systems, which complicates their integration. Many States provide the programs necessary to ensure that screening and diagnosis will occur, but they are limited in their ability to ensure long-term management, including the provision of the necessary long-term treatments and services.
The societal implications of expanding newborn screening also are significant. For example, screening for additional conditions that occur with greater frequency in different ethnic groups could lead to discriminatory practices against individuals as well as the ethnic groups associated with particular disorders. In addition, difficult decisions must be made about the nature of the benefits that might be realized from newborn screening. Historically, screening has focused on conditions for which the improvement in outcome for the infant has been substantial. However, newborn screening could identify many conditions for which the improved outcomes may be more incremental, including disorders that are associated with mental retardation, such as fragile X syndrome, for which early intervention programs can improve long-term cognitive outcomes, but not with the expectation of a normal outcome. 16 Finally, the nature of genetic disease is such that knowledge of its presence can be of value to other family members. Previously, this factor has not been considered by newborn screening programs.
Other considerations arise from private sector testing availability and competition. Often, private laboratories-either commercially-or university-based laboratories-offer an expanded number of conditions screened through the technologies they employ. They may provide contracted services to programs or offer additional screening for conditions not mandated in the program in the State in which the family resides. As a result, some States now mandate that all parents be informed of the availability of additional screening tests. This type of information often is delivered at the last minute and its use may not be supported by hospital staff and medical personnel. However, even though additional screening may be available when initiated by consumers, it is only through State public health that access to newborn screening for all babies can be assured at the present time.
The changing technological landscape
Three major technological challenges have occurred over the past few decades with regard to newborn screening. The first is the expansion of knowledge of the causes and treatment of genetic diseases. The second is the rapid expansion of diverse technologies that may be used in screening. The third is the proliferation of tiered testing strategies to enhance the positive predictive value of screening.
The sequencing of the human genome as a public/private partnership has allowed for a better understanding of the genetic bases of many diseases. This fundamental biological knowledge has led to the proliferation of new therapies stemming from intensive research efforts in both the private and public sectors. The pace of Food and Drug Administration (FDA) approval of innovative therapies has quickened. These and other factors are likely to continue to lead to an expanding panel of conditions for which newborn screening may be of benefit.
Simultaneously, there are new technological developments that allow more types of testing at reasonable cost that can be considered for application to universal newborn population screening. Examples include hearing screening, EKG screening for long QT syndrome, acylcarnitine screening, screening with molecular arrays, and screening with immunoaffinity columns. Particularly notable is the implementation of multiplex platforms that allow a single type of specimen preparation and simultaneous (or nearly simultaneous) screening for multiple different disorders. Going from one test for one disorder to one test for multiple disorders has the potential to reduce costs per condition tested and can lead to test expansion if these new technologies can be integrated safely and effectively into newborn screening programs. One potential concern associated with expansion of screening panels is the impact on follow-up testing and tracking. If the proportion of false positive cases requiring additional tests that are identified in screening laboratories rises excessively, this could undermine the acceptance of such testing by both the parental and medical communities, as well as potentially diminish the cost benefit of additional testing.
Multiplex testing technologies are emerging that can simultaneously identify multiple analytes from a single analytical process. Some multiplex testing requires that an analytical target first be identified and placed in the multiplex test (e.g., genomic arrays). Other multiplex testing provides the additional testing information without the need for specific target selection (e.g., DNA sequencing). For example, testing for hemoglobinopathies by isoelectric focusing (IEF) provides information not only about hemoglobin S, the primary target of screening, but also about more than 700 other possible hemoglobin variants, some of which may be clinically significant (e.g., Hb C and E). 17 In the case of MS/MS, the multiplex testing can occur in different modes, because it is possible to operate the instrument by either selecting specific targets or analyzing full profiles. 18 When used on selected targets, it is referred to as selective reaction monitoring (SRM), which is also called multiple reaction monitoring, a process that allows for the selective evaluation of specific ion species instead of a profile within a mass range. Increasingly, MS/MS is being used in newborn screening laboratories. 19 The technology is appealing for several reasons, including sensitivity for detecting ion species in low concentration, ability to quantify results relative to internal standards, high-throughput and precision, and the opportunity to simultaneously measure multiple ion species. 15,20 However, MS/MS is a complex testing platform requiring specific training and experience in order to optimize its use. 18 Although multiplex testing allows the addition of many more conditions to a screening panel, it presents a series of issues that influence the screening and health care system, ultimately affecting the screening services that might be available to the public. The availability of multiplex testing increases the number of conditions that can be considered for newborn screening that otherwise might not have been considered for screening using traditional criteria, such as incidence and treatability. Thus, our perception of screening performance characteristics is also modified. For example, multiplex technology might also reveal clinically significant conditions other than those that were the primary targets of screening but which are determined in the course of diagnostic confirmation of the screening test results. The screening laboratory may not have optimized the screening for the detection of these other conditions but they are typically part of the differential diagnosis of a primary target condition. Rather than evaluate single conditions for their inclusion in newborn screening, we must now consider how best to use the additional information revealed in the diagnostic laboratory about other related conditions.
Although information about conditions for which treatment options are scarce or not yet reported can lead to increased stresses on families and the health care system, early information can also lead to knowledge of the condition for the family, thus avoiding a potential diagnostic odyssey or inappropriate therapies. In addition, early information provides opportunity for better understanding of disease history and characteristics, and for earlier medical interventions that might be systematically studied to determine the risks and benefits. Multiplex testing and the identification of conditions falling outside of the uniform screening panel provides the opportunity for such conditions to be included in research protocols. Therefore, the criteria used to include a condition in a mandated newborn screening panel are not necessarily straightforward scientific or clinical criteria, but often involve complex ethical, legal, and social policy decisions.
Aside from new multiplex technology for screening, there has also been the introduction of tiered testing strategies to enhance the positive predictive value of screening and reduce the number of infants referred for additional testing. 21 For example, in the United States, the primary analyte used for congenital hypothyroidism (CH) newborn screening has been thyroxin (T4), because most newborns are screened before the optimal time for screening with thyrotropin (thyroid stimulating hormone, TSH). TSH primary screening offers improved specificity only after the period of neonatal surge and does not identify cases of central hypothyroidism. To decrease the recall rate, most screening programs have utilized a second-tier test with TSH following the identification of a certain number of increased-risk newborns through T4 initial testing. 22 In such cases, secondary hypothyroidism may also be detected on the basis of the test results, even though it is not the primary target of screening. Similarly, it has been shown that the rate of false positive results in CAH screening can be significantly reduced by profiling steroids by MS/MS as a second-tier test. 23 In addition, the testing of specific DNA mutations in newborn screening (e.g., CF screening algorithms utilize a second-tier DNA mutation panel following initial screening for immunoreactive trypsinogen (IRT) and hemoglobinopathy screening algorithms that include DNA testing) can minimize the recall rates. 24 The testing of DNA mutations also has led to a new category that includes unaffected or minimally affected cases (e.g., carriers, benign hyperphenylalaninemias, and detection of hemoglobin Barts). Confirmation of such results and explanation of their significance can be costly. These examples highlight the ongoing process that occurs in newborn screening laboratories whereby analytes are identified that are clearly abnormal in a particular condition but still need to be analytically and clinically validated in a population screening setting.
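To make the tiered logic concrete, the following sketch (with invented cutoff values and thresholds that are not taken from any program) illustrates the general shape of a two-tier CH algorithm of the kind described above: a primary T4 screen, with a second-tier TSH test applied only to low-T4 specimens.

```python
from typing import Optional

# Hypothetical sketch of a two-tier CH screening decision, modeled on the
# example above: a primary T4 screen, with second-tier TSH run only on the
# lowest T4 specimens. Cutoffs and units are illustrative, not program values.

def ch_screen(t4: float, tsh: Optional[float] = None) -> str:
    """Return a screening disposition for congenital hypothyroidism (CH)."""
    T4_CUTOFF = 6.0    # illustrative low-T4 threshold (ug/dL)
    TSH_CUTOFF = 25.0  # illustrative second-tier TSH threshold (mU/L)

    if t4 >= T4_CUTOFF:
        return "within normal limits"        # no second-tier test needed
    if tsh is None:
        return "run second-tier TSH"         # only low-T4 specimens are retested
    if tsh > TSH_CUTOFF:
        return "refer for diagnostic confirmation"
    # Low T4 with normal TSH can flag central (secondary) hypothyroidism,
    # which is not the primary target of screening.
    return "low T4, normal TSH - consider clinical follow-up"

if __name__ == "__main__":
    print(ch_screen(9.0))        # normal T4, no recall
    print(ch_screen(4.5))        # low T4 triggers the second-tier TSH test
    print(ch_screen(4.5, 60.0))  # low T4 plus elevated TSH is referred
```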
The evidence based landscape
Assessing the evidence on conditions as to their appropriateness for newborn screening is complex, and there are limitations in the availability and interpretation of data about many of the conditions. The incidence of rare genetic diseases is often variable among different populations and can be biased by the nature of the populations involved in research and the severity of the conditions in those coming to the attention of health care professionals. Many of the conditions are ultra-rare and they may have multiple genetic etiologies. For instance, the tetrahydrobiopterin (BH4) deficiencies are a heterogeneous group of disorders that affect phenylalanine homeostasis. 25 BH4 deficiencies are detected as a by-product of screening for phenylketonuria due to hyperphenylalaninemia. They include disorders that affect the regeneration or biosynthesis of BH4. The condition referred to as biopterin cofactor biosynthesis defect is caused by one of two genes, GTP cyclohydrolase I (GTPCH) and 6-pyruvoyl-tetrahydrobiopterin synthase (PTPS), and the condition referred to as biopterin cofactor regeneration defect is caused by one of two genes, pterin-4α-carbinolamine dehydratase (PCD) and dihydropteridine reductase (DHPR). Due to the biochemical similarities of the deficiencies resulting from blocks in these interrelated pathways, the clinical courses are similar in those with the typical severe forms of GTPCH, PTPS, and DHPR deficiencies. Approximately 57% of the rare BH4 abnormalities involve PTPS deficiency. However, due to the similarities in phenotype and treatment, the BH4 abnormalities are commonly combined with the two aforementioned groups and the treatments are similar. Hence, incidence as it relates to the genetic etiology is usually combined for the two subtypes. Treatment for the conditions is related to the degree of hyperphenylalaninemia and to the degree of impairment of biogenic amine production, which varies among those affected. Further, a treatment involving BH4 administration is now approved in Europe, following clinical trials that demonstrated that both GTPCH and PTPS are responsive to BH4. Due to the fact that GTPCH is very rare, yet quite similar to PTPS, the affected are aggregated when treatment is assessed. In any case, due to the rarity of these conditions, it is not until a very large general population has been identified through screening that penetrance and expressivity of disease are determined and a true incidence figure becomes available. In order to ensure that new therapies for these rare and severe genetic diseases will be available, regulatory agencies sometimes accept premarket evidence from smaller treatment groups while shifting the burden for the collection of additional information to FDA Phase IV postmarket surveillance, as was reported in FDA News for Fabrazyme® for the treatment of Fabry disease. (See http://www.fda.gov/bbs/topics/NEWS/2003/NEW00897.html) Having such treatments available earlier means that it becomes increasingly difficult to collect information on the natural history of the untreated condition. In fact, there has not been a natural history study of PKU conducted since the 1970s because the affected infants are routinely identified in screening, are treated, and respond well to the treatment.
Understanding the genetic basis of these conditions has led to this relatively rapid transition between ability to diagnose and the development of treatments based on the underlying biology and pathology of genetic diseases, particularly those that involve the replacement of defective enzymes. Hence, it becomes increasingly important to develop national systems for the collection of clinical information about those individuals identified in screening to further inform our understanding of the screened conditions and to further evaluate treatment modalities through an iterative process.
The assessment of the evidence on the performance characteristics (analytical and clinical sensitivity, specificity, and positive predictive values) of the tests, as used in newborn screening, is complex. Many of the screening tests use technologies that are the gold standard in the diagnostic setting, such as HPLC or IEF for hemoglobinopathies or MS/MS for the acylcarnitine disorders. Although one can demonstrate very strong analytical and clinical performance in a diagnostic setting, clinical performance in screening is a function of the cut-offs that are used by the screening laboratories to capture the most affected persons. States often assign varying cut-offs to analyte levels and often use different screening test algorithms, including second-tier tests or repeat tests to arrive at a determination of whether the specimen is within the normal range, with highly variable case definitions at screening. This lack of standardization makes it quite complex to assign a level of performance to the screening tests at a national level or to compare the performance of programs.
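The dependence of screening performance on cut-offs and on the rarity of the screened conditions can be illustrated with a simple, assumption-laden calculation; the sensitivity, specificity, and prevalence figures below are invented for illustration and are not drawn from program data.

```python
# Illustrative arithmetic only (not program data): how the positive predictive
# value of a screening cutoff depends on clinical sensitivity, specificity,
# and the birth prevalence of a rare condition.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

if __name__ == "__main__":
    # Assumed figures: a condition affecting 1 in 25,000 newborns and a test
    # with 99% clinical sensitivity at the chosen cutoff.
    print(f"PPV at 99.9%  specificity: {ppv(0.99, 0.999, 1 / 25_000):.1%}")   # ~3.8%
    print(f"PPV at 99.99% specificity: {ppv(0.99, 0.9999, 1 / 25_000):.1%}")  # ~28%
```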
Finally, the evidence base for newborn screening is complicated by the differing views of the interest groups involved. For purely scientific and medical issues, the scientific literature provides objective information about different aspects of conditions, such as incidence, treatment efficacy, and diagnostic confirmation. However, some criteria have significant subjective aspects that require the consideration of more than just scientific and expert opinion. Cost is an example of a subjective criterion because it is a contextual concern and can only be measured against the value of the outcome. Other criteria may be perceived differently by the professional community or by other nonscientific or nonmedical interest groups. For example, parents often consider difficult the impact of treatments that health care professionals consider to be simple (e.g., maintaining a child on a specified diet). Some criteria are perceived differently among varying groups of professionals. For example, primary health care professionals in urban areas often have greater access to subspecialists than do those in rural areas. It is often difficult to balance the scientific evidence against the values that different groups place on newborn screening to reduce mortality and morbidity of diseases.
The need for evaluation of newborn screening systems
The lack of equitable newborn screening services offered for infants, the changing dynamics of emerging technology, and the complexity of genetics require an assessment of the state of the art in newborn screening and a perspective on the future directions such programs could take. In addition, programs must include an assessment of the availability of needed resources, both public and private, when determining which conditions should be included. A national, organized approach to differentiating among these many competing needs would help create a more informed process for deciding what tests should be included in newborn screening programs.
Since the first State newborn screening programs began, periodic assessments have been made. As early as 1968, the World Health Organization (WHO) issued a report urging that screening tests be appropriate and straightforward. 26 In 1975, the National Academy of Sciences (NAS) redefined genetic screening and established the fundamental principles and rules of procedure for genetic testing (these did not vary significantly from the 1968 WHO recommendations). NAS also made recommendations regarding the aims of testing and screening, criteria for testing, and the quality of testing. 13 In 1997, the Task Force on Genetic Testing, created by the National Institutes of Health-Department of Energy Working Group on Ethical, Legal and Social Implications of Human Genome Research, focused on the quality of testing and recommended that screening tests demonstrate analytical and clinical validity and utility 27 (Holtzman and Watson, 1997 available at http://www.genome.gov/10001733). In 1999, at the request of HRSA, AAP convened a Newborn Screening Task Force that provided a comprehensive evaluation of the current state of newborn screening programs in the United States. 13 The Task Force recommendations covered the public health and clinical care system, the roles of professionals and the public, issues of disease surveillance and research, and the economics of newborn screening. The report recommended that "HRSA should engage in a national process involving government, professionals, and consumers to advance the recommendations of this Task Force and assist in the development and implementation of nationally recognized newborn screening system standards and policies." In addition, the AAP Task Force 13 thought that greater uniformity would benefit families, health care professionals, and the newborn screening programs. In 2000, the March of Dimes, an organization that has advocated on behalf of newborn screening programs, recommended that tests be rapid, high quality, and accurate and that cost should not be a major consideration. 28 Subsequently, the March of Dimes recommended that all States screen for nine conditions plus newborn hearing loss (see www.marchofdimes.com/professionals/580.asp).
B. Methods used for assessing conditions
As an initial step in the process, ACMG convened a newborn screening expert group that included participants with expertise in various areas of subspecialty medicine, primary care, health policy, law, ethics and public health, and consumers. The expert group also formed two expert work groups to provide more in-depth analysis in two specific areas: the uniform panel and its criteria, and the diagnosis and follow-up system. Members of the expert group and work groups are listed at the beginning of this report. Work group members were selected based on their abilities to bring a strong scientific and clinical-rather than organizational-perspective to the issues under consideration. Not only were efforts made to ensure cultural, ethnic, and geographic diversity, there also were efforts to involve health care professionals and other interested parties from a wide range of fields and backgrounds, including expert representation from public health laboratories and program administration; individuals who are involved in the delivery of specialty care; primary care and nonphysician health care professional groups that are involved with the patients and families; and parents who have been directly affected by newborn screening programs.
The project depended on a variety of types of input obtained through expert reviews of the scientific literature, presentations from international and national invitees at six meetings of the expert group, solicitations for public and professional comment, and detailed assessments provided by the work groups. Considerable information was acquired through the use of disease-specific surveys that were broadly distributed and augmented by direct requests for input from acknowledged experts for the conditions under consideration. Areas in which deficiencies were found in the information available in the scientific literature were identified as well.
The expert group followed a two-tiered approach to assessing conditions that allowed for the views of experts of various types, including consumers, to be considered while still deferring to the evidence in the scientific literature. In the first level of the assessment, the expert group sought broad input through a survey of individuals and organizations with an interest in newborn screening. The expert group utilized a data collection instrument, distributed through a survey and directly to experts, to seek unpublished data and views related to the criteria by which conditions were to be evaluated. The opinions of experts and others were quantified using the scoring system assigned to each criterion in the data collection instrument. Conditions were then placed preliminarily into categories reflecting their overall scores on the data collection forms. In the second level of the assessment, the scientific and medical evidence bases relating to the conditions under consideration were developed. Each condition was then reassessed to ensure that the evidence base confirmed that three critical evaluation categories were met in order to define a uniform panel of conditions to be targeted by newborn screening programs.
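A simplified sketch of the first-tier, score-based sorting is shown below. The thresholds correspond to the score bands reported in the Results (out of a possible 2,100), but the condition names and scores in the example are invented, and the sketch omits the second-tier reconsideration against the validated evidence base.

```python
# Simplified sketch of the first-tier, score-based sorting described above.
# The bands mirror those reported in the Results section; the actual category
# assignments were subsequently revisited against the evidence base, so this
# represents only the preliminary step.

def preliminary_category(score: int) -> str:
    if score >= 1200:
        return "candidate for the core panel"
    if score >= 1000:
        return "candidate secondary target"
    return "not appropriate for newborn screening at this time"

if __name__ == "__main__":
    # Scores below are invented for illustration; they are not survey results.
    for condition, score in [("condition A", 1650),
                             ("condition B", 1080),
                             ("condition C", 720)]:
        print(f"{condition} (score {score}): {preliminary_category(score)}")
```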
Establishing principles for the development of newborn screening guidelines
Many factors could influence a decision to include a given condition in a newborn screening program, including, for example, the severity of the condition, the availability of effective treatment, the age of onset, and the complexity or cost of the test. 29 In developing the criteria to evaluate conditions and make recommendations, the expert group relied on a set of basic principles developed at the onset of the project. The order of these principles is not intended to suggest a prioritization.
An overarching concept is utility-that is, an approach that delivers the greatest good to the greatest number of people, while recognizing the need for some flexibility and the use of alternative mechanisms by screening programs. Newborn screening policies and practices have national, regional, and local implications. Although national uniformity is a goal for newborn screening programs, there also may be a need, in limited and specific circumstances (such as meeting local and community public health needs), to screen for certain genetic conditions identified only in given populations.
Newborn screening involves many parties. In addition to the child and his or her family or guardian, these include public health officials, health care professionals, private insurers, government officials, researchers, policymakers, educators, and others. This report seeks to acknowledge the full range of participants involved.
1. Universal newborn screening is an essential public health responsibility that is critical to improve the health outcome of affected children.
To ensure that all United States newborns have access to screening and to promote a systems approach to population-based health care, it is critical that newborn screening remain a public health function.
2. Newborn screening policy development should be primarily driven by what is in the best interest of the affected newborn, with secondary consideration given to the interests of unaffected newborns, families, health professionals, and the public.
A key factor determining the inclusion of particular conditions in newborn screening programs is the potential for the affected newborn to realize a significant improvement in quality of life as a result of the screening. Although the expert group gives primary consideration to newborns that are being screened, it is clear that many others are also affected by newborn screening.
Newborns that do not screen positive can benefit from the elimination of certain diagnoses, and families benefit independent of the newborn that was screened. Furthermore, because these programs can decrease mortality and morbidity, public health professionals, the public, and the health care system may derive benefits from newborn screening programs, such as cost reductions for overall health care services. There may also be negative consequences for newborns and families that result from screening, including the potential negative impact of a false-positive screening result. Aside from the financial cost of a medical work-up to confirm that a suspected condition does not exist, there may be associated anxiety and stress for the family.
3. Newborn screening is more than testing. It is a coordinated and comprehensive system consisting of education, screening, follow-up, diagnosis, treatment and management, and program evaluation.
To realize the benefits from newborn screening, all components of the program must function well together. The six critical components of newborn screening programs-education, screening, follow-up, diagnosis, treatment and management, and evaluation-are important to the overall functioning of individual newborn screening programs and the system in which they operate. 30 There must be assurance of timely and accurate reporting and tracking of abnormal results. In order to know whether a program is functioning effectively and efficiently, it is important to know whether the expected health benefits are being realized.
4. The medical home and the public and private components of screening programs should be in close communication to ensure confirmation of test results and the appropriate follow-up and care of identified newborns.
The medical home concept has evolved as the central focus for the care of patients in their communities and should be the center of communication, primary care, and coordination of care for individuals. 31 There is increased recognition that enhanced communication between the clinical care system and public health programs is necessary to ensure optimal care and outcomes for the affected newborns. It is essential to establish close communication among State public health programs, the newborn's medical home, and the subspecialists commonly involved in the diagnosis and follow-up of affected newborns.
5. Recommendations about the appropriateness of conditions for newborn screening should be based on the evaluation of scientific evidence and expert opinion.
There are ever-increasing numbers of relatively rare conditions for which clinical knowledge is rapidly growing but for which the published literature may be sparse or outdated. Moreover, clinical expertise in treating many of these conditions may be limited. Given that all screening programs must rely on the same published knowledge base and a limited number of experts, a national process of scientific evaluation seems most practical. As new evidence emerges and opinions change, there should be a system in place for prompt review and release of updated recommendations. In 2003, the Secretary's Advisory Committee on Heritable Disorders and Genetic Diseases in Newborns and Children was established by the Department of Health and Human Services (DHHS). Its mandate was to advise and guide the Secretary of DHHS regarding the most appropriate application of universal newborn screening tests, technologies, policies, guidelines, and programs in order to effectively reduce morbidity and mortality in newborns and children who have or who are at risk for heritable disorders. The committee's purpose is to provide the Secretary with: "...advice and recommendations concerning the grants and projects and technical information needed to develop policies and priorities that will enhance the ability of State and local health agencies to provide for newborn and child screening and counseling and health care services for newborns and children having or at risk for heritable disorders." (Available at http://mchb.hrsa.gov/programs/genetics/committee/)
6. To be included as a primary target condition in a newborn screening program, a condition should meet the following minimum criteria:
• It can be identified at a period of time (24 to 48 hours after birth) at which it would not ordinarily be clinically detected.
• A test with appropriate sensitivity and specificity is available.
• There are demonstrated benefits of early detection, timely intervention, and efficacious treatment.
Determining the appropriateness of a condition for newborn screening is a complex process. Although the emergence of new technologies such as MS/MS has altered views of which conditions should be included in mandated screening programs, in this report the primary targets of screening are those that meet the three criteria previously specified. A secondary target is one that is identified while searching for the primary target (e.g., HbC results from IEF while looking for HbS) or a clinically significant condition that is likely to be detected when performing a comprehensive profile of a given group of biochemical markers (e.g., GA2 may be identified while determining MCAD status (C8 acylcarnitine is elevated in both)). (A brief illustrative sketch of this marker-to-differential relationship follows this list of principles.)
7. The primary targets of newborn screening should be conditions that meet the criteria listed in #6 above. The newborn screening program should report any other results of clinical significance.
Many technologies can be applied to screening for primary targeted conditions. Some allow for more than one condition to be identified in a single procedure, and some provide important information about the presence of conditions that may not meet all of the criteria needed to be considered a primary target condition. The advent of molecular arrays and MS/MS has significantly broadened this potential. It is not necessarily the responsibility of the screening program to monitor the long-term follow-up of patients identified with clinically significant conditions that are not the primary targets of newborn screening. However, the significant costs of the diagnostic odysseys that may ensue following the birth of a child whose condition may have been suspected based on newborn screening results, and the related costs to families and the system of introducing futile therapies might be avoided if clinically significant results from newborn screening programs are shared with the newborn's primary caretaker.
8. Centralized health information data collection is needed for longitudinal assessment of disease-specific screening programs.
Mechanisms and systems that allow for the collection of short- and long-term data on affected individuals while protecting their right to privacy will allow for assessment and improvement of program performance and individual health outcomes. The pooling of information about health outcomes, treatment protocols, case definitions, and diagnosis and confirmation algorithms will improve care for the infants identified in the programs. Furthermore, it is often difficult to ascertain the natural history of rare diseases because of their low frequency and because they often exhibit genetic variability in severity and expression. Hence, data collection and shared data evaluation can significantly inform program decision-making and medical science. General population data are also needed to better understand certain approaches to screening (e.g., genomics), where the variability in expression of mutations is not entirely clear until individuals without the classical presentation of a condition are tested.
9. Total quality management should be applied to newborn screening programs.
As with any programmatic effort, improvements result from careful and continuous monitoring of key steps in the process, the assessment of that information, and the introduction of changes that continuously improve program performance. Uniform and consistent monitoring of system quality indicators can provide information about the relative performance of screening programs.
10. Newborn screening specimens are valuable health resources. Every program should have policies in place to ensure confidential storage and appropriate use of specimens.
Specimens obtained for newborn screening have tremendous long-term value. They can be used for purposes of program quality management, to help inform deliberations about program expansion, for research on testing technology and treatment, and for epidemiologic studies. This is not to imply that every State should store all specimens forever but, rather, that there should be a sufficient number of States with diverse populations and long-term storage of residual specimens to provide this critical resource. Regardless, it is important to ensure the confidentiality of those persons whose specimens are stored. The use of specimens for nontherapeutic purposes must not alter the willingness of the public to participate in newborn screening programs and related activities.
11. Public awareness, coupled with professional training and family education are significant program responsibilities that must be part of the complete newborn screening system.
Because newborn screening can have a significant impact on health outcomes for affected newborns, it is essential that the public as well as health care and public health professionals be informed of the availability of the programs and of changes that are made. Education and awareness are essential to both the quality of the screening programs and participation by the public and by health care professionals. As such, information sharing and education are critical program responsibilities.
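As a brief illustration of the marker-to-differential relationship referenced under principle 6, the following sketch shows how a single screening result can implicate a primary target together with secondary-target conditions that must be reported if diagnosed. The mapping is illustrative only and is far from a complete analyte table.

```python
# Illustrative only: one elevated screening marker can implicate a primary
# target plus secondary-target conditions in its differential diagnosis, as in
# the C8 acylcarnitine example under principle 6 (elevated in both MCAD
# deficiency and GA2). This is not a complete marker-to-condition table.

DIFFERENTIAL_BY_MARKER = {
    "C8 acylcarnitine elevated": {
        "primary_target": "MCAD deficiency",
        "secondary_targets": ["glutaric acidemia type II (GA2)"],
    },
    "hemoglobin variant detected by IEF": {
        "primary_target": "sickle cell disease (Hb SS)",
        "secondary_targets": ["Hb C-related findings (seen while looking for Hb S)"],
    },
}

def conditions_to_report(marker: str) -> list:
    """Primary target plus any clinically significant secondary targets."""
    entry = DIFFERENTIAL_BY_MARKER[marker]
    # The program screens for the primary target but reports any clinically
    # significant secondary-target diagnosis established during confirmation.
    return [entry["primary_target"]] + entry["secondary_targets"]

if __name__ == "__main__":
    print(conditions_to_report("C8 acylcarnitine elevated"))
```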
Choosing the conditions
Eighty-four conditions were evaluated using these criteria (see Table 1). The conditions were chosen for several reasons. Any condition currently included in private, State, or national newborn screening programs was considered. Other conditions were included because they are known to be coincidentally revealed by some of the technologies used in newborn screening. Still others were identified by members of the public, the expert group, and work groups as worthy of consideration because they are important from a public health standpoint and/or there is a high level of public and/or scientific interest in screening for the condition. Hemoglobinopathy screening was mainly driven by the conditions associated with a hemoglobin S allele. Among these, Hb SS, Hb SC and Hb S-thalassemia were considered separately. Variant hemoglobinopathies included other conditions associated with an Hb S allele. Additional hemoglobinopathies revealed by screening, such as Hb E, are not the conditions to which screening currently is targeted. As discussed below, compromises were made in the lumping or splitting apart of conditions to be listed for assessment.
To a limited extent, the conditions listed as considered by the expert group represent a compromise among the various options. The intent was to distinguish many of the more common forms of the condition from others though there are still situations in which some very rare conditions are subsumed under a more general name for the condition.
The group considered it important to fully assess all conditions and to ensure that any apparent deficiencies were properly recognized so that disease-specific advocacy groups and the research community could focus on these deficiencies in developing their research agendas.
Developing evaluation criteria and their comparative values
Generally, a medical condition is assessed by itself to determine whether it should be included in a public health newborn screening program, 14,29 rather than being assessed along with a number of other conditions in a way that would allow for comparative ranking. Historically, this is primarily because individual conditions have been identified by individual testing platforms. Although conditions have usually been compared on the basis of relative incidence, there was little need for additional discriminating criteria given the general availability of traditional testing methodologies and treatments. Thus, comparative analyses of screened conditions or evaluations of the scientific evidence for or against inclusion of the conditions have not been formally conducted nationally, though this has often been done within individual programs.
Until recently, the capability of the currently available testing technology limited the conditions that could reasonably be included in a screening panel. Now, however, new information emerging from the clinical and scientific literature, combined with evolving technologies, has made it possible for increasing numbers of rare conditions to be detected simultaneously from single screening tests, making it reasonable to attempt more complex relative comparisons when deciding on conditions that should be added to a screening panel. Thus, it is no longer a simple matter to decide which condition should be added to a screening panel based on incidence, when a group of conditions may be simultaneously detected from a single analytical procedure and the group incidence (or impact to society) may be of higher relative importance than any of the single conditions within the group. In addition, even if multiple conditions could be detected, the question of whether they should be detected remains, when, for example, no efficacious treatment exists. Increasing the complexity of this decision-making process is the fact that all of the conditions detected may not have similar clinical outcomes for all children.
In recent years, professional groups in other countries have attempted to develop an organized, national approach to determining which conditions should be included in newborn screening panels. The Health Technology Assessment Program of the National Health Service of the United Kingdom has initiated a national program to systematically review the scientific and medical literature on inborn errors of metabolism, neonatal screening technology, and screening programs. Their goal is to analyze the costs and benefits of introducing MS/MS-based screening of amino acid disorders, fatty acid oxidation defects, and organic acid disorders, as well as other conditions screened on an individual test basis within the United Kingdom health system. 10 This extensive analysis assigned weights to various aspects of specific conditions and their associated tests and treatments, and assigned a qualitative value to the published information available. This effort has highlighted the difficulties inherent in attempts to balance costs and benefits against the value that the public and families place on such screening. The Human Genetics Society of Australasia developed criteria for placing conditions into one of four tiers. These tiers are determined by the nature of the benefit of the screening to the newborn, the benefit of the screening balanced against the cost, the suitability of the test, and the availability of appropriate and organized diagnostic and follow-up services (available at http://www.hgsa.com.au/Word/HGSApolicyStatementNewborn-Screening0204-18.03.04.doc).
More recently, Belgium has sought to assign values to the Wilson and Jungner criteria, 14 in order to weigh conditions against each other (see Box 1). Although novel, this system was considered to be less detailed than needed because many of the Wilson and Jungner criteria are subjective and therefore less amenable to the application of a metric and thus to quantification.
In the United States, several states, including Nebraska, Tennessee, and Washington, recently developed criteria and systems for assessing and comparing conditions. With the establishment of the 2003 federal Advisory Committee on Heritable Disorders and Genetic Diseases in Newborns and Children, the potential for development of national policies and recommendations should lead to a more uniform or equitable approach to newborn screening.
None of the existing systems allowed for adequate comparative analysis of conditions being considered for newborn screening. Further, the evolution of screening programs and the screening technologies used have added new variables to be considered when assessing conditions. The ACMG expert group chose to develop a modified system for the assessment of conditions for their appropriateness for newborn screening.
The Uniform Panel Work Group developed the data collection instrument to use during the project's first phase to quantitatively evaluate the features of conditions under consideration for inclusion in a potential uniform screening panel. Using a weighted scoring system, the conditions were evaluated according to criteria in three main categories:
1. The clinical characteristics of the condition;
2. The analytical characteristics of the test; and
3. Diagnosis, follow-up, treatment, and management of the condition.
Within each of these categories, 19 component criteria, including six characteristics of the analytical tests, were considered for assigning a comparative value, or score. Conditions already included in newborn screening programs were used to model the scoring system. Each of the criteria was weighted to reflect the presumed importance of that criterion to the overall assessment of conditions. Experts in the conditions under consideration for newborn screening were then asked to consider the criteria and the extent to which they cover the range of issues that arise among disparate types of conditions. They were also asked to consider whether appropriate weights were assigned to criteria, thereby acknowledging the criteria considered most relevant. The language describing the criteria and the scores associated with the range of responses to the criteria were adjusted by the expert group (see Table 2 for the criteria and the possible scores). Then, the weight accorded to each criterion was revised (i.e., so that the highest possible score within each category was the same). The criteria that were identified within each category were assigned a range of possible responses and related scores ranging from 0 to a maximum score that varied according to each criterion's overall importance. Conditions already included in newborn screening programs were then assessed for their performance in the system. Results were compared with those obtained by other systems developed for this purpose to determine whether the outcomes were similar.
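As an illustration of how such a weighted instrument can be applied, the sketch below scores a hypothetical condition against a handful of criteria. The criterion names, response options, and point values are illustrative assumptions only; the actual instrument comprised 19 criteria across three categories with a maximum total score of 2,100 (Table 2).

```python
# A minimal sketch of a weighted scoring instrument of the kind described above.
# Criterion names, response options, and point values here are illustrative only;
# the actual instrument used 19 criteria in three categories (see Table 2), with
# a highest possible total score of 2,100.

CRITERIA = {
    "incidence": {"1:5,000 or more": 100, "intermediate": 50, "1:100,000 or less": 0},
    "burden_if_untreated": {"profound": 100, "moderate": 50, "minimal": 0},
    "sensitive_specific_test_algorithm": {"yes": 100, "no": 0},
    "efficacy_of_treatment": {"prevents most harm": 200, "prevents some harm": 100, "unproven": 0},
}

def score_condition(responses):
    """Sum the points for each answered criterion; unanswered criteria score 0."""
    total = 0
    for criterion, options in CRITERIA.items():
        answer = responses.get(criterion)
        total += options.get(answer, 0)
    return total

# A hypothetical condition profile used purely for illustration.
example = {
    "incidence": "1:5,000 or more",
    "burden_if_untreated": "profound",
    "sensitive_specific_test_algorithm": "yes",
    "efficacy_of_treatment": "prevents most harm",
}
print(score_condition(example))  # 500 under these illustrative weights
```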
The scoring system recognizes the strengths and limitations found in each condition and summarizes them in a ranking system. Thus, a low score in a particular area does not necessarily mean that screening for that condition will never be conducted. In fact, low scores could be radically overruled by scientific evidence of new advances in testing and treatment and should be recognized as opportunities for targeted clinical or basic research endeavors and subsequent reconsideration of the condition for inclusion.
The criteria that were developed to differentiate the appropriateness of conditions for newborn screening include some that have a highly objective scientific basis and others that are more subjective. (Table 2 lists the combined criteria and the distribution of scores in the data collection instrument; the highest possible score is 2,100, of which 700 points fall under category I, the condition/disorder.) To the extent possible, the expert group relied on the scientific literature to provide the information on which its recommendations are based. Survey respondents were provided with the data collection instrument, questionnaires about the criteria themselves, the weight assigned to criteria, and the distribution of scores within a criterion. The respondents were asked to provide information on both objective and subjective criteria as a way of determining a respondent's familiarity with the condition(s).
THE THREE MAIN CATEGORIES AND THEIR CRITERIA
Clinical characteristics of the condition

Three criteria were developed for this category: the incidence of the condition; clinically identifiable signs and symptoms in the first 48 hours; and the burden of disease (natural history if not treated). (Note to Table 2: the two criteria marked with an asterisk were combined in the data collection instrument; a score of 100 was attributed to a treatment that is inexpensive and widely available, 50 if expensive or of limited availability, and 0 if expensive and of limited availability. The final version was prompted by feedback from several survey respondents who felt that not all options were actually considered, e.g., no treatment necessary.)
Incidence Of The Condition
The incidence of the various conditions varies widely. In terms of public health importance, the more common the condition, the higher the justification for screening. Accordingly, any condition with a documented (or estimated) incidence of 1:100,000 or less received a score of zero, while an incidence of 1:5,000 or more received a score of 100. When technology allows for the condition to be detected in the course of screening for other conditions, points were added back through the appropriate testing criteria. (See "Screening Test: Availability and Characteristics," below.)
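The incidence rule just described can be sketched as follows. The two anchor points are from the text, while the treatment of intermediate incidences (linear interpolation) is an assumption made only for illustration.

```python
# The two anchor points below come from the text: an incidence of 1:100,000 or
# rarer scores 0 and 1:5,000 or more common scores 100. Scoring of intermediate
# incidences is not specified, so the linear interpolation is only an assumption.

def incidence_score(frequency):
    """frequency is the incidence expressed as a proportion, e.g. 1/20000 for 1:20,000."""
    low, high = 1 / 100_000, 1 / 5_000
    if frequency <= low:
        return 0.0
    if frequency >= high:
        return 100.0
    return 100.0 * (frequency - low) / (high - low)  # assumed interpolation

print(round(incidence_score(1 / 20_000)))  # ~21 under the assumed interpolation
```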
Clinically Identifiable Signs And Symptoms In The First 48 Hours
In the context of public health, it is more important to screen for conditions that generally would not be detected in the newborn period based solely on routine clinical evaluation. However, it is important to recognize that there could be differences of opinion regarding whether a particular phenotype could be recognized by a typical health care provider and/or specialist, and the phenotypic variability expected among newborns with a particular condition must be considered. Nonetheless, if clinical symptoms are never detectable within 48 hours after birth, the condition received a score of 100. If clinical manifestations are always detectable, the condition received a score of zero.
Burden Of Disease (Natural History If Not Treated)
This is an important criterion for prioritizing the use of public health resources because it favors screening for conditions that constitute greater burdens to those affected (if the burden is profound, for example, a score of 100 was given). It is recognized that some conditions have a wide range of severity and that the test may not necessarily discriminate the milder forms from the more severe forms.
The screening test: availability and characteristics
Seven criteria are included in this category:
Availability Of A Sensitive And Specific Test Algorithm
This criterion is a central consideration when assigning a test or a condition to a uniform screening panel. The expert group chose to define this criterion as a test algorithm because some tests might require that additional analytes or second-tier tests be incorporated to achieve sufficient specificity (e.g., the use of T4 and TSH for the screening of CH or the use of a second-tier molecular test to improve the specificity of the IRT test for CF). This criterion was considered the first step in a decision tree without which further consideration for inclusion in newborn screening would not be possible. One hundred points were allotted to this feature of a condition. If a condition had no sensitive and specific test available that could be used in population screening, it was assigned a score of zero. However, it is acknowledged that there is no agreed-upon level of sensitivity and specificity and that this may vary with the burden of the condition and its importance for screening.
Ability To Test On Either Neonatal Bloodspots Or An Alternative Specimen Type Or By A Simple, In-Nursery Procedure
Value was assigned if a test can be done on a dried bloodspot, which is a highly stable specimen type already integrated into newborn screening and on which many tests can be performed. Equal consideration was given to a screening test that could be conducted using a simple procedure or method, as with hearing screening, that would be appropriate for population screening. One hundred points were allotted to this feature of a test.
Test Is Based On A Platform That Offers High-Throughput Capability
Value was placed on the ability of a technology to operate in a high-throughput format that allows testing of at least 200 specimens per full-time employee equivalent per day. The ability to test a large number of specimens in a short time offers cost savings to programs and increases efficiency, both important for public health screening. Fifty points were allotted to this criterion.
Cost Of Test Is Less Than $1 Per Infant Screened
Value was placed on low-cost technologies. Cost was based on the personnel, reagents and other costs associated with testing only. Differences in the scoring of conditions detected by MS/MS were likely due to higher costs when a multiplex technology is used to screen for only a few conditions rather than for a larger number of conditions. Fifty points were allotted to this feature of a test.
Multiple Analytes Relevant To One Condition Can Be Detected In The Same Run
The ability to detect multiple markers of a given condition within the same test increases the specificity of the method by allowing the calculation of ratios that have been shown to improve the differentiation between true positives and potential false positives. Fifty points were allotted to this feature of a test.
Other Conditions (Secondary Targets) Can Be Identified By The Same Analytes
Value was assigned to the ability of a test to provide information about multiple conditions using the same analyte(s). Although these secondary targets may not independently meet all of the other criteria for inclusion in the uniform screening panel, they add value to the primary target condition because their detection constitutes a clinically significant result leading to tangible benefits to the affected newborn, family, and society. Fifty points were allotted to this feature of a test.
Multiple Conditions Can Be Detected By The Same Test

Diagnosis, follow-up, treatment, and management of the condition

Nine criteria were developed to assess the combined aspects of diagnostic confirmation and treatment and management:
Availability Of Treatment
The availability of treatment is considered an important criterion for conditions in a core newborn screening panel. Fifty points were allotted to this feature of a condition, though additional value is assigned later depending on the effectiveness of the treatment.
Cost Of Treatment
The cost of treatment is an important consideration in newborn screening. Although this criterion does not necessarily differentiate cost from value, it should be factored into decision-making. Fifty points were allotted to this feature of the treatment.
Potential Efficacy Of Existing Treatment
More effective preventive or therapeutic interventions for a given condition increase the value of testing. For many conditions, treatments could result in near normal or normal outcomes. For others, the treatment may affect only a subset of the negative phenotypes possible or allow for only incremental improvements in optimal outcome. Moreover, treatment might not be equally effective in all individuals. This was considered a critical criterion and was assigned a value of 200 points.
Individual Benefits Of Early Intervention
This criterion is important because the benefit to the child being screened is the overriding consideration. This was considered an objective criterion based on the quality of available evidence showing that early intervention optimizes outcome. Two hundred points were allotted to this feature of a treatment.
Familial And Societal Benefits Of Early Identification
Early identification of an infant with a condition can bring benefits to families and/or society beyond the prospect of treatment. Because so many of the conditions detected through newborn screening are genetic, families can benefit from establishing that there may be a genetic risk to others in the family. Society could benefit by a reduction in medical diagnostic odysseys that are costly to the health care system. One hundred points were allotted to this feature of a condition.
Prevention Of Mortality Through Early Diagnosis And Treatment
Prevention of mortality was assigned a value independent of reduction of morbidity. One hundred points were allotted to this feature of a condition.
Availability Of Diagnostic Confirmation
Many conditions included in newborn screening programs are rare, and there may be poor access to diagnostic confirmation testing in the United States or even internationally. In such cases, it is more difficult to follow up on cases with positive results, and the results provided by research laboratories may be more difficult to interpret and communicate to child health professionals and families than those from diagnostic laboratories. Furthermore, in the United States it may be ethically or legally problematic to report results from tests conducted by laboratories that are not certified by the Clinical Laboratory Improvement Amendments (CLIA). On the other hand, some conditions can be confirmed locally because of the wide availability and relative simplicity of the confirmatory test or service. Thus, different values were assigned based on the ease of diagnostic confirmation. One hundred points were allotted to this feature of a condition.
Acute Management
As with diagnostic confirmation, the availability of health care professionals who have expertise in the acute management of the condition could be limited. Thus, higher values were assigned to conditions for which acute disease management is readily available. One hundred points were allotted to this feature of a condition.
Simplicity Of Therapy
Therapeutic interventions range from highly specialized (e.g., bone marrow/umbilical cord blood transplantation) to extremely simple (e.g., vitamin supplementation, avoidance of fasting). A higher value was assigned to simpler therapies since simplicity determines whether infants requiring follow-up can be managed locally or whether subspecialist care is required. The acute management of many metabolic disorders often requires the involvement of metabolic disease physicians who are not readily available in many geographic locations. On the other hand, for example, aspects of CH may be managed by child health professionals, and when specialists are required, they are more widely available. Some conditions also might allow for greater levels of family involvement in treatment. One hundred points were allotted to this feature of a condition.
Collecting the data

One goal of the data collection process was to include a broadly representative group of participants. A second goal was to use a method that would allow quantification of expert opinion. In addition to data gleaned from the scientific literature, input and opinion were sought from a wide array of child health professionals, subspecialty care experts, and individuals interested in newborn screening. Respondents were not anonymous, and were asked to select one or more of the following categories to describe their personal and/or professional role(s) in relation to newborn screening:
1. Provider of screening services (TESTING)
2. Provider of screening services (FOLLOW-UP)
3. Provider of screening services (ADMINISTRATION)
4. Provider of screening services (POLICY)
5. Provider of diagnostic services
6. Child health professional
7. Specialty care provider
8. Consumer
As discussed previously, many criteria were perceived differently by these diverse constituencies. Distinguishing among respondents allowed the expert group to independently assess the views of these different groups.
For each condition, steps were taken to ensure that those asked to provide information and those who provided information were broadly representative of the interest groups involved. A large number of acknowledged experts for each condition and specific consumer and professional organizations were asked to provide input through multiple professional groups (e.g., the Society for Inherited Metabolic Disease (SIMD), ACMG). Individuals from public health and newborn screening programs were offered the opportunity to participate through listservs of their representative organizations. This included listservs managed by HRSA/MCHB, NNSGRC, the Association of Public Health Laboratories, and others. To ensure that the perspectives of consumers were available for consideration, consumers were reached through listservs of NNSGRC, Genetic Alliance, and others. To ensure that there were several scientific and clinical experts for each condition, specific individuals were identified from recent publications, disease support groups, and professional groups. In addition, the data collection instrument used was made widely available through the ACMG web site (www.acmg.net). Due to the large and overlapping numbers of individuals participating in these listservs, it is not possible to state the number of potential participants who were contacted. The geographic origin and the role or interest in newborn screening of survey participants were monitored to ensure that respondents were broadly representative.
Respondents were given the opportunity to score each criterion or mark it as unknown ("U"), an important option because not all of those asked to participate were sufficiently familiar with the many aspects of all of the diseases for which responses were sought. However, the option also had implications for how the data were aggregated for analysis. The data were analyzed as means and medians for each criterion, as the average of total scores for each responder, and as sums of means and medians of all respondents to a particular criterion. After considering these different possibilities, it was decided that the results for any given condition would be expressed as the sum of the means of the scores for each criterion. (The difficulty with using the sums of the means arises from the different numbers of scorers and the varying scores in the comparisons, which obscures the distribution and confidence intervals of the final scores. The alternative approach using the sum of the medians was not used as the primary statistic because it tends to minimize dissent from the consensus. In later figures, conditions are ordered by the sum of the means, with medians also shown. However, as previously discussed, for purely objective criteria, the data as evidenced by the scientific literature were applied and included in the sums rather than the survey information.)
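As an illustration of the chosen aggregation, the following sketch computes the sum of per-criterion means while excluding "U" responses from each mean. The response matrix is invented for the example and is not actual survey data.

```python
# Sketch of the chosen aggregation rule: the final score for a condition is the
# sum over criteria of the mean of the valid responses, with "U" (unknown)
# answers excluded from the mean rather than treated as zero. The response
# matrix below is illustrative, not actual survey data.

from statistics import mean

responses = {
    "incidence": [100, 100, 50, None],          # None stands in for a "U" answer
    "efficacy_of_treatment": [200, 100, None, 100],
    "simplicity_of_therapy": [100, 100, 100, 100],
}

def sum_of_means(responses):
    total = 0.0
    for criterion, scores in responses.items():
        valid = [s for s in scores if s is not None]
        if valid:  # a criterion with no valid responses contributes nothing
            total += mean(valid)
    return total

print(round(sum_of_means(responses), 1))  # 316.7 for this illustrative matrix
```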
Developing and integrating the evidence base
In the second tier of the assessment, the evidence base for the conditions was established and an algorithm through which conditions were reassessed was developed. The quantification of expert opinion (the scoring system) then becomes part of a broader assessment of the scientific literature related to the conditions, tests, and treatments in the second level of the assessments. The evidence from the scientific literature, with supporting references for each criterion of each condition, was reviewed as shown in the fact sheets (Appendix 1). Evidence was derived from a systematic review of the scientific literature. Epidemiology studies, when available, were assessed for study design, the nature of the subjects, the outcomes that were measured, and the effectiveness of the treatment.
Statistical analysis of survey results allowed for a score to be assigned to each condition which determined its ranking and initial placement in one of three categories (high scoring, moderately scoring, and low scoring or lacking a newborn screening test). After the assignment of conditions to one of the three categories, the evidence base on the condition, as validated by acknowledged experts in the conditions in question, was used to determine if the conditions met critical criteria categories. Experts in specific conditions were identified by the Conditions and Criteria Work Group and included many individuals who had participated in the data collection process.
Several critical criteria were identified that reflected the priorities and principles of the expert group. These include:
1. The existence of a sensitive and specific test that has been validated in a large general population;
2. The availability of an efficacious treatment;
3. A determination that the natural history was sufficiently well understood to justify placement in a core panel of conditions;
4. Determination of whether a clinically significant condition not in the core panel would be identified because it is part of the differential diagnosis of a core panel condition; and
5. Whether a clinically significant condition would be revealed by a multiplex technology and whether it was part of the differential diagnosis of a core panel condition.
6. Further, it was recognized that some tests allow for the definitive identification of unaffected carriers, and that such results should be communicated to a responsible individual in the health care system.
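As a schematic illustration of how these critical criteria could drive initial placement, the sketch below applies them in a fixed order. The field names and the decision order are assumptions made for illustration only; the expert group's actual decisions also weighed survey scores and the quality of the published evidence.

```python
# Illustrative sketch only: field names and the decision order are assumptions,
# and the expert group's actual placements also weighed survey scores and the
# quality of the published evidence (see the redistribution discussed later).

from dataclasses import dataclass

@dataclass
class Condition:
    name: str
    validated_population_test: bool   # critical criterion 1
    efficacious_treatment: bool       # critical criterion 2
    natural_history_understood: bool  # critical criterion 3
    in_differential_of_core: bool     # critical criteria 4-5
    detected_by_multiplex: bool

def categorize(c):
    if c.validated_population_test and c.efficacious_treatment and c.natural_history_understood:
        return "core panel"
    if c.in_differential_of_core or c.detected_by_multiplex:
        return "secondary target"
    return "not recommended at this time"

print(categorize(Condition("hypothetical condition", True, False, False, True, True)))
# -> secondary target
```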
The fact sheets were reviewed by at least two experts for each condition to validate the information and assign a level of quality to the evidence. Levels of evidence correspond to those defined by the AAP Steering Committee on Quality Improvement and Management 32 as follows: Level 1: Evidence is derived from well-designed randomized controlled trials or diagnostic studies on relevant populations.
Level 2: Evidence is derived from randomized controlled trials or diagnostic studies with minor limitations; overwhelming, consistent evidence from observational studies.
Level 3: Evidence is derived from observational studies (case control and cohort design).
Level 4: Evidence is derived from expert opinion, case reports, and reasoning from first principles.
The evidence was aggregated into four groups (the condition, the test, the diagnosis and the treatment) and a level of evidence quality was assigned to each group by the experts for each of the conditions. Each fact sheet includes the names of the experts who validated the data and the level of quality of the studies from which the evidence is derived.
C. Results
Responses were received from 289 individuals, many of whom represented more than a single interest group, for a total of 582 represented areas of interest. The majority of the survey information was provided by experts in the clinical and scientific aspects of the individual conditions. The regional distribution of responses and areas of expertise of the respondents from the United States are shown in Table 3. The table also correlates the number of responses to the birth rate in each region (based on Census 2001 data). In the United States, no responses were received from the following States: Idaho, Kansas, Montana, North Dakota, South Dakota, West Virginia, and Wyoming. International responses were from Australia (4), Brazil (1), Canada (5), Chile (1), Croatia (1), Denmark (1), Finland (1), France (1), Germany (1), Italy (3), The Netherlands (1), Switzerland (1), and the United Kingdom. Most were from recognized experts in the field who were actively solicited by members of the working group for their input about specific conditions. At least three experts provided information on each condition.
Overall, a total of 3949 condition profiles were obtained. On average, seven conditions were scored per responder. Of the 84 conditions, 30 (36%) received more than 50 responses, and 5 (6%) received fewer than 20. The average number of profiles per condition was 47 ± 20; the range was 14-120. The corrected total for the 84 conditions was 3796; the number of responses for each condition is listed in Table 4. This table also shows the proportion of respondents who were unable to respond to one or more of the individual criteria and is reflected as "missing data" for each condition. This option was most frequently used in scoring criteria related to attributes of the screening test itself, with 11% of respondents not including all of the requested information.
Additional input, both domestic and international, was provided by individuals who were asked to discuss many of the broad issues under consideration by the work groups. The committee is particularly grateful for the assistance of Dr. Rodney Pollitt (Sheffield, UK), who provided insights into the system used in the United Kingdom; Dr. Adelbert Roscher (Munich, Germany), who provided insight into the recent newborn screening and MS/MS decision-making process undertaken in Germany; and Dr. Edwin Naylor (Pittsburgh, PA), who provided insight into the decision-making process of NeoGen Screening (now Pediatrix). In addition, several opportunities were offered for public comment over the course of these deliberations. Based on responses to an independent survey that inquired as to the appropriateness of the criteria and the weights assigned, the expert group adjusted the scores assigned to some of the criteria. In particular, ambiguous language was clarified and a greater weight was assigned to the benefit of treatment to the infant. Scores for the parameters of the screening tests were increased to recognize the inherent value of multiplex technologies to public health.

Figures 1 and 2 display the raw data for MCAD and PKU, which were selected as representative conditions for demonstrating how the data collected for individual criteria are charted and aggregated to reach the final scores. Each respondent is listed over columns and the score offered for each criterion is shown. The sums of the mean and median scores are shown. Figures 3a through 3e display side-by-side summary data for each of the criteria used to evaluate the conditions, with MCAD on the left and PKU on the right. In the top panel, the total score for each respondent is shown. The remaining panels show the scores for 18 of the 19 individual criteria (the availability of test criterion is not included) used to evaluate the conditions. The complete data in tabular form are displayed in Table 4, in which the scores are reflected as sums of the means for all conditions. The number of respondents for each condition is shown. The sums of the mean scores for all of the conditions evaluated, regardless of whether a screening test is available, are shown in Figures 4 and 5. Figure 6 separates those conditions that have an acceptable, validated, population-based screening test from those lacking a test. The left side of the graph shows the conditions that have an adequate screening test currently available, while those shown on the right side lack a screening test. Among the conditions with a test, MCAD deficiency, CH, and PKU score the highest in this analysis, followed by BIOT, sickle cell anemia, CAH, isovaleric acidemia, VLCAD deficiency, MSUD, GALT, hemoglobin S/β-thalassemia disease, hemoglobin SC disease, LCHAD deficiency, glutaric acidemia type 1, and HMG. Conditions without a test are included because they reflect the need to focus on particular aspects of the disease in order for it to be considered for newborn screening.
D. Discussion
A number of considerations influenced the final decisions regarding which conditions should be included in a core screening panel. As previously discussed, using a two-step process, the information gathered with the data collection instrument and the review of the scientific literature provided information used to assign a score for each condition. This approach also allowed for those conditions with screening tests that have been validated in general populations to be distinguished from those conditions for which a population-based validated test was not available. The scores were first used to make some general decisions based on the highest scoring conditions. In particular, the inclusion of several conditions that are screened by either IEF or HPLC (hemoglobinopathies) and MS/MS (acylcarnitines and fatty acid oxidation disorders) led the expert group to make decisions regarding multiplex technologies and how the results should be handled. Once the conditions were separated into groups defined by either the individual condition or by the multiplex test that detects many conditions, the scoring system could be overlaid to see how conditions compare to one another within these groupings, or in total.
Defining and counting the conditions
Careful consideration of several factors is required to answer the seemingly basic question of how many conditions should be screened for in a newborn screening program and how they should be defined. These factors include: 1) the clinical, biochemical, and molecular complexity of the conditions under consideration; 2) the progress constantly made in our understanding of their natural history and etiology; 3) the impact of implementing multiplex platforms that allow the simultaneous detection of numerous biochemical markers; and 4) the gaps that appear to exist in the level of clinical knowledge among stakeholders involved with, or advocating for, the decision to pursue ever greater numbers of conditions. Indeed, counting has become increasingly problematic to the point that a competition seems to be taking place in which the apparent superiority of a newborn screening program or private laboratory is staked on the sole basis of quantity, with disproportionate consideration given to quality. This concept has caught the attention of the media, which constantly tell the public-at-large that the more conditions that are screened in a particular State, the better that program must be. As a direct consequence of this behavior, the number of conditions is perceived by the public and policy-makers as a scorecard, often leading to either inflated or inaccurate figures. For example, 22 States offering screening by MS/MS have included LCHAD deficiency in their panels, yet only half of the same programs claim to be screening for trifunctional protein deficiency, perhaps being unaware that the biochemical phenotype in bloodspots is essentially identical between the two conditions. Thus, the context in which screening is "quantitated" must be standardized. This situation is not a new development brought on by modern technologies. Since the beginning of PKU screening, this has been a complex issue. The screening method for PKU led to follow-up testing to separate the patients with tyrosinemia and/or biopterin defects. Thus, many programs included tyrosine in their screened conditions, and considered biopterin defects as merely an anomaly of PKU screening that should be combined with PKU and given an asterisk when counting the number of PKU cases detected. This is hardly satisfactory when questions are asked about the incidence of the secondary targets or the outcomes of those subtypes.

(Figure 5 shows the scores for all conditions that were evaluated, separated into groups based on the testing platforms: MS/MS for metabolic diseases; IEF or HPLC for hemoglobinopathies; and all others.)
When screening for sickle cell anemia became an important addition to screening panels, the singular condition of SS disease was usually counted even though the testing methodologies used could detect many different clinically significant hemoglobinopathies. Screening for sickle cell anemia progressed to screening for sickle cell diseases (SC and S/β-thal) but this screening was still counted as screening for a single disorder with many other conditions detected secondarily. Further, although these are the three primary targets of hemoglobinopathy screening, the methodologies of IEF or HPLC employed in hemoglobinopathy screening can reveal over 700 variant hemoglobins, of which about 25 are considered to be of clinical significance and are reported out by some screening laboratories. Some States may only report SS disease, some SS, SC, and S/β-thal, and others a variable number of the other clinically significant variants. Hence, just for this one group of conditions, one can argue that a program that reports out 28 of these variants actually screens for 28 conditions. For a test involving a functional endpoint such as severe hearing loss, there are a large number of "conditions" for which the test screens. 33 There are over 77 loci for nonsyndromal hearing loss conditions, 31 loci for syndromal hearing loss conditions, as well as some of the "environmental" causes of hearing loss that would be amenable to DNA-based testing, such as presence of the cytomegalovirus or other infectious agent genomes. Hence, what is considered a single condition screen, congenital hearing loss, may be considered a screen for at least 108 individual conditions at the etiologic level.
If one takes the set of conditions included in both the proposed core panel and secondary target groups, each entity reflects the significance given to a spectrum of possible criteria. In the proceedings of the working group charged with this task, choices were made to strike the best compromise between established practices, the expert opinions, and scientific evidence. In reality, counting could have been very different if this had been approached in a pragmatic way using any of the following criteria:
1. Phenotype of the condition;
2. Established groups of conditions (e.g., organic acidurias, hyperphenylalaninemias);
3. Primary marker (e.g., tyrosine, C8 acylcarnitines);
4. Test (e.g., MS/MS, IEF);
5. Response to treatment (e.g., responsiveness to cofactors, vitamins); and
6. Number of loci linked to a common phenotype (e.g., hearing loss genes as discussed above).
Table 5 shows how different "counting" could be if the criteria above were applied independently. (Table 5, in part: Response to treatment (5), 32, 14, 46; Number of loci (6), 142, 28, 170; Expert group (7), 29, 25, 54. Notes: (1) all clinical subsets, e.g., severe and mild, considered as a single entity; (4) either singleton test or multiplex platform count as one.) For instance, hearing loss is a single phenotype of one group of conditions for which the primary marker is hearing loss that is detected by one testing platform, audiometry. The single response to treatment for the group is improved hearing or communication. However, as previously discussed, there are at least 108 genes for conditions associated with hearing loss. Similarly, while C8 is a primary marker of MCAD, it is also a primary marker for GA-II, M/SCHAD, and MCKAT. It is detected in a single multiplex platform, MS/MS. Treatments are similar but, as indicated above, multiple conditions are associated with the marker. It is evident that quantitation and categorization of newborn screening disorders remains imperfect and inconsistent and that, until standardized, there will continue to be confusion about the extent of screening in individual programs and the nation. The expert panel recognizes these disparities and their rationale, and recommends the implementation of a standardized and common nomenclature for an objective and scientifically sound description of the screening test panel being offered and the reporting of results. Such a classification system would require some consensus among the newborn screening and subspecialty communities, but should be possible. Standardization of panels and consistent screening methods and case definitions will allow more pooling of available data on the utility of screening.
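To make the counting ambiguity concrete, the following sketch tallies a toy marker-to-condition mapping drawn from the examples above. The mapping is deliberately incomplete and the counts are for illustration only.

```python
# Illustrative only: a toy marker-to-condition mapping based on the examples in
# the text (C8 acylcarnitine and the hyperphenylalaninemias). Counting markers
# and counting etiologic conditions give different totals for the same screen.

MARKER_TO_CONDITIONS = {
    "C8 acylcarnitine (MS/MS)": ["MCAD", "GA-II", "M/SCHAD", "MCKAT"],
    "phenylalanine (MS/MS)": ["PKU", "biopterin defects"],
}

def count_markers():
    return len(MARKER_TO_CONDITIONS)

def count_conditions():
    return sum(len(conditions) for conditions in MARKER_TO_CONDITIONS.values())

print(count_markers(), "markers vs", count_conditions(), "conditions")  # 2 vs 6
```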
Integrating the evidence base with the survey results
Information obtained from the scientific literature and the surveys was used to create the fact sheets that were developed for each condition (see Appendix 1). The fact sheets are structured to provide summary information describing:
1. The type of condition;
2. The test;
3. The extent to which United States newborns are being screened for the condition;
4. Whether there is apparent ethnic variability in incidence;
5. The number of individuals providing information on the condition;
6. The proportion of scores from survey respondents considered valid; and
7. Citations in PubMed as of February 2004.
Information obtained from the surveys is shown on the left side of the first page. The percent of maximum score of the survey respondents is shown next to each criterion. The data from the two criteria for which there was the lowest correlation among respondents is also shown on the left side of page 1. The evidence from the literature is shown on the right side of the first page. Additional summary information including the scores (maximum of 2,100) is shown along with an assessment of whether the data from the surveys are consistent with the evidence from literature. Significant discrepancies are discussed in the comment box. Although the language of the criterion is often not identical to that expressed in the literature, there was significant correlation between the survey results and the evidence from the literature. The fact sheets for all other conditions evaluated are provided in Appendix 1.
Influence of testing technology
New technology has been one of the driving forces in the evolution of newborn screening programs in the United States and is a critical factor in the evaluation of a condition to determine how appropriate for screening it is. Typically, determining the appropriateness of newborn screening was based on the conditions themselves and their associated testing methods. However, new technologies often raise questions that have not yet been addressed. Multiplex methods such as genomic arrays require that the sequence tested deliberately be placed in the array. This is distinct from technologies that look globally at a class of molecules, for example, IEF or HPLC that reveal all hemoglobin variants, or an MS/MS run to detect acylcarnitines that reveal compounds in the C2 through C18 range. Complicating the use of MS/MS is the fact that many of the compounds identified are associated with more than one condition and these conditions may not have similar clinical and laboratory features. Thus, the criteria used to judge whether to include a condition in a newborn screening panel will vary among the conditions. It becomes difficult to compare a condition that has a unique test/technology that tests only for the condition of interest to a technology that can detect many conditions, some of which are related through their differential diagnosis, while others involve independent compounds in the MS/MS profile. The use of MS/MS for acylcarnitines, for example, differs from its use for detection of amino acid disorders in which there is little overlap between the analytes associated with the conditions. Table 6 shows the relationships between analytes for high scoring conditions and those of lower scoring conditions. Independent decisions were made about conditions screened using MS/MS and HPLC or IEF for hemoglobinopathies. One reason is that among the acylcarnitine disorders there is little differentiation between the highest and lowest scoring conditions. For many conditions, the difference is accounted for by differing incidence figures-a criterion that loses some of its importance when the test for the more common conditions also can detect less common conditions.
It is important to note that two approaches are currently being used in screening with MS/MS. A majority of screening laboratories now run full profiles that allow them to visualize the full range of acylcarnitines or amino acid compounds. However, a minority operate their systems in a selective reaction monitoring (SRM) mode, which allows them to obtain results only on the subset of compounds that are associated with those conditions that are being targeted in the screening programs. Some programs use a combination of SRM and profiling. With either approach, the screening test is driven more by analytes than by the conditions with which they are associated. An assessment of the advantages and disadvantages of the test results for each approach led to an expert group preference for the full-profile approach for four reasons.
First, in reviewing those acylcarnitine-associated conditions that were high scoring in this analysis (MCAD, IVA, VLCAD, LCHAD, GA1, HMG and TFP) (see Table 4), it was apparent that several acylcarnitines must be analyzed in order to maximize assay specificity and sensitivity. A majority of the remaining conditions detected by MS/MS were also included in the differential diagnoses of the higher scoring conditions. Thus, screening for a core set of conditions ultimately results in screening for a much wider range of conditions. Second, the use of MS/MS profiles allows for the maximal use of the technology for the identification of clinically significant conditions. Third, the use of MS/MS profiles offers better quality control of preanalytic and analytic aspects of testing. Allowing all information to be assessed can reveal the presence of spurious signals and/or contaminants in the specimens or reagents and devices used in the test system.
Fourth, the use of MS/MS profiles enhances clinical interpretation of results by revealing anomalies in associated compounds or in compounds that provide internal standards against which excesses or deficiencies can be better interpreted. Hence, the expert group recommends that a full MS/MS profile should be analyzed, and any clinically significant results should be reported by the laboratory to the health care provider and family of the infant. Some of the conditions detectable by acylcarnitine profiling may turn out to be benign in a number of cases (i.e., SCAD, 2MBCAD, and 3MCC). The secondary conditions detectable by a multiplex technology such as MS/MS or HPLC and included in a differential diagnosis for the primary target conditions can be screened at minimal additional cost and are, in fact, determined in the diagnostic setting during follow-up. There could be additional cost associated with diagnosis and follow-up, although many of these cases would otherwise be detected clinically after birth, in which case higher costs would inevitably be incurred by the health care system and the family, though not as a result of the newborn screening program.
The expert group also devoted considerable discussion to the question of how best to present the results of analyses of conditions. As previously discussed, the lists of conditions used are inherently longer than the lists many States use to describe the newborn screening tests they offer because the expert group chose to break down the heterogeneity of conditions by listing them by etiologic type or by the analytes associated with the conditions. It would be inappropriate to consider this list of conditions as a scorecard for the number of conditions screened. It is only by considering each condition in each of its etiologic forms that a direct analysis can be done.
In the following section, diseases are assigned to categories as a means of conducting the analyses (see Tables 7 and 8). The main category, referred to as the core panel, includes those conditions considered appropriate for newborn screening. The 29 conditions in this core panel are similar in that they all have:
1. Specific and sensitive screening tests;
2. A sufficiently well understood natural history; and
3. Available and efficacious treatments.
The expert group concluded that conditions with evidence-validated scores equal to or above 1,200 meet these key criteria and should be considered appropriate for newborn screening.
Analysis of the distribution of scores among the conditions in Figure 7 shows that around a score of 1,250, one moves into a group of conditions that are part of the differential diagnosis of higher scoring conditions, but for which natural history is less well understood or efficacious treatment is lacking. These conditions occupy the middle third of the curve. CF (1,200) is the only condition currently screened that scores in this range but is not part of the differential diagnosis of a higher scoring condition. (Its lower score may reflect the ongoing debate about the benefits of screening for CF, despite the evidence for screening and the lack of evidence of significant harms from screening.) 34,35 Otherwise, all conditions in this middle third scoring between tyrosinemia type I (score = 1,257; 63rd centile) and galactose epimerase deficiency (score = 1,066; 35th centile) are part of the differential diagnosis of another higher scoring condition. The expert group recognizes that it is difficult to draw a line in a continuum that would reasonably discriminate between groups of conditions. Programs should appreciate that scoring cut-offs may have wide and varying confidence limits due to differences in numbers of responders. The final scores represent a rough relative approximation of ranking of disorders and serve only as an initial step to guide decision-making; analysis of the evidence base for the score needs to be included in the decision-making process.
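The following sketch illustrates this score-based first pass in simplified form. The 1,200 cut-off and the example scores (TYR I 1,257; CF 1,200; GALE 1,066) are those quoted above; treating 1,066 as the lower edge of the middle group is an assumption, and the actual placement also depended on the evidence review and redistribution described in the surrounding text.

```python
# Simplified sketch of the score-based first pass. The 1,200 cut-off and the
# example scores (TYR I 1,257; CF 1,200; GALE 1,066) come from the text; using
# 1,066 as the lower edge of the middle group is an assumption, and the final
# placement also depended on the evidence review and redistribution described
# in the surrounding text.

def initial_placement(score):
    if score >= 1200:
        return "candidate for core panel (pending evidence review)"
    if score >= 1066:
        return "middle group (mostly differential diagnoses of core conditions)"
    return "lower scoring or lacking a validated screening test"

for name, score in [("TYR I", 1257), ("CF", 1200), ("GALE", 1066)]:
    print(name, "->", initial_placement(score))
```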
Conditions then were redistributed between the core panel and the secondary target category on the basis of the evidence related to the availability of an efficacious treatment and a well understood natural history. Other conditions were moved from the "not appropriate for newborn screening category" to secondary targets if they were revealed by the multiplex technology used to identify core panel conditions. SCAD, IBG, ARG and DE RED were moved into the secondary target category on this basis. Among conditions initially placed in the core panel category on the basis of the survey score, CPT-II was shifted to the secondary target category on the basis of the lack of a proven efficacious treatment. Several conditions were moved to the secondary target category on the basis of scientific evidence indicating that the natural history was not sufficiently well understood. These include TYR-II, GA-2, and M/SCHAD. GALK deficiency was moved to the secondary target category on the basis of the relatively limited burden of disease and the fact that a second test is usually required to screen for the condition. G6PD was moved to the category of conditions not recommended for newborn screening because of a limited knowledge of the natural history of the mutations in the G6PD gene found in the United States. There is also limited knowledge of the implications of these mutations with regard to development of severe hemolytic disease in the United States population. Additionally, because G6PD is not identified in the course of screening for other core conditions, it was not placed in the secondary target category. Finally, a subset of conditions was identified for which carrier status could be established on the basis of the screening test result and for which reporting is considered appropriate. These include MCAD, VLCAD, Hb-pathies, 3MCC, CUD, and CF.
The next group of conditions includes those that are clinically significant and are part of the differential diagnosis of a condition listed in the core panel or that are revealed through a multiplex technology. Note that secondary hemoglobinopathies are revealed in the screening laboratory while most others are revealed in the diagnostic setting during follow-up. Table 8 lists the conditions in this secondary category. Table 5 shows the relationships among many of the core conditions and the conditions included in their differential diagnoses (or secondary targets). In particular, some of the metabolic conditions in this group are characterized by having a sensitive and specific test, but a deficiency in the availability of an efficacious treatment or limited knowledge of the natural history of the condition, although there may be sufficient knowledge to justify the reporting of test results to the family and health care provider of the infant.
The recommendation to report all clinically significant results is an approach similar to that taken for hemoglobinopathy screening, in which a core set of conditions is screened. The technologies of choice in many laboratories for hemoglobinopathy screening are IEF and HPLC, which can detect the full range of more than 700 hemoglobin variants, including those in the core panel, for which clinically significant variants are reported. 36 By handling hemoglobinopathies in a way similar to the acylcarnitine and amino acid disorders screened for by MS/MS, the expert group was left with a much smaller group of conditions to consider independently for screening suitability. These conditions have adequate screening tests and efficacious treatments, but they are detected by methods other than MS/MS, and usually as singleton tests. Table 9 lists the conditions that were determined to be without a screening methodology that has been adequately validated for general population-based screening. Kernicterus risk, as determined by the identification of hyperbilirubinemia, stands out in this group as being a very high scoring condition. Figure 8 shows the distribution of conditions into the core panel (29 conditions), the secondary target category (25 conditions), the no-test-available category (23 conditions), the conditions excluded from newborn screening due to other inadequacies in meeting the criteria (4 conditions), and the three conditions on which decision-making was deferred.

(Codes are as listed in Table 4. OA, disorders of organic acid metabolism; FAO, disorders of fatty acid metabolism; AA, disorders of amino acid metabolism; Hb Pathies, hemoglobinopathies. (*) identifies conditions for which specific discussions of unique issues are found in the main report.)
Selected condition discussions
The following conditions represent a group for which there was either deviation from the adopted data processing plan or for which unusual issues justify additional discussion. It is important to realize that the data on the laboratory sensitivity and specificity of many conditions identified by MS/MS is suboptimal, though it was sufficient to lead the expert group to classify them as it has done.

Congenital Adrenal Hyperplasia (CAH) (Table 7)

CAH includes a number of forms of the disease. The most common is 21-hydroxylase (21-OH) deficiency, which accounts for 95% of cases and is the general form that has been considered. The primary marker used in newborn screening for 21-OH, 17-hydroxyprogesterone (17-OHP), is most sensitive in identifying infants with the severe salt-wasting form in which elevations are very high. The degree to which 17-OHP is elevated in the nonsalt-wasting forms is variable. Hence, sensitivity in detecting this form by newborn screening is reduced. The 21-OH forms of CAH were not subdivided as were the hyperphenylalaninemias because the forms of 21-OH are caused by the same gene. However, many programs consider the identification of newborns with the nonsalt-wasting form to be a by-product of screening for the primary target, the salt-wasting form. In the salt-wasting form, most virilized females should be clinically detectable because of "ambiguous genitalia." However, it is important to identify the males by screening to prevent early morbidity and mortality. The other CAH types found in the remaining 5% of patients are not generally detectable by current screening strategies.
Galactokinase Deficiency (GALK) (Table 8)

Galactokinase deficiency scored 1,286 points in the analysis. However, the only consistent phenotype is cataracts. Further, in order to screen for GALK, an additional test is required. Most screening laboratories include a combination of the Beutler fluorescent spot screening test and a fluorometric or bacterial inhibition assay for total galactose. Because GALK is very rare and is part of the differential diagnosis of GALT, it has been designated as a secondary target.
Glucose 6-Phosphate Dehydrogenase Deficiency (G6PD) (Table 9)

G6PD deficiency is included in newborn screening programs in some countries, particularly in Asia and the Mediterranean, where it is the most common enzymopathy. Newborn screening programs in the Philippines and in Taiwan have reported incidence figures of 1 in 65. In the United States, G6PD screening is provided as part of the screening panel for the District of Columbia, the only program to mandate and provide screening for G6PD deficiency (Missouri has mandated G6PD screening but has not yet implemented the screening). The vast majority of the clinical data are from countries in which the risk factors (e.g., ingestion of fava beans, infections, and drugs such as sulfonamides and antimalarials) associated with G6PD status are more common and in which the prevalence is higher (e.g., tropical Africa, the Middle East, tropical and subtropical Asia, and some areas of the Mediterranean). There is very limited data available from any screening program in the United States, and the opinion of hematology experts is that the variants that exist in the United States African American population are clinically benign unless the individual is in a severely compromised (i.e., oxidized) state, usually resulting from drug exposure. Additional data are needed from programs now screening for G6PD before this condition can reasonably be considered for inclusion in a mandated core panel of screening conditions. Programs currently screening for G6PD are encouraged to collect and publish the data for determining clinical relevancy and analytical specificity and sensitivity of tests being used. Further, and as discussed below in the context of hyperbilirubinemia, some conditions are not mutually exclusive. Appropriate monitoring and management of jaundice could identify those cases at risk for Kernicterus or biliary atresia.

Hemoglobinopathies (Table 8)

Hemoglobinopathies are screened by HPLC or IEF in most programs. The primary focus of the review of scientific literature was on sickling disorders, since they have been the primary targets of newborn screening. However, there are over 700 hemoglobin variants identified by the methods used for screening, and 25-30 are considered clinically significant. Many of these conditions are associated with an Hb SS allele, but not all. Among these variant hemoglobinopathies, Hb E is by far the most common. The expert group agreed with the current recommendations that all clinically significant hemoglobinopathy variants be reported to health care professionals. It is appreciated that there may be conditions that occur more commonly in subpopulations, such as the case of Hb E in the Hmong population, and that may alter local screening practices.

Homocystinuria (HCY) (Table 7)

Homocystinuria is screened for by detection of an elevated concentration of methionine, a secondary biochemical marker of the condition. The differential diagnosis of HCY includes other defects of methionine metabolism, unrelated liver disease, common dietary artifacts (total parenteral nutrition), and analytical issues (lability of methionine internal standard). 37 Hence, screening for HCY has a lower sensitivity than other amino acid disorders included in the core panel, and requires special attention in result interpretation to minimize the rate of false positive results.
Although a primary screening based on methionine is less than ideal, the identification of newborns with a potentially treatable condition was a determining factor for the high score assigned to HCY in the survey and its inclusion in the core panel. This situation is likely to evolve when a second-tier test capable of measuring total homocysteine in bloodspots becomes routinely available by MS/MS or other methods, an improvement that will strengthen the inclusion of HCY in the core panel.

Hyperbilirubinemia (Kernicterus Risk) (Table 9)

Based on the responses of seven experts asked to complete the data collection instrument, this was among the highest scoring conditions. However, the expert group determined that there was not a screening methodology that was sufficiently well validated in a large newborn population to justify mandated universal screening at this time. Although bilirubin test result nomograms have been validated in smaller studies, the current nomograms are not sufficiently reflective of the broad population. There are also risk factors for hyperbilirubinemia associated with other conditions such as G6PD deficiency that are assessed independently. Additionally, in order for bilirubin to be used as a marker of this condition, a specimen would have to be taken and testing would likely have to occur in the local nursery, because results would need to be rapidly available based on current understanding of hyperbilirubinemia. Therefore, the question is raised whether this should be a mandated newborn screen or, rather, be instituted as an appropriate standard medical practice for any newborn. 38 Currently, universal testing for hyperbilirubinemia is not routinely conducted in most hospitals.
Methylmalonic Acidemia
Methylmalonic acidemia (MMA) exists in several etiologic forms caused by defects of either the apoenzyme (MMA-CoA mutase) or the biosynthesis of the coenzyme (adenosyl-cobalamin). The forms associated with a coenzyme defect may overlap biochemically with acquired dietary deficiencies. The biochemical marker of MMA is propionylcarnitine. Overall, there is credible evidence of less than ideal sensitivity with the current testing technology (affected cases with normal concentrations when tested at birth) and specificity (a relatively high rate of false-positive results, including cases with relatively high levels that are followed up by perfectly normal plasma acylcarnitine and urine organic acid profiles). It is likely that the introduction of a second-tier test capable of measuring methylmalonic acid in bloodspots could improve the sensitivity and specificity of newborn screening for MMA and reinforce the inclusion of this condition in the core panel. Because newborn screening is considered a program that extends beyond the screening test itself, it was decided that the disorders characterized by an elevated propionylcarnitine (mutase deficiency; cobalamin A, B, C, and D deficiencies; as well as propionic acidemia) should be subdivided, particularly since they have quite different natural histories and treatment options.

3-Methylcrotonyl-CoA Carboxylase Deficiency (3MCC) (Table 7)

The natural history of 3MCC has been driven by the clinical ascertainment of patients presenting with severe acute episodes. However, since newborn screening with MS/MS began, several individuals have been identified with the analytes associated with the condition but without apparent clinical manifestations. This situation includes cases in which the abnormal metabolites found in the neonatal bloodspot were of maternal origin (from mothers who are usually biochemically affected but symptom-free). All elements being considered, it is in the best interest of newborns affected with 3MCC that the condition be identified in all cases. 3MCC was therefore included in the core screening panel with the expectation that long-term follow-up will lead to a better understanding of this condition and its clinical significance.

Tyrosinemia Type I (TYR I) (Table 7)

TYR I is a condition caused by fumarylacetoacetate hydrolase deficiency that presents with severe liver and renal disease and peripheral nerve damage. If left untreated, most patients die of liver failure in the first years of life. Treatment with the drug NTBC (2-(2-nitro-4-trifluoromethylbenzoyl)-1,3-cyclohexanedione), diet, and liver transplant is now considered to be very effective. Newborn screening is based on the detection of an elevated concentration of tyrosine. There is evidence of less than ideal sensitivity with the current testing technology (affected cases with normal concentrations when tested at birth) and poor specificity (a very high rate of false-positive results, mostly premature babies and newborns with liver disease of variable etiology). Although the introduction of a second-tier test capable of measuring succinylacetone in bloodspots could improve the sensitivity and specificity of newborn screening for TYR I, the question of whether affected but asymptomatic newborns are being identified with any degree of consistency remains to be answered. It is a general and accepted concern that hepatorenal tyrosinemia may not be detected by MS/MS analysis of tyrosine concentration alone. However, TYR I is included in the core panel for historical reasons and because of the effectiveness of treatment. It remains important not to exclude the diagnosis of tyrosinemia on the basis of a screen-negative result.
Limitations of methodology
Over the course of this project a number of limitations became apparent. Conditions with limited available evidence reported in the scientific literature were more difficult to score and place in one of the three categories. Some conditions had been reported in 10 or fewer families in the world, and for other conditions, there were gaps in the evidence base in the literature. Many conditions were found to occur in multiple forms distinguished by age-of-onset, severity, or other features. In most cases, decisions related to newborn screening were based on the more severe and treatable forms of the conditions.
The knowledge base about genetic diseases grows through a common pathway and, unless a condition was already included in newborn screening programs, there was a potential for bias in the information related to some criteria. The most severe forms of genetic diseases are usually those first noted. As one moves into the families of these probands, this bias toward severity is reduced. However, it is not until a large general population has been studied that the true performance characteristics of the various screening tests are appreciated. Because many of the conditions under consideration are very rare and the genetic etiologies may vary by ethnicity and other parameters, a population of considerable size is required to acquire a broad understanding of the condition.
Due to the aforementioned limitations, expert opinion that considered reasoning from first principles and the quality of the studies underlying the data contributed significantly to the placement of the conditions into particular categories.
Numerous barriers to implementing an optimal screening and follow-up program were identified. Recommended actions to overcome these barriers include the establishment of a national role in scientific evaluation of conditions and the technologies by which they are screened, standardization of case definitions and reporting procedures, enhanced oversight of hospital-based screening activities, long-term data collection and surveillance, and consideration of the financial needs of programs to allow them to deliver the appropriate services to the screened population.
Finally, there were limitations in both the time and the resources available to accomplish a project as broad and comprehensive as this. A large number of conditions commonly managed by differing subspecialists were assessed and, because of their rarity, it was not unusual for there to be only a handful of acknowledged experts on a particular condition in the world. It was also necessary to include a significant number of experts not directly involved in the expert group or its work groups. In order to broaden the pool of individuals from whom we might draw for assistance with data collection and validation, it was necessary to consult with international experts.
In many ways, the analyses done under this project provide a current snapshot of the knowledge base from which recommendations are drawn. Decisions were made as to the adequacy of the evidence on which the recommendations are based. However, as is common for rare diseases, the acquisition of new knowledge is ongoing and long-term surveillance is needed to ensure that the evidence continues to support the recommendations.
Decision making for conditions being evaluated
A primary consideration in evaluating conditions is the availability of the test. The parameters that determine "availability" are numerous and vary considerably among conditions. It is also difficult to compare tests because of the differing "value" of a technology (e.g., multiplex capability, appropriateness of the site to conduct the screening service). The expert group considered whether the tests are amenable to a screening laboratory; for example, some tests are functional, such as those for hearing screening, and must be performed in the nursery. Other tests may have significant time constraints and are therefore better conducted in the hospital or birthing facility laboratory, as would likely be the case for bilirubin screening for kernicterus risk. It also should be noted that some of the conditions considered by the expert group did not meet the criterion that the test must be performed in the 24- to 48-hour period after birth (e.g., Wilson disease, familial hypercholesterolemia, Duchenne muscular dystrophy, congenital disorders of glycosylation, Turner syndrome screened by FSH levels). However, such conditions may be appropriate for screening at a later time in infancy or later in childhood. Although early and continuous screening of infants and children, like lifelong screening, is a critical public health goal, the expert group analysis was limited to conditions that should be and could be evaluated some time within the first few days of life. For the most part in the United States, the focus of traditional newborn screening programs has been on disorders detectable in the first 12 to 48 hours prior to discharge from the nursery. As such, the analyses were all predicated on testing done during this time frame. Initial screens in the neonatal period (i.e., first 28 days of life) would constitute a separate program with different costs and yields of cases and therefore should be separately analyzed.
Within this framework, the basis for decision-making, as shown in Figure 9, starts with whether a screening test is available, a criterion without which decisions to screen cannot be made. Clearly, the first decision to screen is based on the availability of a sensitive and specific screening test that can be done in the 24- to 48-hour interval after birth. However, there is occasional disagreement as to whether a test is adequately validated for use in general populations. Hence, survey respondents may not necessarily give a 200-point score but may give a score between zero and 200. We defined the existence of a screening test as corresponding to a score between 100 and 200 points. Conditions determined to have a screening test are then evaluated with respect to the criteria.
Understanding that the evidence for each criterion needs to be evaluated, conditions with validated scores above 1,200 are considered appropriate for inclusion as primary targets in a screening program. However, the expert group distinguishes between those that are primary target conditions and those that are included in the differential diagnoses of those primary target conditions. Those with tests available and scoring above 1,200 are then reconsidered as to whether an efficacious treatment is available and, if so, as to whether the natural history of the condition is well understood. If one of these is answered "no" but the condition is part of the differential diagnosis of a core condition, it is placed in the secondary target category. If it is not part of the differential of another core panel condition, the condition would not be considered appropriate for newborn screening at this time. Conditions falling between 1,000 and 1,200 are also considered appropriate for the secondary target category, while those with an overall score under 1,000 are not considered appropriate for newborn screening at this time. At the bottom of the algorithm, the expert group acknowledges that there are currently significant research studies and clinical trials in process involving screening tests and therapeutics that might make additional conditions amenable to newborn screening (e.g., lysosomal disorders). The information that determined the current recommendations of the expert group is not static. Conditions not considered appropriate for the core panel at this time should be reevaluated periodically to determine whether their status has changed.
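Read operationally, the algorithm reduces to a small set of threshold checks. The sketch below is offered only as an illustration of that logic, not as part of the expert group's methodology; the record fields, the handling of boundary scores, and the routing of high-scoring conditions that lack an efficacious treatment or a well-understood natural history are assumptions drawn from the description above.

from dataclasses import dataclass

@dataclass
class ConditionAssessment:
    name: str
    total_score: float            # validated overall score from the survey and literature review
    test_score: float             # score on the screening-test criterion (0-200)
    efficacious_treatment: bool
    natural_history_understood: bool
    in_core_differential: bool    # part of the differential diagnosis of a core panel condition

def categorize(c: ConditionAssessment) -> str:
    """Assign a condition to one of the three categories using the thresholds described above."""
    # A screening test is taken to exist when the test criterion scores 100-200.
    if c.test_score < 100:
        return "not appropriate: no validated screening test"
    if c.total_score > 1200:
        if c.efficacious_treatment and c.natural_history_understood:
            return "core panel"
        # High score but treatment or natural history is lacking: retained only
        # if the condition falls within the differential of a core panel condition.
        return "secondary target" if c.in_core_differential else "not appropriate at this time"
    if 1000 <= c.total_score <= 1200:
        return "secondary target" if c.in_core_differential else "not appropriate at this time"
    return "not appropriate at this time"

# Hypothetical example: a condition scoring 1,150 that appears in a core condition's differential.
example = ConditionAssessment("hypothetical condition", 1150, 180, False, False, True)
print(categorize(example))   # -> secondary target

In this illustration, a hypothetical condition scoring 1,150 that is part of the differential diagnosis of a core panel condition is routed to the secondary target category, mirroring the placement described above.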
The data collection instrument used in this project provides information on only one aspect of a broader decision-making process required for evaluating conditions and establishing a uniform newborn screening panel (see decision tree in Fig. 9). There are also features of tests, such as costs, that are not factored into this diagram that State newborn screening programs may take into account. The algorithm can be used prospectively as a tool to evaluate conditions for their appropriateness for addition to or removal from a screening panel (Appendix 2). Reference information about each condition the expert group evaluated and the summary information can be compared to the results of an independent assessment of a condition. Review of the scientific literature should be conducted and expert opinion should be gathered for any condition evaluated. The preference is to use data from the literature. For the most subjective criteria, expert opinion is supplemented with the views of individuals involved with newborn screening programs and child health professionals and families.
Reporting responsibilities
Many factors affect the decisions about reporting of individual test results made by laboratories and programs. Some State newborn screening programs report directly to child health professionals, while others report to designated subspecialists. Some also report test results to families. Reporting also varies according to whether the results are screen-positive or screen-negative. As noted earlier, all results of likely clinical significance that are apparent in the testing platforms targeting specific conditions should be reported. As recommended by the Sickle Cell, Thalassemia and Other Hemoglobin Variants Subcommittee of CORN (1995), each screening program should develop guidelines for follow-up of carriers of all clinically significant conditions. This currently includes hemoglobinopathies and also would now apply to CF, because for both conditions the primary- or second-tier tests reveal carrier status. Similarly, second-tier testing for molecular causes of MCAD and other disorders can lead to the identification of carriers of the conditions (for autosomal recessive disorders). The differences in expectations between the conditions in the core panel and those in the secondary target category should be noted. Inherent to conditions in the core panel is the need to maximize detection in screening while minimizing excessive false positives being referred into the health care system. For conditions in the core panel that are positive on screening due to specific analytes being elevated, the secondary targets are identified in the diagnostic laboratory. It was on the basis of firm knowledge about these conditions that most decisions were made. The identification of conditions in the secondary target category is based on the fact that results are available due to the multiplex or multianalyte nature of the screening technology used. However, it does not presume that screening tests have been maximized for the detection of these conditions or that the knowledge base is sufficient to have developed an expectation of maximum health outcomes following interventions.
Newborn screening program officials also make decisions about following patients after initial screening and reporting. For instance, false-positives are treated as true positives until proven otherwise. However, once shown to be a real false-positive result, the State newborn screening program often treats the infant as it would a screen-negative infant, without pursuing further follow-up. The expert group believes that this situation warrants additional postconfirmation decision-making but acknowledges that the programs must minimally understand final diagnoses in order to discriminate false-positives from real positives for these "secondary" targets.
State programs must decide whether the individual prevalence, costs, and burdens of identifying these additional diseases, which may not be treatable and may take resources away from the treatable diseases originally targeted through these programs, can justify their inclusion in the program. They also must take into consideration the issues raised by child health professionals who will receive results about very rare conditions about which they have limited knowledge. Regardless of whether the State newborn screening program chooses to integrate secondary target cases into its full newborn screening program, it is important that an organized system of data collection and surveillance be available. The issues in newborn screening are similar to those that the FDA has faced with therapeutics for rare diseases, in which a shift toward phase IV (postmarket) surveillance during clinical trials has emerged. This shift recognizes that the most critical data about genetic diseases arise in the context of full population analysis. However, clinical data about the "normal" population are very scarce because the research focus has been on those with disease and on the diseases themselves. The significant variability inherent in genetic diseases requires significant knowledge of the expression of genetic variants in a general population before they are well understood. Such data collection has not been a priority of funding agencies.
E. Summary
Significant variability exists in the types of newborn screening available and the conditions screened across the United States. This project was intended to evaluate the scientific and medical evidence in order to identify conditions appropriate for newborn screening. After articulating overarching principles to guide decision-making, the current practices and systems in the States/regions and other countries were assessed.
All analyses were done from the perspective of national data, since one of the goals of the project was to bring standardization and uniformity to newborn screening. It is appreciated that some conditions may occur more commonly in subpopulations, as is the case for IBG and Hb E in the Hmong population, and that this may alter local screening practices.
Criteria were defined that would be used to compare the many conditions under consideration. The scientific literature related to each criterion was reviewed for each of 84 conditions, and the opinions of at least three acknowledged experts for every condition were evaluated. At the first level of analysis, an assessment was made as to the availability of a screening test that had been validated in a large general population. Scores were then established for each condition and the conditions were assigned to one of three groups: 1. Core Panel (these conditions shared in common a high score [≥1,200], the availability of an efficacious treatment, and a knowledge of natural history adequate for inclusion in a public health screening program); 2. Secondary Targets (conditions scoring 1,000 to 1,200 that are part of the differential diagnosis of a core panel condition); and 3. Not Appropriate for Newborn Screening (conditions scoring <1,000, for which either no newborn screening test is available or there is poor performance with regard to multiple other evaluation criteria).
The scientific evidence was overlaid on an initial categorization of conditions to ensure that all conditions in the core panel had a sufficiently well understood natural history and that an efficacious treatment was available.
The expert group recommends that State newborn screening programs:

1. Mandate screening for all core panel conditions defined by this report;
2. Mandate reporting of all secondary target conditions defined by this report and of any abnormal results that may be associated with clinically significant conditions. Some are identified in screening laboratories (e.g., hemoglobinopathies) and others in the diagnostic laboratory (e.g., MS/MS screened conditions). Clinically significant conditions also include the definitive identification of carrier status;
3. Maximize the use of multiplex technologies; and
4. Consider that the range of benefits realized by newborn screening includes treatments that go beyond an infant's mortality and morbidity.

In order to successfully expand the number of mandated disorders screened for in newborns, the full breadth of the screening process and its components must be fully operational. Thus the expert group and its Diagnosis and Follow-up Work Group sought to examine the current status of screening systems throughout the United States, with particular attention paid to the diagnosis and follow-up components and their interface with the newborn screening program and primary health care professionals. In addition, the group was interested in identifying the key components of screening and highlighting some best practices that appear to improve outcomes. The six components of the newborn screening process that were assessed are:

1. Education, including prenatal education;
2. Screening, including specimen collection and testing;
3. Follow-up, including result reporting;
4. Diagnostic confirmation;
5. Management; and
6. Program evaluation and continuous quality improvement.
Much of the information reported in this section was obtained from a survey of State newborn screening programs conducted by the NNSGRC and reported at a November 2002 meeting sponsored by HRSA/MCHB and University of California, Los Angeles (UCLA), entitled "Educating Parents and the Informed Decision-Making Process Regarding Newborn Screening Procedures and the Use and Storage of Residual Bloodspots." NNSGRC has updated this information through June 2004.
Education
As screening increases there is a growing need for education across all groups of constituents, including parents and guardians, obstetrical providers, infants' medical homes, pediatric specialists, and emergency room/labor-delivery/neonatal intensive care unit (NICU) staffs. Education should occur in several places and times in the screening system, appropriate to the needs of patients, families, and health professionals.
Newborn screening programs typically provide educational materials during the perinatal period. The materials include information about newborn screening in general and brief descriptions of the conditions that are screened. Nineteen of 50 programs indicated that distribution of their newborn screening brochures was mandatory in birthing hospitals. Only one program reported not having an informational newborn screening brochure. All but three of the 50 programs indicated that their brochures included a list of disorders screened, and all but two described the specimen collection procedures and timing. Twenty provided information about when results would be available, 31 discussed the manner in which the results were reported to physicians, and 36 indicated how parents might obtain these results. As the number of conditions included in screening continues to expand, there has been a move toward providing more general information about the types of conditions screened rather than detailed information about each condition.
Prenatal Education
Few programs actively support education programs about newborn screening during the prenatal period. Ten of 50 State programs reported that newborn screening brochures typically were distributed in obstetrical offices, and 14 of 50 indicated that there was routine distribution in birthing classes. No information was available concerning quality, readability or understanding of the brochure information. The growing number of conditions for which newborn screening can be expected, combined with the existing limitations (e.g., familiarity of child health professionals with the newborn screening system) to delivering education during the perinatal period, argues for a focus on enhanced education during the prenatal period. This area of need is currently being addressed by HRSA/MCHB through a contract with UCLA.
Screening
The timing of specimen collection and delivery to laboratories also varied. According to the NNSGRC 2000 National Newborn Screening Information Report, which included information from 28 programs at the time of this report, 74% of newborns were known to have been screened prior to 48 hours of age and 22% were screened after 48 hours. Twenty-two States reported that 2.7% of infants were screened prior to 12 hours of age and 12.2% were screened between 12 and 24 hours of age. In several States, as many as 30% to 40% of infants were screened between 12 and 24 hours of age. These timing issues may have direct implications for the predictive values of testing for some conditions.
Information about the timing of specimen delivery to laboratories was not readily available. The majority of programs rely on the United States Postal Service for specimen transport, with service varying from overnight delivery to up to a week in some areas. Most specimens arrive in the laboratories within 72 hours. However, in United States territories, such as Guam and States with relatively isolated and rural populations, delivery may take a week or more. It is suggested that specimens be transported by courier services that allow for receipt at the testing laboratories within 24 hours.
The timing of specimen collection and delivery is variably tracked. For diagnosed cases, programs generally record date of birth, date and time of specimen collection, date of receipt in the screening laboratory, date of laboratory report, and date of diagnosis. However, since establishing an etiologic diagnosis may be an iterative process that increasingly refines diagnosis, it can be difficult to define the time at which "diagnosis" is established. The date when initial diagnostic tests are ordered has been used as a substitute for date of diagnosis. Some programs monitor the date of initiation of treatment, but variations in the treatments for different conditions and the tendency to institute low-risk treatments in ambiguous, nonclassical cases render this less useful unless viewed in the context of individual diagnoses. Most newborn screening programs presently operate on a 5-day work week. Some conditions can be life-threatening (e.g., MSUD, CAH, GALT, organic acidurias, fatty acid oxidation disorders, urea cycle disorders) within a few days after birth, so it is desirable to initiate specimen processing within 24 hours of specimen receipt in the laboratory, with a 5-day turnaround time between birth and the availability of the test results. However, it should be emphasized that detection of disease in the presymptomatic phase is one of the basic principles and values of screening.
The handling of screen-positive cases also was evaluated. Essentially, all newborn screening laboratories utilize a follow-up coordinator for reporting and tracking screen-positive results. For the most part, a positive result is reported only after the laboratory has verified the original finding through a second analysis of the original specimen. However, for some of the most time-sensitive conditions characterized by short-term mortality and morbidity risks (e.g., CAH, galactosemia, isovaleric acidemia, MCAD, maple syrup disease, and some of the other metabolic diseases), preliminary positive results may be reported prior to repeat testing. These results are generally reported by telephone to the health professional identified by the newborn screening submittal form or by the birthing facility and/or the newborn screening consultant. The expert group recommends standardization of reporting procedures, including: the result, the reference range, the nature of the abnormality, and an indication of the speed and progression of clinical symptoms in the absence of intervention.
Screen-negative cases are often handled quite differently from the screen-positive cases. Some programs group normal results for batch reporting, waiting until all assays have been completed. Among the more significant potential problems identified in reporting of results is the risk of interpreting screening results as equivalent to diagnostic testing results. Screening results that are in the normal range may not have the same negative predictive value as is the case for diagnostic specimens obtained due to symptoms. 39 Additionally, it is increasingly apparent that age (developmental, chronological) and condition (acute affected, feeding status, transfusion status) of the newborn when the specimen was collected can affect the test results and their interpretation. 40 Further, the use of general terms such as "amino acids normal" or "acylcarnitines normal" in reporting of screen-negative results is an issue. The general lack of knowledge among clinicians of newborn screening programs and the screened conditions makes these types of results not useful. On the other hand, clinicians may not want to take the time to read through long, detailed, normal reports. A report indicating all that was normal in an MS/MS screening profile could require considerable information to reflect the varying degree to which different conditions had been ruled out. At the same time, it can be argued that detailed reports are necessary. For example, if an infant moves from one State to another that has a different screening panel, the results may be misinterpreted if they refer to a general group of tests rather than being delineated by condition.
The fact that two categories of screening tests and result reporting are proposed also complicates this issue. States vary in which primary-target conditions they choose to detect and in the technology they use to detect them. In addition, there is variability in the testing strategies (e.g., use of second-tier testing) and in the cutoffs programs choose to define cases. The Diagnosis and Follow-up Work Group continues to consider these reporting issues.
Most programs report screen-negative results to the location identified on the newborn screening collection card, which in many cases is the hospital of birth and not necessarily the infant's medical home. It has been observed in NNSGRC reviews of newborn screening programs that many hospitals do not routinely track the results, and when the test results arrive at the hospitals, they are simply filed in the medical records without review. In addition, the tracking of newborn screening results to ensure that results are obtained on all screened newborns, while desirable, is not a uniform hospital practice. As screening expands for the pediatric population, the medical home should consider incorporating verification of newborn screening results status and keeping such records easily accessible, in a manner similar to that used for posting immunization status to medical records. Recent efforts by HRSA/MCHB to support the development of integrated and linked information systems that include newborn screening information for health care providers' direct access are an important development that may improve communication of screening results to the medical home and other appropriate health care facilities for the newborn. Additionally, national standards for the reporting of newborn screening results should be considered (similar to ACMG guidelines for prenatal DNA and other test report guidelines).
The use of second-or third-tier testing also was addressed in the work group's assessments. This practice is fairly common in newborn screening laboratories. Almost all States use a second-tier test for CH, either T4 or TSH depending on which was used in the initial screen. These second-tier tests are commonly done on the original bloodspot sample and are distinguished from repeat testing, which involves repeating the same test on the original specimen, or second tests that require a fresh sample. Some programs use a second-tier fluorometric test following an initial bacterial inhibition assay for PKU. DNA testing as a second-tier test to detect high-frequency mutations is done in some programs for CF, hemoglobinopathies, MCAD, LCHAD and galactosemia, and some are considering second-tier testing by MS/MS for CAH. With expanded newborn screening (including hearing loss screening) identifying as many as 1:250 newborns who will require diagnostic confirmation (B. Therrell, personal communication), the need to assess the capacity of the follow-up system is apparent.
Procedures for repeat testing in the newborn screening laboratory on the original bloodspot also were assessed. Essentially all newborn screening testing laboratories employ a QA step of retesting the original spot to confirm preliminary positive results. Some laboratories use a different method on second tests as a QA check. Retesting original bloodspots is distinguished from second-tier testing using a different test, and also from repeat screening, which uses a new specimen on which confirmatory testing is done. Routine repeat screening of all newborns is required in eight States, and several others strongly suggest second screening. There are specific circumstances (e.g., unsatisfactory specimens, acutely ill newborns in the NICU) under which repeat screening is commonly required. Because of the possibility of biologic false-positives, 29 States recommend/require a second specimen if tested prior to 24 hours of age and seven States require a second specimen if the newborn is tested before 48 hours of age. False-positives for CH and CAH are common in premature infants but can be dealt with through retesting when the infants are a few days older and their endocrine systems are more mature. Improved testing specificity on the initial specimen also can be achieved by using a nomogram more specific to the gestational age of the infant. False-negatives are the greater concern, since they may not be recognized easily. Programs that mandate a second test for CH report finding 5% to 15% of their total caseload through the second test, but these cases have not been studied. This number is reduced by about 50% when TSH is used as the initial screening analyte. Over half of the cases of the classical simple virilizing form of CAH may go undetected on an initial screen due to biological factors.
Reporting and Follow-up
Follow-up is the term commonly used to describe the process of reporting abnormal screening results to the medical home, specialist, and/or guardians/parents, and the initiation and tracking of the next steps in evaluation. Follow-up can be divided into two categories: short- and long-term follow-up. Short-term follow-up includes those activities that ensure all infants are screened, abnormal results are appropriately and expediently handled, and affected infants are promptly identified, appropriately referred, and treatment is initiated where applicable. Long-term follow-up extends the period of follow-up substantially to monitor continuously the medical management and care coordination of those affected who require such services. Long-term follow-up also allows assessment of the efficacy, sustainability, and safety of early treatment intervention, can uncover new disease/treatment outcomes, and is valuable for demonstrating the utility or limitations of screening.
Newborn dried bloodspot screening follow-up generally has functioned independently of newborn hearing screening follow-up, although many aspects of the follow-up procedures are similar and sometimes duplicative in terms of effort. Programs should minimize the number of places to which health care professionals must go to get information about their patients. Advances in information technology would allow direct and immediate access to screening test results, benefiting infants, health care professionals and screening programs. The experience of the newborn dried bloodspot programs could inform the hearing screening programs that have significant loss to follow-up of patients.
There is also some variation in how programs follow up unsatisfactory specimens. Some State laws and program regulations place the responsibility for a satisfactory specimen on the specimen submitter. In such cases, the program tends not to pursue unsatisfactory specimens, electing to let the submitter fulfill its responsibility to the program. It is not clear that such practices have had any effect on the liability concerns that appear to have given rise to them. In other cases, programs exercise their follow-up responsibilities in much the same way as they handle screen-positive cases. CLIA regulations require that a testing laboratory show that it has a procedure for improving specimen submissions in instances where there is unsatisfactory performance on the part of the specimen submitter.
Inadequate demographic information (e.g., patient's name, weight, or age at the time of collection) also may render a specimen unsatisfactory. Most programs lack a strict enforcement policy regarding specimen rejection related to their rules governing certain demographic information. Often the initial responsibility for determining the acceptability of the specimen's demographic information falls to the clerical personnel performing the check-in process.
In order to improve the overall quality of specimens provided to newborn screening laboratories, the best approach is to minimize the number of unsatisfactory specimens and to ensure that an appropriate submitter education program is in place. It is best to have a designated person responsible for monitoring the quality of infant demographic information and for ensuring that accurate and complete information is part of a total quality management approach to laboratory operations. Compliance with requests for specimen demographic information must be monitored and action must be taken regarding noncompliance.
Most large States use computerized follow-up systems. Because these systems can be adapted to automated error surveillance, programs are encouraged to pursue routine quality checks using their computer systems. In the few States with computer generated submitter profiles, the profiles are used to improve the quality of specimens and information submission by, for example, monitoring periodic error rate reports. Those using computerized reporting and tracking systems have reported improvements on the part of submitters when profiling reports are used and submitters receive feedback from the reports.
In the event of a screen-positive result, most programs rely on information submitted with the newborn screening specimen to identify the newborn's physician or medical home. However, many newborns lack an identified child health professional at the time of release from the hospital. Often, the demographic information submitted with the specimen lists the nursery physician or on-call physician as the physician of record. Although identifying the appropriate child health professional may be a challenge, most newborn screening programs attempt to meet this challenge. Contact with the subspecialists is usually easier, since the group is smaller and is usually more intimately involved with the newborn screening program. In the interest of further closing the gaps in the system, it would be useful if hospitals were able to ensure that a follow-up appointment has been made for all newborns prior to their hospital discharge. At a minimum, the hospital nursery staff should work with families to identify the infants' medical homes and ensure that contact information for all infants is up to date.
Once the screen-positive case has been referred into the health care system, most programs have follow-up protocols that include tracking the patient until treatment has been initiated. Some programs subcontract this responsibility to regional medical centers and do not actively pursue this information, having transferred the responsibility for this in their contracts. However, this practice may complicate ready access to short-and long-term information that would be useful for program evaluation. Some States are developing systems that allow information integration and program linkage to improve tracking of screening results and patient outcomes. For example, some use bar codes that link newborn screening filter paper cards with birth certificates, and others have considered including the newborn screening information on the face page of the medical record where vaccination information is placed to facilitate monitoring. In any case, a plan should be in place for exhaustive and documented confirmation of follow-up. Follow-up coordinators should link repeat specimens to initial specimen records, and all programs should obtain short-and long-term follow-up information.
A variety of methods of screen-positive results notification have evolved within newborn screening. In most programs, once the follow-up coordinator has provided results to the child health professional, the child health professional or a member of his or her staff informs the family of the screening results. Some programs notify both the child health professional and the family. Education is an important aspect of the notification of parents and health care professionals. Some States have developed culturally and linguistically appropriate educational materials for families but there is limited availability of similar materials for child health professionals and specialists.
Once the family is informed of the test results, the child health professional determines the need for and extent of subspecialty involvement, unless the program's follow-up is conducted directly through subspecialists. Not all conditions have similar demands for the timeliness or complexity of follow-up. The availability of informational materials for child health professionals that would facilitate their ability to participate actively in a collaborative management approach to their patients' care would be useful. Such information could include immediate management issues and relevant subspecialist referral sites. The work group on Diagnosis and Follow-up developed templates for such informational materials that have been pilot tested at limited sites. They are the basis of ongoing work developing templates for all conditions in the core panels, as well as those in the secondary target category. (Examples of these templates can be found in Appendix 3.) Although guidelines for immediate management could be readily developed, there is little standardization of parameters by which one would qualify an experienced subspecialty provider. Further, some parts of the country may have limited availability of experienced pediatric and subspecialty care health care professionals. This is particularly apparent in the area of inborn errors of metabolism; there are currently 53% fewer board certified biochemical geneticists in the United States than were practicing in 1990 and a limited number of trainees. In such circumstances, an organized system to link child health professionals with specialty care professionals would be useful. This could be accomplished through the developing HRSA/MCHB Genetics and Newborn Screening Regional Collaboratives that are intended to make national and regional services and resources accessible at the local community level.
Once confirmation of diagnosis is available to the child health professional or subspecialist, it is common for this information to be communicated promptly to the State newborn screening program. It is important that all programs obtain confirmatory outcome reports in order to fulfill their public health mandate.
Diagnosis
There is a complex relationship between the definition of screen-positive test results and the definition of the genetic condition itself. Upon identifying a screen-positive infant, algorithms through which diagnostic confirmation is obtained are followed. Some steps may involve the screening laboratory, as is the case with second-tier tests, while others involve the clinical and laboratory evaluations that lead to the final diagnosis. It is only after significant testing in a general population that the full breadth of the phenotype of the genetic condition in question is well understood. Hence, it becomes important to maintain communication between the health care professionals and the screening programs related to the false-positive and true-positive results. It will also be important to reconsider what constitutes a false-positive result, since a particular screening result may be associated with either a core panel condition or a secondary target condition. Further, it is important to develop mechanisms through which programs can be made aware of patients identified outside of the program in order to adjust program parameters to avoid "missed" cases. Finally, given that genetic tests can provide information about affected individuals and carriers, clear policies should be in place about communicating such information.
Management
Many programs do not have educational materials to facilitate and optimize patient care once a patient is diagnosed. Such information is commonly in the purview of the experts who develop guidelines for treatment. Information dissemination practices that facilitate collaborative management between the child health professionals and specialists would be useful.
Over the longer term of intervention and treatment there is usually insufficient information shared between health care professionals and the programs, and contact beyond the initial treatment phase is rare. This gap might only be filled through the development of information collection systems that facilitate the integration of program information with other health care information.
The availability of and access to therapeutic interventions varies among the States. Some States provide funding for medical foods, either completely or on a sliding scale based on income. Costs not covered by insurance may be covered through Title V funds and Medicaid. However, these sources are less likely to fund genetic counseling, penicillin for sickle cell disease, or thyroid hormone replacement therapy.
The range of health care professionals considered necessary for managing a particular condition is only loosely defined. Medical and nonmedical services are generally defined by the health care professionals to whom the infants have been referred. However, because almost all programs provide no funding for health outcome evaluation, few long-term studies exist. Beyond one to three years of age, there is little coordinated or systematic monitoring by the programs.
Program Management
Programs use a mix of models for management and development of their newborn screening activities. Many States have external advisory committees, although some rely only on internal advisory groups, which may not include consumers and experts for conditions considered by the programs.
B. Program evaluation
Several of the goals of this project are aimed at standardizing language and identifying the data or information needed to evaluate newborn screening program performance. Historically, newborn screening programs have been evaluated only internally, with the exception of the screening laboratory, which generally must meet CLIA requirements even though some of the analytes may not be specifically covered. Since 1987, HRSA/MCHB has made available to the States consultative program reviews by a team composed of experts in various aspects of newborn screening activities, and this has been continued as a responsibility of the NNSGRC. Besides providing annual State data specific to the Title V Block Grant performance measure, programs voluntarily report their program performance data to the NNSGRC for compilation and publication as an annual newborn screening data report. These reports are available at the NNSGRC website and can be used for inter-and intraprogram comparison (See www.genes-rus.uthscsa.edu). Uniform performance measures, however, could enable better and more standardized comparative assessment of newborn screening programs. Performance standards should be related to the needs of those with the specific conditions identified. Uniformity of language and standardization of performance measures will allow programs to move from independent evaluation to a comparative system targeted at high quality and efficiency.
Program Standards
A fundamental goal of newborn screening is benefit to the newborn by identifying a treatable condition. Variability exists among the conditions in the core panel regarding the speed with which they must be treated in order to minimize or eliminate the negative consequences of the condition. In newborn screening programs, speed of screening and reporting results is sometimes driven by the conditions that have the most demanding time needs. For example, an elevated 17-hydroxyprogesterone indicates a high likelihood that classical CAH is present and should therefore be pursued promptly, since in some instances death can occur from salt wasting within the first two weeks of life. Similarly, an elevated C8 acylcarnitine indicates a high likelihood that MCAD is present and should therefore be pursued promptly, since in some instances death can occur within the first two weeks of life. This contrasts with the finding of hearing loss, for which the interventions can be delayed for two to three months without significantly affecting speech development. The importance of education of families and the medical home about timing and the consequences of later notifications is apparent.
Appendix 4 lists specific steps in the newborn screening program process that should be monitored. Program performance can be improved by integrating data monitoring into policies and procedures and then modifying programs as problems are identified. Furthermore, development of a uniform approach to data collection and program evaluation allows for the comparison of program performance among States.
National Programs of QA
On a national basis, there is no comprehensive QA program for newborn screening aside from that provided for screening laboratories by CDC (see Fig. 10). CDC offers a proficiency testing and quality assurance program specifically for newborn screening laboratories, the Newborn Screening Quality Assurance Program. The newborn screening laboratories are regulated under CLIA of 1988. FDA provides additional oversight of manufacturers who provide testing products to newborn screening laboratories, and CDC provides a service that validates the filter paper bloodspot collection devices. The NNSGRC, funded by HRSA/MCHB, provides consultative program reviews that include all aspects of the newborn screening system (upon the official invitation of individual State newborn screening programs), and collects and assimilates national newborn screening data.
The Joint Commission on Accreditation of Hospital Organizations (JCAHO) plays a role in the oversight of activities within hospitals. For several reasons, JCAHO's activities have not been specifically directed toward the hospital's role in newborn screening. Even though birth hospitals collect the vast majority of screening specimens, record demographic information, and receive newborn screening test results, hospitals have not traditionally been held accountable to JCAHO for newborn screening activities per se. Historically, hospital responsibilities for tracking newborn screening testing results have been varied, particularly since the newborns are usually not in the hospital when the screening results are completed and returned. Most State screening regulations are silent on hospitals' responsibilities, though some include specific requirements, and hospitals and administrators can in some States be held liable if newborn screening practices are improperly performed. Oversight of newborn screening has been complicated by the fact that the oversight of clinical activities is limited compared to the regulation of laboratories, which includes maintaining records of specimen submission and result reporting. In many hospitals, newborn screening specimens are collected and submitted to the screening laboratory directly from the newborn nursery, bypassing some areas of this laboratory oversight. Hospitals appear to assume greater responsibility for screening conducted within the nursery, for example, screening for hearing loss. In such circumstances, hospitals have a clear responsibility to make patients aware of any critical laboratory information stemming from their hospital stay. However, since hearing screening results are immediately available, the task of initiating notification and arranging for next steps in evaluation is simplified.
Discussions are ongoing regarding the possibilities of improving the ways in which hospitals provide information to newborn screening programs to ensure that adequate information is available in a timely manner for recontacting families or health care professionals and establishing follow-up while still maintaining appropriate privacy of the patient's medical information. 2 At the level of diagnosis and follow-up, there are several programs that have worked toward ensuring quality. Some organizations, such as CORN, AAP, ACMG, and the Society for Inherited Metabolic Disorders (SIMD), have been involved in the development of practice guidelines for the diagnosis, treatment, and management of many of these conditions. In addition, there are programs with "deemed" status through CLIA that offer proficiency testing and inspections of the laboratories providing diagnostic services for the conditions included in newborn screening programs. However, at the present time most analytes that are screened are not included in this program, although their addition is under active discussion.
Some programs have developed internal QA programs that variably address the components of the newborn screening system. While all States tabulate the number of tests done, many cannot relate tests to birthing records in order to ascertain the percentage of newborns screened. On the other hand, programs routinely track time from birth to diagnosis and treatment, and the numbers of newborns lost to follow-up, which are extremely important aspects of the screening system. Most programs maintain records of unsatisfactory specimens but vary in their follow-up actions and educational programs to improve specimen quality. In this respect there is perhaps a role for the federal government in providing some form of national program oversight. Furthermore, there are very different forms of oversight for laboratory services than for clinical services. In order to continue to improve the quality of newborn screening programs, several actions should be taken:

1. There should be uniformity in the types of data collected by programs (see Appendix 4) in order to compare program performance among States. In addition, reporting to a central authority should be required.
2. Periodic performance reviews of all components of newborn screening programs should be required. This should be a federal responsibility.
3. Language and terminology should be standardized in order to better compare performance among programs.
4. Turnaround time in reporting screen-negative results should be improved.
   a. At a minimum, all results from the initial screening test (some States perform a second test later) should be available within five days of blood sampling so that they are of use at the first posthospital discharge visit and facilitate awareness of lifelong screening. Most results should be available within two days of the specimen arriving in the laboratory, and specimens should arrive in the laboratories within three days of collection. (These targets lend themselves to routine monitoring; see the sketch after this list.)
5. Diagnostic laboratory QA programs should be enhanced to include all conditions screened in newborns.
6. Organized systems to allow for the collection and analysis of data about patients are important in defining the standards to be met and improving our understanding of these typically very rare conditions. Data from population-based screening are the optimal source of unbiased information about conditions, and required reporting should be instituted.
7. Hospitals and JCAHO have significant roles to play, and standards need to be developed to improve quality, minimize errors, and facilitate tracking of newborns requiring active participation in testing follow-up.
8. All newborn screening laboratories should be CLIA-certified and should participate in CDC and CAP/ACMG proficiency testing programs or other equivalent programs as applicable.
9. All States should have an active system-wide newborn screening QA and total quality management program.
10. To bring uniformity to programs across the country and thereby create a more equitable system for all Americans, national oversight and authority must be provided with adequate resources. Consideration should be given to institutionalizing the role of the HRSA-funded NNSGRC, which currently offers on-site expert consultative reviews to the State newborn screening programs.
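As an illustration only, and not as part of the expert group's recommendations, the turnaround targets in item 4a could be monitored with a simple check over each specimen's collection, receipt, and reporting dates. The three-day, two-day, and five-day thresholds below restate the targets above; the field names and example dates are assumptions.

from datetime import date, timedelta

def turnaround_flags(collected: date, received: date, reported: date) -> list:
    """Flag a specimen whose handling misses the turnaround targets in item 4a."""
    flags = []
    if received - collected > timedelta(days=3):
        flags.append("specimen took more than three days to reach the laboratory")
    if reported - received > timedelta(days=2):
        flags.append("result took more than two days after specimen arrival")
    if reported - collected > timedelta(days=5):
        flags.append("result not available within five days of blood sampling")
    return flags

# Hypothetical example: collected June 1, received June 5, reported June 7.
print(turnaround_flags(date(2004, 6, 1), date(2004, 6, 5), date(2004, 6, 7)))
# ['specimen took more than three days to reach the laboratory',
#  'result not available within five days of blood sampling']

Programs with computerized follow-up systems could run checks of this kind routinely, in line with the automated error surveillance and quality monitoring discussed earlier.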
C. Cost-effectiveness analysis
This project focused primarily on a scientific analysis of conditions and the features that should be considered when deciding whether they should be included in a newborn screening program. However, costs often are the basis on which such decisions are made. Review of the few available cost-effectiveness studies of newborn screening suggests that they are often too limited in scope. Some studies have focused on the short-term costs and benefits of the screening stage and the immediate steps following the identification of a screen-positive infant. Most address tests for only a small number of disorders, and none has explored the cost savings and clinical benefits of tests such as MS/MS. [41][42][43][44][45][46] A basic cost-effectiveness analysis was conducted to better inform our decisions. Costs and benefits related to screening for particular conditions or groups of conditions were evaluated after mapping them over major disease outcomes (e.g., life expectancy, cerebral palsy/stroke, seizures, developmental delay, hearing loss, vision loss). Costs were obtained from the literature. 2,42,43,[47][48][49][50][51] Benefits were determined from expected outcomes with and without early treatment or intervention. Quality-adjusted life years (QALYs) were then compared to costs. Where appropriate, tests capable of being multiplexed with other tests for different conditions were assessed independently and as a group. Results were found to be stable by sensitivity analysis.
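For readers unfamiliar with this kind of comparison, the standard way costs and QALYs are combined is the incremental cost-effectiveness ratio: the cost difference divided by the QALY difference between screening and no screening. The sketch below illustrates only that general calculation; the figures are hypothetical and do not reproduce the analysis cited here.

def incremental_cost_per_qaly(cost_with: float, cost_without: float,
                              qalys_with: float, qalys_without: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per QALY gained when
    screening is added, relative to no screening."""
    delta_cost = cost_with - cost_without
    delta_qalys = qalys_with - qalys_without
    if delta_qalys <= 0:
        raise ValueError("a cost-per-QALY ratio is only meaningful when QALYs are gained")
    return delta_cost / delta_qalys

# Hypothetical numbers only: screening costs more in this example but gains QALYs.
icer = incremental_cost_per_qaly(cost_with=120_000, cost_without=100_000,
                                 qalys_with=30.5, qalys_without=29.9)
print(f"{icer:,.0f} dollars per QALY gained")   # 33,333 dollars per QALY gained

When a screening strategy both saves money and gains QALYs, it dominates the comparator and no ratio needs to be computed.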
The results of these analyses indicate that all newborn screening programs evaluated improved outcomes and most reduced overall costs (Carroll and Downs, in press). Screening for CAH increased the cost per QALY gained, but the cost was well within the range conventionally considered cost effective. Screening for galactosemia was the only strategy that would be considered not cost effective in the base case analysis. However, under some reasonable assumptions, it can be shown to be cost effective. The identification of potentially affected individuals at such an early time in life leads to many years over which the benefits accrue and, in aggregate, the benefits outweigh the costs.
Technologies such as MS/MS further save money due to their multiplexing capability and low screening false-positive rates. MS/MS, used to screen for multiple conditions, had the greatest impact on outcomes and saved the greatest amount of money in the analysis. Virtually all screening for conditions that are treatable with significantly beneficial outcomes can be justified with benefits increasing as more conditions are included. The analysis also showed that clinical benefits and savings depend on low false positive rates and timely follow-up and treatment of positives, emphasizing the importance of an integrated screening and follow-up program. [41][42][43][44][45][52]
Data and Analytical Needs
Screening
The evidence base for disorders potentially amenable to screening is limited; the questions that must be answered to inform our decisions about the future of our newborn screening programs are numerous and the issues complex. There are cutting-edge new technologies emerging that can have a significant impact on screening programs. However, technology assessments have limited capacity to identify issues about promising technologies early in their development (e.g., is there sufficient capacity in the system to test the 4.1 million United States newborns? Are the tests adequately validated?). This raises important questions about how to implement new technologies for screening. Historically, as new technology is validated on a known cohort, it is then applied to a prospective screening cohort in a linked or unlinked (e.g., HIV screening) method, with or without reporting, and with or without randomization (e.g., CF). Many State newborn screening programs have awaited data from other State pilot or trial programs before investing in the costs of incorporating new technologies into testing and follow-up protocols. The potential for screening beyond the first few days of life is increasing. Determining how best to link existing public health activities (such as immunization) that occur at specific clinical points later in life offers opportunities to screen for additional conditions that are less amenable to screening in the first 24 to 48 hours of life. Information technology has opened up opportunities to improve the systems that support the medical home's integrated role in newborn screening, and there is always the opportunity to improve informatics and communications and their integration into public health information systems and registries.
There is an ongoing and growing need to articulate a research agenda for the many conditions that are already part of newborn screening. For example, the impact on the optimal timing of screening of newborns in the neonatal intensive care unit that have received hyperalimentation or packed cell transfusions remains unclear.
Follow-Up
Many questions remain about the impact of screening for a larger number of rare disorders, as well as what the true significance is of a "false-positive" or "transiently abnormal" screening test. 53 These may require costly, long-term evaluation projects in order to obtain the statistical power needed to better understand these issues in rare diseases. Again, we may need a broader national approach to data collection and analysis.
Diagnosis
Considerable research potential exists in the area of diagnosis of these rare diseases. The preferred approaches and methods of diagnosis and confirmation of presumptive diagnoses remain to be determined and our understanding of the natural history of the conditions and the associated genotype-phenotype correlations can only improve. There are many questions to be answered for each of the conditions for which screening is currently offered. For instance, there is still little information available about the outcomes of infants identified in G6PD screening programs. The interrelated roles of genetic risk factors and the environmental exposures that trigger disease expression are areas where large collaborative research projects will be needed. The use of the National Children's Study as a component of newborn screening research offers a number of opportunities. Similarly, we need to understand the issues and barriers that lead to the lack of hearing screening follow-up to determine etiology.
Management
The emerging area of collaborative disease management offers many opportunities to improve our newborn screening programs. The nature of our health care system is such that the bridges between child health professionals and specialists must be strengthened. Issues of interest include: 1) how best to partner with the medical home; 2) how to facilitate the transition to adult care (childhood cancer survivorship model); and 3) what are the expected outcomes for the adults with these now chronic diseases. It is also likely that situations similar to that of maternal PKU will arise with other metabolic diseases, such as 3-MCC, or the endocrinopathies, such as CH. Long-term outcomes research will require organized systems of data collection and monitoring. There are also gaps in our understanding of treatment issues for many conditions (e.g., nonclassical CAH). We also need to elucidate the long-term behavioral and educational issues associated with children with conditions detected by newborn screening.
Evaluation
Program evaluation can also benefit from organized collaborative research programs. The creation of registries for long-term outcomes research and for system validation offers a clear pathway to improvement of the programs.
Health Systems And Outcomes Research
Our health care system continues to evolve in parallel with the evolution of the newborn screening programs. The increasing diversity of the United States population necessitates that health disparities research, as it relates to diagnosis, management, and long-term follow-up of patients identified in newborn screening, be enhanced.

Education
The trend toward more direct consumer involvement in health care decisions and prevention indicates the need for enhanced educational programs for the public. Further, the rarity and complexity of the many conditions already screened suggest a need for improved educational programs for the professionals. Opportunities remain to improve our understanding of the primary communication and education needs related to a screen-positive result in newborn screening. Similarly, many questions remain about the issue of appropriate decision-making relative to newborn screening. There is a need to understand the issues that arise in the delivery of prenatal education and to determine the best models for such education while still working to broaden overall genetics public education. There is also a need to improve our understanding of how attention to cultural diversity and literacy could contribute to effective newborn screening programs. In order to better understand the limitations of and impediments to education, best-practice models related to who provides services (e.g., birth educators, obstetrician-gynecologists, subspecialists) need to be identified, and there is a need to understand how these services can be provided outside the delivery room or nursery, and when they are best provided. The role for cross-specialty education and continuing medical education for health care professionals is also an area that would benefit from study. Last, there is considerable opportunity for research into the ethical, legal, and social issues that arise with expanded newborn screening and newborn screening in general.
Health Systems As Related To Newborn Screening
A better understanding of the organization and functioning of our newborn screening-related health care systems would also benefit the continued development of programs. In particular, studies of systems of care that would offer the highest quality delivery of newborn screening services would improve the programs.
Other
There are numerous ancillary issues that relate to improving newborn screening outcomes. These include: 1) expanding screening opportunities prenatally and after birth when timing of testing, identification, and intervention offer additional value for health outcomes in the pediatric population; 2) ongoing research efforts to identify better and new screening and intervention strategies for rare and common disorders; and 3) continued research into outcomes of transiently abnormal screens to determine if such test results have predictive value for later diseases as well as to measure the psychosocial impact of such results (e.g., costs of vulnerable child issues). Some of the diseases for which postnatal newborn screening is recommended may additionally benefit from prenatal detection; however, prenatal screening is not presently universally available. We may gain a better understanding of the incidence and spectrum of diseases associated with perinatal and early childhood mortality by implementing uniform child autopsy policies and procedures which ensure availability of appropriate studies (including metabolic and genetic studies for all perinatal deaths, including stillbirths) and early unexpected childhood deaths.
E. Future needs
Hopefully all screening programs can benefit from a more robust national role and increased national standards and policies for newborn screening. Because so many of the conditions screened in newborns, or under consideration for screening, are rare, most States that undertake evaluations of the scientific basis for screening of conditions must rely on the same relatively small group of patients identified throughout the world. There is a potential national role in providing scientific evaluation of conditions and defining core condition panels. This would allow the States to apply the best science to their own considerations when determining their role in expanded screening. Practice guidelines also could be developed at a national level by interested organizations. There is also a potential expanded national role in oversight and enforcement, data collection, program evaluation, and the development of educational materials to support newborn screening.
Depending on the overall incidence of particular conditions, regional cooperatives should coordinate access to health care professionals, serve as coordinators and repositories for data collection, provide long-term follow-up capability when resources and expertise are limited, facilitate transition (and access) from pediatric to adult care, and provide education. The distribution of primary, secondary, and tertiary services is largely based on the incidence of a condition and the complexity of its short-and long-term diagnosis and management. For more common conditions with easier diagnosis and follow-up, there is likely to be sufficient local health care expertise for patient care. As incidence decreases and complexity increases-particularly for rare metabolic diseases-services become more difficult to access. Developing resources and infrastructure to ensure that health care professionals with appropriate expertise are available locally, regionally, and nationally will be important to ensuring access to high-quality services.
States also must retain their significant roles and responsibilities. They have a clear authority with regard to oversight and evaluation, as well as enforcement. There is a need to integrate the various systems of health care coverage and payment through flexible and comprehensive financing of services. Service coordination at both State and local levels must be considered, as well as program integration with the State Children's Health Insurance Plan, early intervention programs, Title V programs, Medicaid, and similar services.
In considering the national role in newborn screening, it is apparent that there are already significant barriers to the creation of a model newborn screening system in the United States. For example:
1. Financing across State and county lines is constrained by Medicaid rules;
2. Service delivery is fragmented on a disease basis;
3. There is lack of universal access and ability to access the medical home;
4. There is insufficient support to bridge geographic barriers;
5. It is difficult to identify experienced health care professionals for complex care (e.g., centers of excellence for genital reconstructive surgery for CAH; confirmation of metabolic diagnoses);
6. Misinterpretation of privacy regulations (e.g., HIPAA) (see Appendix 5 for discussion and clarification of HIPAA related issues in the context of a public health program);
7. There is underutilization and lack of uniformity of information technology;
8. Collaborative management and care is constrained by systems of reimbursement;
9. There is variability in State mandates;
10. State sovereignty sometimes dictates individual approaches; and
11. There is variability in financing of screening programs.
F. Summary
In order for expanded newborn screening to be implemented universally, a well operating and standardized newborn screening system must be in place. At the present time there is significant variability among the State programs with regard to policies and practices employed after screening and in initial notification of health care professionals. The expert group evaluated the components of the system and their associated functions with a primary focus on the parts of the system that interface specialty care professionals with either the newborn screening program or the child health professionals.
A basic cost effectiveness study of newborn screening was conducted. The results of this analysis demonstrated that newborn screening is cost effective when compared to other recommended medical expenditures. This supports the recommendations made in Section One of this report regarding the need to expand the breadth of conditions that should be included in core screening panels and the secondary target category.
The scientific analyses and systems evaluations also identified gaps in our knowledge base and pointed to areas in which research is needed. The expert group recommends that:
• Programs continue to improve the components of the system beyond the initial screening, communication of those results, and ensuring that the newborn enters into short-term follow-up. To accomplish this:
   • reporting procedures should be standardized; and
   • reports of confirmatory results should be obtained;
• There should be improved oversight (e.g., JCAHO) of the hospital-based screening activities to improve tracking of screen-positive cases;
• There should be more uniformity in the language and definition of the performance standards (e.g., repeat test, second test) monitored and reported by programs;
• The QA programs involving the diagnostic and follow-up system should be enhanced;
• National oversight and authority with appropriate resources should be provided; and
• Systems should be in place for collection of data about individuals identified as screen-positive in newborn screening programs.
Meaning of Screening Result
Decreased thyroxine (T4) accompanied by increased thyroid stimulating hormone (TSH) suggests primary hypothyroidism; decreased T4 and decreased TSH suggests secondary hypothyroidism.
Some programs screen only for primary hypothyroidism by only measuring TSH. An increase in TSH suggests congenital hypothyroidism.
Metabolic Description
Lack of adequate thyroid hormone production.
Confirmation Of Diagnosis
Takes 1-3 days. Diagnostic tests include reduced serum T4, T3 uptake, free T4 or T4 index, and serum TSH, which will be increased in primary hypothyroidism and reduced in secondary hypothyroidism.
Clinical Expectations
Asymptomatic in the neonate. If untreated, results in developmental delay/mental retardation and poor growth.
Resources for Referral
Insert local, state and regional resources

Additional Information
Gene Tests/Gene Clinics: www.genetests.org

Appendix 4: Program standards

Initial Newborn Screening Activities
1. Document complete reporting of all results of all liveborn newborns within three months of the close of the year (target 100%).
   a. Initial screening specimens should be collected after 24 hours, but as close to discharge as possible. Newborns with prolonged hospital stays should be tested before day seven, regardless of reason for hospitalization.
   b. The number of newborns discharged from hospitals without screening and the number of these infants involved in follow-up testing should be documented.
   c. The number of newborns discharged without screening for which screening occurred through follow-up at some later time should be documented.
2. Document and report the number of out-of-hospital births (e.g., using birth certificates) and the numbers of those tested versus those not tested.
3. Document the number of unsatisfactory specimens for any reason (target is 0%). This includes specimens considered unsatisfactory due to:
   a. laboratory/analytical issues (e.g., a poor specimen);
   b. clinical issues (e.g., timing of specimen acquisition); and
   c. information issues (i.e., inadequate demographics such as name, data completeness such as no discharge time or specimen collection times noted).
4. Document the rate of unsatisfactory specimens followed up with a satisfactory test (target 100%).
   a. Document the number of newborns discharged prior to 24 hours and retest all;
   b. document the number of newborns discharged prior to 24 hours and initiate a retest of all within 6 days of life; and
   c. monitor unsatisfactory specimen data and report plans for corrective action.
5. Document the number of newborns screened positive or not normal for each disorder on the screening panel. For programs that universally require a second screen, document the number of newborns receiving the required second screen.
6. Document the rates and types of disorders with a confirmed clinical diagnosis.
7. Document time from birth to reporting of all presumptive positive screens.
8. Document time from birth to:
   a. testing to establish diagnosis; and
   b. initiation of intervention or treatment by condition.
9. Document:
   a. that confirmed positives are treated where indicated and comply with the therapeutic program;
   b. appropriate outcome variables, long-term health status, and development, at least annually; and
   c. the offering of services and utilization for positive cases (consider matched controls).
Under the Privacy Rule, covered entities may use and disclose protected health information (PHI) without individual authorization for treatment, payment, or health care operations. "Operations" include most routine activities of a covered entity. Research is not included in operations as defined by the regulations. Uses and disclosures of PHI beyond treatment, payment, or health care operations are only lawful if 1) pursuant to a valid authorization; or 2) pursuant to an exception set out in the Privacy Rule.
PHI can be disclosed to third parties with an individual's written authorization. ("Individual" is defined in the regulations as a competent adult or a personal representative acting on behalf of an incompetent person.) For the purposes of newborn screening, the newborn is represented by parent(s) or a legal guardian.
State laws "serving a compelling need related to public health, safety or welfare" remain in effect after April 14, 2003. Specifically, state laws concerning the reporting of disease and the conduct of public health surveillance, investigation, or intervention remain in effect (45 CFR Section 160.203). Further, covered entities can disclose otherwise protected patient information for public health activities without prior notice to the individual or the signing of an authorization. Pursuant to section 164.512(a) and (b) of the regulations, covered entities may disclose information for public health surveillance, public health intervention, and other public health purposes. These provisions make it clear that state newborn screening and reporting laws and programs remain in effect.
Under the Privacy Rule, a covered entity may use or disclose PHI without consent, authorization, or an opportunity to agree or object by the patient where: 1. the use or disclosure is required by law (including a public health law such as a newborn screening law); or 2. the disclosure is to a public health authority authorized by law to receive the information for public health activities (164.512(a) and (b)); or 3. the disclosure is for treatment needs of the patient. Treatment includes provision, coordination, or management of health care and related services by one or more providers, including coordination and management by a provider with a third party.
The Privacy Rule permits public health reporting, but it does not require it. Reporting requirements are established by provisions of state and local laws.
There are two kinds of public health disclosures under the Privacy Rule: mandatory and permissive. Mandatory disclosures are those required by law, and the Privacy Rule places no limit on the amount of information disclosed. Section 164.512(b) also permits covered entities to disclose PHI to public health authorities and their authorized representatives for public health surveillance, investigations, and interventions. A "minimum necessary" requirement applies to "permissive" disclosures, thereby limiting such disclosures to the "minimum necessary to accomplish the intended purpose of the use, disclosure, or request" (Section 164.502(b)(1)).
A "Public Health Newborn Screening Program" includes initial screening, QA, diagnosis, follow-up, contracts with ac-ademic laboratories and consultants, and management of the research uses of the stored data. A program must share data among state agencies, laboratories, physicians, and state-and Institutional Review Board (IRB)-approved researchers to fulfill the public health mandate. Because each state's program is run in different ways, each needs to consult with its advisors about its status as a "covered entity," "provider," or other public health-related status. For example, under the Privacy Rule, if data are collected as surveillance data under 164.512(b) by a public health authority authorized by law to collect or receive such information for the purpose of preventing or controlling disease, any subsequent use or disclosures are not required to comply with the Privacy Rule. State law may provide added protections. If the public health authority is also a covered entity, the Privacy Rule would apply for subsequent uses, for example, research (see discussion below).
Once screening has occurred, the results, the diagnosis, a care plan, and follow-up treatment can be transmitted to the laboratory, the public health department, and the physician(s) providing care. This is allowed under the regulations because of the public health mandate and because once a patient has received and acknowledged the Notice of Privacy Practices (a document that explains the patient's rights and the actions the provider will take to protect privacy), the PHI can be used and disclosed. The patient would receive a notice from the hospital where the birth occurred and from the primary care physician.
Security
If PHI is transmitted electronically (which means by computer, not by phone or fax), transmission must be secure. The security conditions required are set forth in HIPAA security regulations found in relevant parts of 45 CFR Parts 160 and 164. Those regulations become effective April 21, 2005. They require adequate firewalls, encryption, password protection, and backup so that electronic transmissions can protect the confidentiality of the PHI.
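The regulations cited above require encryption and related safeguards for electronic PHI but do not prescribe a particular tool. A minimal sketch of symmetric encryption before transmission, using the Python cryptography library, is shown below; the record content and inline key generation are hypothetical, and a real program would follow its own key-management and security policy.

```python
from cryptography.fernet import Fernet

# Hypothetical: in practice the key would come from a managed key store, not be generated inline
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"newborn_id": "NB-0001", "t4": "low", "tsh": "high"}'  # illustrative PHI payload
token = cipher.encrypt(record)          # ciphertext suitable for electronic transmission
assert cipher.decrypt(token) == record  # receiving end recovers the record with the same key
```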
Research
Research conducted by state or federal programs as mandated by relevant law is permitted as a public health activity.
For research by private researchers or research not mandated by law (e.g., a prevalence study using identifiable names linked to DNA), the rules of research would apply. Research with human subjects conducted with federal funding (or involving researchers otherwise covered by federal law) is regulated by 45 CFR Part 46.
Because research is not considered to be part of treatment, payment, or operations, a researcher wishing to access PHI from a covered entity must either:
1. de-identify the PHI so that the patient cannot be determined (illustrated in the sketch after this list). De-identification occurs once identifying items are redacted from the data to be used by the researcher, including:
   • names; and
   • all geographic subdivisions smaller than a state, including address, except for the initial 3 digits of a zip code;
OR
2. obtain a waiver of authorization, where a Privacy Board or an IRB waives the need for authorization in accordance with specific requirements designed to protect privacy. Those requirements include a finding that the research could not practicably be conducted without the waiver, that data will not be reused or disclosed to a third party, and that there is an adequate plan to protect privacy (164.512(i));
OR
3. construct a Limited Data Set, where the data are provided to a researcher who has signed a Data Use Agreement. A Limited Data Set can include dates and geographic information, but not street addresses or other direct identifiers listed above. A Data Use Agreement establishes the permitted uses of the limited data set and says the researcher will not further use or disclose the information, will protect it, and will not identify or contact the individuals whose data are in the set.
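A minimal sketch of the first option (de-identification) is given below. The record fields and the truncation of the zip code to its initial three digits follow the two identifier classes quoted above; the field names are hypothetical, and a complete de-identification would remove every identifier listed in the Privacy Rule, not only these two.

```python
def deidentify(record):
    """Drop names/address fields and reduce geography to state plus a 3-digit zip prefix (illustrative only)."""
    cleaned = dict(record)
    for field in ("name", "guardian_name", "street_address", "city"):
        cleaned.pop(field, None)
    if "zip" in cleaned:
        cleaned["zip3"] = str(cleaned.pop("zip"))[:3]
    return cleaned

sample = {"name": "Jane Doe", "guardian_name": "J. Doe", "street_address": "1 Main St",
          "city": "Springfield", "state": "OH", "zip": "45501", "tsh_result": "elevated"}
print(deidentify(sample))  # {'state': 'OH', 'tsh_result': 'elevated', 'zip3': '455'}
```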
For research using DNA derived from dried bloodspots:
a. there must be de-identification, which can most easily be accomplished by simply snipping off a piece of the specimen and providing no other information; or
b. there must be parental or legal guardian written authorization on a Privacy Rule compliant form; or
c. there must be a waiver of the need for authorization properly granted by a Privacy Board or IRB; or
d. there must be a Limited Data Set containing only general geographic information and relevant dates, coupled with a data use agreement signed by the researcher (see privacyrulesandresearch.nih.gov/).
Conclusion
Because newborn screening and related activities are permitted under 45 CFR Section 164.512 (a) and (b) and are required by state law, these activities and associated research can proceed under the Privacy Rule. The greatest challenge is to confront the often pervasive misinformation about the Privacy Rule that sometimes has been used to justify the nondisclosure of newborn screening and other public health information.
Public Awareness and Sentiment Analysis of COVID-Related Discussions Using BERT-Based Infoveillance
Understanding different aspects of public concerns and sentiments during large health emergencies, such as the COVID-19 pandemic, is essential for public health agencies to develop effective communication strategies, deliver up-to-date and accurate health information, and mitigate potential impacts of emerging misinformation. Current infoveillance systems generally focus on discussion intensity (i.e., number of relevant posts) as an approximation of public awareness, while largely ignoring the rich and diverse information in texts with granular information of varying public concerns and sentiments. In this study, we address this grand challenge by developing a novel natural language processing (NLP) infoveillance workflow based on bidirectional encoder representation from transformers (BERT). We first used a smaller COVID-19 tweet sample to develop a content classification and sentiment analysis model using COVID-Twitter-BERT. The classification accuracy was between 0.77 and 0.88 across the five identified topics. In the sentiment analysis with a three-class classification task (positive/negative/neutral), BERT achieved an accuracy of 0.70. We then applied the content topic and sentiment classifiers to a much larger dataset with more than 4 million tweets in a 15-month period. We specifically analyzed non-pharmaceutical intervention (NPI) and social issue content topics. There were significant differences in terms of public awareness and sentiment towards the overall COVID-19, NPI, and social issue content topics across time and space. In addition, key events were also identified that were associated with abrupt sentiment changes towards NPIs and social issues. This novel NLP-based AI workflow can be readily adopted for real-time granular content topic and sentiment infoveillance beyond the health context.
Background
Social media have become the major avenue for the public to receive health information from health agencies and news outlets and to share their own opinions on emerging health issues, especially during large outbreaks such as the 2009 H1N1 pandemic influenza, 2014 Ebola, 2015 Zika, and COVID-19. They have also become an important source for various health agencies and researchers to understand public opinion and promote certain health campaigns. During the 2014 Ebola outbreak, researchers noticed a significant upward trend in Twitter posts and Google searches in the USA [1,2]. Moreover, during the 2016 Zika epidemic, multiple health agencies started to use social media as communication channels and adopted effective communication strategies to improve the dissemination of public health-related issues [3]. COVID-19 has become one of the most discussed topics on social media platforms across the globe.
Pandemics always involve issues beyond medical and health aspects alone. They are often associated with cultural, social, economic, and political issues [4,5]. In the early stage of COVID-19, the majority of the discussions and debates on social media were about intervention policies such as quarantine and social distancing. As the pandemic progressed, the discussion shifted towards mask wearing; the government's handling of the crisis; and vaccine development, roll-out, and mandates. COVID-19 is still one of the most popular topics on social media [6], and many internet users retrieve COVID-19-related information from and share their opinions on social media platforms.
Relevant Work
Research on the monitoring and surveillance of social media discussions about health issues, commonly known as health infoveillance, started in 2000. Current infoveillance is achieved with the combination of natural language processing (NLP), time-series analysis, and geospatial analysis techniques. Various NLP applications, including topic modeling, topic classification, sentiment analysis, and semantic analysis, can give a comprehensive understanding of the topic, sentiment, and semantic of public opinion and sentiment regarding a health issue. Monitoring the trend of certain topics helps predict the outbreak and progress of an epidemic, such as influenza [7][8][9][10][11][12][13], Zika virus [14], and the recent COVID-19 [15,16]. More specific topics are of interest in infoveillance, especially during the COVID-19 pandemic. Non-pharmaceutical interventions (NPIs), including social distancing, stay-at-home orders, quarantine, and mask wearing, have been effective yet controversial ways to reduce airborne disease transmission [17][18][19][20][21].
Large pandemics, including COVID-19, have never been isolated medical or health issues and are always associated with multiple aspects beyond health. Current NLP-based infoveillance can generate more comprehensive characterizations of the diverse topics and sentiments using textual data based on the rich linguistic, sentiment, and semantic features. There are word frequency-based NLP approaches, such as term frequency-inverse document frequency (TF-IDF) and latent Dirichlet allocation (LDA) [22]. Another approach is to apply the encoding of text with pre-trained embeddings, including Word2Vec [23], GloVe [24], and BERT [25]. The embeddings are then fed into certain machine learning or deep learning methods, such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs), for downstream tasks.
In this study, we focused on word- or sentence-level embedding to understand the contextual information of online discussions on COVID-19 on Twitter. Word embedding is the process of transforming textual words into numerical vectors. There are traditional static word embeddings, such as Word2Vec, FastText [26], and GloVe, where the embedding is trained on a large corpus of texts. However, this kind of static embedding cannot effectively reveal the true meanings of a word in different contexts. Another potential problem is that these text embeddings are usually trained on a more general corpus, since embeddings need to be versatile across different contexts. However, such embeddings often do not perform as well in specific contexts. As shown in this study, the language used in social media can be very different from the corpus upon which these text embeddings are trained; this can result in low performance in topic modeling tasks.
To address these problems, pre-trained embedding models such as BERT, ELMO [27], XLNet [28], and GPT-2 [29] have been developed to provide richer and more dynamic context-dependent information. BERT is one of these pre-trained embedding models for various NLP applications. BERT learns context from the input textual data with its initial embedding and positional information. Most importantly, BERT is able to infer a word's distinct meanings in different contexts by providing unequal vector representations, which static embeddings are not capable of achieving. BERT makes it possible to pre-train the model on the specific domain, such as health, using transfer learning techniques. Transfer learning usually ensures a better representation of the specific domain that the model is finetuned upon and leads to better performance in downstream tasks. Regarding medical and health domains, BioBERT [30], BlueBERT [31], and Med-BERT [32] are a few examples that have been pre-trained on biomedical publications and electronic health records. Regarding social media applications, examples include BERTweet and the more specific COVID-Twitter-BERT [6], trained on COVID-related tweets. These more specifically pre-trained BERT variants show substantial performance improvements over the original BERT model. In addition to token-level embedding, there have been semantic embeddings for sentences, such as SentenceBERT [33].
Data Source and Sampling
Twitter is one of the most popular social media platforms for online discussions about COVID-19. In this study, we used Twitter samples to analyze the trend and sentiment of COVID-related topics in the USA.
First, we used a relatively small tweet sample to develop the topic classification model. We randomly sampled 2000 tweets from 2020 using the keywords listed in Table 1. A filter was applied during the sampling process to ensure that the tweets had a geolocation tag in the USA. For this task, only English tweets were collected. In addition, we also excluded tweets that had fewer than 10 tokens for better semantic meaning and more accurate BERT classification. We also ensured that each user could only be sampled once. This criterion avoided the potential sampling bias of a few active users or bots who excessively tweeted about COVID-19. The key terms for sampling are provided in Table 1. Note that certain terms were discriminatory (e.g., China virus). However, we still included these inappropriate terms to increase sampling coverage for research purposes. Based on the sampled tweets, our team with a domain expert in COVID-19 developed the codebook in Table 2. After high inter-coder reliability was established, the final codebook covered 5 major topic categories, and a single tweet could belong to multiple topics and multiple sub-topics. Each sampled tweet was annotated by at least two annotators, and if discrepancies occurred, the tweet was then sent to the domain expert for the final determination of the topic category. For analyzing the trends of topics and sentiments of COVID-related tweets, we used a larger dataset than the one used for topic model development. We randomly collected 12,000 English tweets per day from 1 March 2020 to 31 May 2021 with COVID-19-related terms using Twitter's Academic API V2. In total, 6000 of the 12,000 daily tweets were geo-tagged, with their geolocation in the USA. The remaining 6000 daily tweets were without geo-tags for comparison.
Preprocessing
Prior to training the BERT topic classification model, each tweet went through a series of preprocessing steps. User names and URLs in the tweet text were replaced with a common text token. We also replaced all emojis or emoticons with textual representations using the Python emoji library. The title of the URLs and hashtags were preserved as additional features in addition to the tweet text. Each tweet was treated as a text input and then fed into the BERT model. The 280-character limitation of a tweet was within the longest sequence input limitation of the BERT model.
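A minimal sketch of the token-replacement steps described above is given below, assuming the Python emoji package for emoji-to-text conversion; the exact replacement tokens are illustrative.

```python
import re
import emoji

def preprocess_tweet(text: str) -> str:
    """Replace user names and URLs with common tokens and convert emojis to textual form."""
    text = re.sub(r"@\w+", "@user", text)            # common token for user names
    text = re.sub(r"https?://\S+", "httpurl", text)  # common token for URLs
    text = emoji.demojize(text)                      # emoji -> textual representation
    return text.strip()

print(preprocess_tweet("@CDCgov Masks work! 😷 https://example.com"))
# roughly: "@user Masks work! :face_with_medical_mask: httpurl"
```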
Text Embedding
Text embedding was an essential part of BERT in this project to reflect the contextual, sentiment, and semantic features of the text. The accurate embedding of the text resulted in a better representation of the text and subsequently more accurate topic modeling. In order to further increase model performance and efficiency, we adopted COVID-Twitter-BERT, which was specifically pre-trained on COVID-19-related tweets and aligned with the tasks in this study. Our preliminary analysis showed that COVID-Twitter-BERT had substantial performance improvement over the generic BERT-Base model.
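The snippet below sketches how a tweet can be embedded with a pre-trained COVID-Twitter-BERT checkpoint via the Hugging Face transformers library; the checkpoint name and the use of the [CLS] vector as the tweet representation are assumptions for illustration, not details reported in this study.

```python
import torch
from transformers import AutoTokenizer, AutoModel

CKPT = "digitalepidemiologylab/covid-twitter-bert-v2"  # assumed public checkpoint name
tokenizer = AutoTokenizer.from_pretrained(CKPT)
encoder = AutoModel.from_pretrained(CKPT)

batch = tokenizer(["Stay home and wear a mask."], padding=True,
                  truncation=True, max_length=128, return_tensors="pt")
with torch.no_grad():
    output = encoder(**batch)
cls_embedding = output.last_hidden_state[:, 0, :]  # one vector per tweet (CLS token)
print(cls_embedding.shape)  # (1, hidden_size) for this BERT-large variant
```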
Topic Classification
Once the tweet was embedded, we used the embedding to develop a multi-label machine learning classification model that was able to accurately identify the topics of each tweet. Since each tweet could have multiple topic labels out of a total of five possible topics, we turned this multi-label classification task into five independent binary classification tasks. Five different binary classifiers were trained to identify the topics of each tweet. During the training stage, class imbalance was present, as each classifier pitted one class against the remaining four classes. The class weights of each classifier were further tuned to ensure that the classifiers generated tweet topic labels that reflected the true percentage of tweets in the dataset.
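A simplified sketch of this one-vs-rest decomposition is shown below, using logistic regression heads on pre-computed tweet embeddings rather than the fine-tuned BERT classifiers used in the study; the topic names, class weighting, and decision threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

TOPICS = ["clinical", "countermeasures", "policy", "social_issues", "other"]  # illustrative labels

def train_topic_heads(X, Y):
    """X: (n, dim) tweet embeddings; Y: (n, 5) binary topic matrix (a tweet may carry several 1s)."""
    heads = {}
    for i, topic in enumerate(TOPICS):
        clf = LogisticRegression(max_iter=1000, class_weight="balanced")  # offsets class imbalance
        clf.fit(X, Y[:, i])
        heads[topic] = clf
    return heads

def predict_topics(heads, X, threshold=0.5):
    """Return, per tweet, every topic whose binary head exceeds the probability threshold."""
    probs = np.column_stack([heads[t].predict_proba(X)[:, 1] for t in TOPICS])
    return [[TOPICS[j] for j in np.where(row >= threshold)[0]] for row in probs]
```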
The performance of topic classification based on the text embedding of the BERT model and traditional logistic regression was evaluated. In addition, we also compared the classification performance of the generic BERT-Base model against the specifically pre-trained COVID-Twitter-BERT.
Sentiment Analysis
After the content topics were identified, we further evaluated the sentiments of the tweets. Sentiment analyses based on VADER (Valence Aware Dictionary and sEntiment Reasoner) and BERT were performed. VADER is a lexicon-and rule-based sentiment analysis tool specifically tuned to sentiments expressed in social media. VADER not only identifies the binary positive or negative sentiment of a tweet but also quantifies the degree of the positive or negative sentiment of the post. Similar to the topic classification task, BERT was also used to train a sentiment classifier. In this study, BERT was applied to develop a 3-class sentiment classifier: positive, neutral, or negative sentiment of a tweet. In this study, the sentiment of a tweet was assumed to be mutually exclusive, that is, each tweet could only have one specific sentiment. This assumption could be relaxed in future studies.
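For the VADER baseline, the sketch below maps the compound score to the three classes using the tool's conventional ±0.05 cut-offs; these thresholds are the usual convention rather than values reported in this study.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def vader_label(text: str) -> str:
    compound = analyzer.polarity_scores(text)["compound"]  # score in [-1, 1]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(vader_label("Vaccines are finally here, what a relief!"))  # likely "positive"
```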
Performance Evaluation
The performance of the classifiers was evaluated using the corresponding confusion matrix obtained by testing sets with four elements: true positive (TP), true negative (TN), false positive (FP), and false negative (FN). Classification performance metrics included accuracy (ACC = (TP + TN) / (TP + TN + FP + FN)), precision (PPV = TP / (TP + FP)), recall (TPR = TP / (TP + FN)), and F1 score (F1 = 2TP / (2TP + FP + FN)). High ACC, F1, PPV, and TPR scores indicated robust model performance, indicating that the classification models were validated. These metrics also allowed us to compare different text embedding and classification models so that the most accurate and reliable models could be identified.
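These metrics can be computed directly from predicted and gold labels; the sketch below uses scikit-learn on an illustrative binary example rather than the study's actual predictions.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # illustrative gold labels for one binary topic classifier
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # illustrative model predictions

print("ACC:", accuracy_score(y_true, y_pred))    # (TP + TN) / total
print("PPV:", precision_score(y_true, y_pred))   # TP / (TP + FP)
print("TPR:", recall_score(y_true, y_pred))      # TP / (TP + FN)
print("F1 :", f1_score(y_true, y_pred))          # 2TP / (2TP + FP + FN)
```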
The complete analytical framework was written in Python 3.7 with necessary supporting NLP and machine learning libraries. The codes are freely available upon request.
Topic Classification
We developed and compared the classification performance of the generic BERT-Base and COVID-Twitter-BERT models. Figure 1 shows that the optimal number of epochs to balance training loss and validation loss, as well as to reduce overfitting, was five. The comparison among the different models showed that the deep learning-based BERT models significantly outperformed the traditional logistic regression models based on classification accuracy (ACC = (TP + TN) / (TP + TN + FP + FN)). In addition, COVID-Twitter-BERT also showed improved performance over the generic BERT-Base model. These results demonstrated the advantage of large-scale deep neural networks that are pre-trained on specific domain data (Table 3). In this study, we focused on two topics that were specifically related to the COVID-19 pandemic: confounded social issues and non-pharmaceutical interventions (NPIs). The NPI topic was the combination of certain sub-topics in the classes of countermeasures and policies, and the related topics included masks, other PPE, disinfection, social distancing, stay-at-home, and shelter-in-place. The performance is shown in Tables 4 and 5. Overall, the two BERT classification models for social issues and NPIs both showed excellent performance, with accuracy of over 87%, as well as high precision and recall.
Sentiment Classification
Next, we investigated how BERT identified sentiments in COVID-19 discussions on Twitter. There were three classes: positive, neutral, and negative sentiments. For sentiment analysis, eight epochs were chosen instead of five as in the previous topic classification, because sentiments were more challenging to model and took more training to update the optimal model parameters. Practically, it was also more difficult to identify the sentiments of tweets, as online discussions could be frequently sarcastic or informal. We compared the sentiment analysis performance of the VADER and BERT models. Labels 0, 1, and 2 corresponded to negative, neutral, and positive sentiments.
The sentiment classification performance is shown in Tables 6 and 7. Overall, BERT was able to achieve the accuracy of 0.7 in the three-class sentiment classification task, significantly outperforming the previous benchmark method, VADER (ACC < 0.6). These results demonstrate the capability of NLP methods based on deep neural networks, especially transformers, which are able to further identify contexts in texts.
Analysis of Topic Trends and Sentiments
Once accurate COVID-19 topic and sentiment classification models were developed using BERT, we further applied the topic and sentiment classifiers on a much larger scale, i.e., to a 4 million-tweet sample, to comprehensively understand the spatio-temporal variability of COVID-19 discussions on Twitter. The trend analysis data input was smoothed using a 7-day Gaussian smoother with standard deviation of 3.
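One way to implement the 7-day Gaussian smoother, assuming the intent is a Gaussian kernel with standard deviation 3 truncated to a 7-day window, is via SciPy; the window interpretation and the illustrative daily counts are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

daily_counts = np.random.poisson(lam=120, size=90).astype(float)  # illustrative daily topic counts

# sigma = 3 days; truncate = 1.0 limits the kernel to radius 3, i.e. a 7-day window
smoothed = gaussian_filter1d(daily_counts, sigma=3, truncate=1.0)
```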
Comparison between Geo-Tagged and Non-Geo-Tagged Tweets
A total of 6000 geo-tagged tweets per day and 6000 non-geo-tagged tweets per day were sampled and analyzed to evaluate differences in topic distributions and trends between the two groups. Figures 2 and 3 show that the topics were very similar and highly correlated between the two groups. The Pearson correlation coefficients were 0.79 and 0.8 for the topics of NPIs and social issues, respectively, showing that the topics discussed in geo-tagged tweets were highly correlated with those in tweets without geo-tags. We also found that the proportion of NPI topics was significantly higher in geo-tagged tweets than in non-geo-tagged tweets, indicating that users who shared their geo-tags were more engaged in discussing NPI-related issues. On the other hand, users without geo-tags showed more interest in social issue-related topics. We also compared sentiments in tweets with and without geo-tags. The overall sentiments were based on the arithmetic mean sentiment across all sampled tweets per day in the two groups. Overall sentiments ranged from −1 to 1, where 0 indicated a neutral sentiment. The sentiment trends in the two groups are shown in Figure 4. Figure 4 shows substantial overall sentiment differences between tweets with geo-tags and tweets without geo-tags. Nevertheless, the Pearson correlation coefficient of 0.84 showed that the sentiments of the two sets were highly correlated. Overall, tweets with geo-tags had significantly higher sentiment scores (i.e., more positive sentiments) than tweets without geo-tags.
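A sketch of how the daily mean sentiment series and their correlation can be computed is shown below, assuming a pandas data frame of per-tweet records with hypothetical column names and values.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-tweet records: date, whether the tweet was geo-tagged, and its sentiment (-1/0/1)
df = pd.DataFrame({
    "date":       ["2020-05-01"] * 4 + ["2020-05-02"] * 4 + ["2020-05-03"] * 4,
    "geo_tagged": [True, True, False, False] * 3,
    "sentiment":  [1, 0, 0, -1, 1, 1, 1, 0, 0, -1, -1, -1],
})

daily = (df.groupby(["date", "geo_tagged"])["sentiment"]
           .mean()                  # arithmetic mean sentiment per day and group
           .unstack("geo_tagged"))  # columns: False (no geo-tag), True (geo-tagged)

r, _ = pearsonr(daily[True], daily[False])
print(f"Pearson r between the two daily series: {r:.2f}")
```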
We further compared sentiments towards NPIs and social issues. Figure 5 shows sentiments towards the topic of NPIs. Tweets with geo-tags had more positive sentiments than tweets without geo-tags. There were several sudden changes in sentiment towards NPIs. Based on the time frame of these abrupt sentiment changes, we hypothesized that such changes were caused by the real-world events of former President Trump testing positive for COVID-19 and the CDC updating the guideline on mask mandates. The two events were highly associated with NPIs, showing that our BERT sentiment classification model was able to successfully capture the changes. Figure 6 shows that the overall sentiment towards social issues was more negative in tweets with geo-tags than in tweets without geo-tags. Compared with NPIs, the sentiment towards social issues was −0.42 (i.e., overall negative), while the sentiment towards NPIs was 0 (i.e., overall neutral). Therefore, overall public sentiments on social media significantly differed between the two topics. Similar to NPI sentiment changes, we were able to identify some key real-world events that caused the sudden changes in public sentiments towards social issues. Examples included the murder of George Floyd, former President Trump admitting to downplaying the COVID-19 threat, Trump being diagnosed with COVID-19, and the 2020 US election. Some of these events were not reflected in the sentiments towards NPIs (e.g., murder of Floyd), indicating that the BERT model was capable of identifying and separating non-relevant tweets.
Comparison between Top 50 Cities and the Rest of the Country
In this section, we further present the comparison of content topic trends and sentiment trends between tweets geo-tagged in the top 50 most populous cities in the USA and the rest of the geo-tagged tweets. There were a total of 13,299 cities in the 2 million geo-tagged tweet sample, with the top 50 cities contributing 36.5% of the sample. The top 50 cities with their numbers of tweets are presented in Table 8. First, we compared the proportions of NPI and social issue topics in the top 50 cities and the rest of the country. Figure 7 shows that the proportion of NPI topics was around 11% of the overall tweets. At the beginning of the pandemic (April 2020 to August 2020), people who lived in the top 50 most populous cities were more likely to discuss NPIs on social media than people from less populous areas. We also observed a convergence in NPI discussions between populous, large metropolitan areas and less populous regions after September 2020. This matched the trajectory of the COVID-19 pandemic in the USA, as major metropolitan areas were impacted the most at the beginning; thus, people in these populous regions were more concerned about NPIs and engaged in NPI-related topics on social media. Regarding the social issue topic, as Figure 8 shows, the proportion was generally around 16% of the overall tweets. Topics on social issues could abruptly arise when real-world events happened, as we discuss above. In contrast to NPI topics, users in the top 50 most populous cities showed lower interest in social issues during the pandemic than the rest of the country. Nevertheless, people in large metropolitan regions discussed social issues more than other regions around late May 2020, when George Floyd was murdered.
We compared overall sentiments between the tweets generated in the top 50 most populous cities and tweets from the rest of the country. As Figure 9 shows, there was a clear and consistent difference throughout the study period, as users from the top 50 cities generally expressed more positivity than users from the rest of the country. While the overall sentiments between the two groups were highly correlated, with a Pearson correlation coefficient of 0.92, there was a substantial 0.03 sentiment difference. Tweets sent from the top 50 cities were generally 22% more positive regarding the pandemic.
We also compared the sentiments specifically regarding NPIs and social issues between the two regions. As Figures 10 and 11 show, the sentiments of tweets from the top 50 cities were more positive towards NPIs than those of tweets from the rest of the country. We also observed a substantial drop in sentiments towards NPIs around September 2020, which was probably due to the unclear messages that the CDC sent regarding mask mandates. The public then began to show negative sentiments towards NPIs.
Regarding social issues, the sentiments of tweets from the top 50 cities were consistently more positive than those of tweets from the rest of the USA. However, the Pearson correlation coefficient was only 0.51 for sentiments towards social issues. On the other hand, the Pearson correlation coefficient was 0.72 for the comparison of NPI sentiments between the top cities and the rest of the USA.
Discussion
In this study, we developed an innovative BERT-based NLP workflow for effective content topic and sentiment infoveillance during the COVID-19 pandemic. We first developed a content topic classifier and a sentiment classifier based on a smaller sample of COVID-19-related tweets using the COVID-Twitter-BERT variant. We compared the performance of the baseline BERT models and the more specifically tuned COVID-Twitter-BERT models. The COVID-Twitter-BERT models demonstrated higher performance in classifying content topics and sentiments than the baseline BERT-Base models and significantly outperformed non-deep learning logistic regression models.
We then applied the developed BERT topic classification and sentiment classification models to more than 4 million COVID-19-related English tweets over 15 months. We were able to characterize the overall temporal dynamics of COVID-19 discussions on Twitter, as well as the temporal dynamics of more specific content topics and sentiments. Using the NPI and social issue topics as examples, we were able to accurately characterize the dynamic changes in public awareness of these topics over time, as well as sentiment shifts during different stages of the pandemic. In general, we found that the public had an overall neutral sentiment towards NPIs, but an overall negative sentiment towards various social issues. Compared with many infoveillance studies during the COVID-19 pandemic, our study is one of the few that utilized advanced AI NLP techniques to identify the real-time content topics and sentiments of online discussions from massive social media data. In addition, we also developed a highly effective BERT-based content and sentiment classification model for health-related discussions.
Our granular-level intelligent infoveillance is based on the deep learning NLP technique BERT. It enables public health practitioners to perform scalable infoveillance to zoom in and zoom out of an issue of interest (e.g., the overall COVID-19 pandemic) and understand various content topics associated with the issue (e.g., different aspects of COVID-19, such as clinical/epidemiological information of the disease itself, NPIs, vaccination, policies and politics, social issues, etc.). By understanding how public awareness and sentiment vary across time and space during different stages of the pandemic, public health practitioners can develop more effective and targeted health communication strategies and better address public concerns towards specific content topics, such as vaccination, NPIs, and social issues, including health disparity and inequality during the pandemic and other health emergencies.
Future Work
An extension of this study using the current BERT-based NLP infoveillance workflow is to quantify the spatio-temporal variability of public sentiment towards vaccination, one of the most discussed topics during the COVID-19 pandemic in the USA and across the globe. We demonstrated that our infoveillance workflow could successfully monitor public awareness and sentiment towards NPIs. Similarly, public perception towards vaccination could also be explicitly evaluated. Similar to our study on NPIs, public health practitioners could quickly respond to abrupt drops in sentiment towards vaccination and effectively identify potential external influencing events to develop countermeasures.
Our infoveillance workflow is also spatially explicit. We compared tweets generated in the top 50 most populous cities and in the rest of the cities in the USA. We observed a substantial sentiment gap between these major metropolitan areas and less populated regions. Social media users in major metropolitan areas expressed more positive sentiments towards the pandemic, NPIs, and social issues in their tweets. Future improvement in this workflow could incorporate more scalable geospatial information, such as identifying content topics and sentiments across geospatial scales, from the county level to the nation level. Public health practitioners could not only zoom in and understand more granular content topics but also zoom across geospatial scales to understand the spatial heterogeneity of content topics and sentiments. By incorporating more spatially explicit variables, for instance, various social and structural determinants of health (SDOHs), public health practitioners could identify key influencing factors for certain content topics at granular spatial scales.
Our BERT infoveillance workflow is modularly designed and is able to be integrated with other analytical techniques, e.g., time-series analysis and signal processing, to detect certain key events during the pandemic that could have driven the abrupt changes in public sentiment towards NPIs and social issues. The future version of this infoveillance system is expected to automatically detect the key turning points of public perception towards a specific content topic and effectively identify potential external real-world drivers of the sudden sentiment changes.
The modular design of our BERT-based NLP infoveillance workflow can also be adapted for future applications such as misinformation detection. Using NLP and other analytical techniques, we could quickly find potential misinformation content topics and promptly respond to emerging misinformation topics. More granular characterization of online discussion reveals more specific contents and sentiments that are highly associated with misinformation, similar to the "digital antigen". Therefore, the infoveillance workflow is also able to actively send alarms to public health practitioners when certain key content topics of emerging misinformation match the "digital antigen" of misinformation.
Another extension of our infoveillance workflow is to further investigate content topic and sentiment shifts in social networks, using graph and network analysis. For instance, we could collect all replies to a specific original post, construct the network of information dissemination, and evaluate potential content and sentiment shifts from the original post in the network. We could then identify key vertices in the network that contribute to sentiment shifts, i.e., online influencers. Network metrics, such as various centrality scores, can be used to quantify the potential effectiveness of influencers in driving online discussions on social media.
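A minimal sketch of the proposed network extension is shown below, assuming reply edges and per-reply sentiment shifts have already been extracted from the threads; betweenness centrality is used here purely as an example metric.

```python
# A sketch of ranking potential online influencers in a reply network.
import networkx as nx

def build_reply_graph(edges):
    """edges: iterable of (source_user, target_user, sentiment_shift) tuples."""
    g = nx.DiGraph()
    for src, dst, shift in edges:
        g.add_edge(src, dst, sentiment_shift=shift)
    return g

def top_influencers(g, k=10):
    """Rank users by betweenness centrality as a proxy for influence."""
    centrality = nx.betweenness_centrality(g)
    return sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:k]
```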
In summary, we successfully developed a highly effective BERT-based infoveillance workflow for content and sentiment analysis. The workflow serves as a cornerstone for more extensive research and applications of large-scale social media analytics beyond the public health context.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Twitter data used in this study are freely available upon request to the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
|
2023-03-19T15:18:41.134Z
|
2023-03-17T00:00:00.000
|
{
"year": 2023,
"sha1": "689e0dfcec660611c1f84490b3055b020b7bd0e1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2673-2688/4/1/16/pdf?version=1679031976",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4bad9dec341cf0adae8074c2689f299dfe8f765b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
}
|
231861883
|
pes2o/s2orc
|
v3-fos-license
|
Dynamics of transcendental Hénon maps III: Infinite entropy
Very little is currently known about the dynamics of non-polynomial entire maps in several complex variables. The family of transcendental Hénon maps offers the potential of combining ideas from transcendental dynamics in one variable, and the dynamics of polynomial Hénon maps in two. Here we show that these maps all have infinite topological and measure theoretic entropy. The proof also implies the existence of infinitely many periodic orbits of any order greater than two.
Introduction
A transcendental Hénon map is a holomorphic automorphism of C^2 of the form F(z, w) = (f(z) − δw, z), where δ ∈ C \ {0}, and f is a transcendental entire function. Transcendental Hénon maps form a bridge between two distinct families of holomorphic maps whose dynamical behaviors have been studied intensively in recent years: the family of complex (polynomial) Hénon maps, and the family of transcendental entire functions.
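For readers who want to experiment numerically, the following small sketch iterates one concrete transcendental Hénon map, with f(z) = sin(z) and δ = 0.5 chosen purely for illustration; this particular example is not taken from the paper.

```python
# A minimal numerical sketch of the map F(z, w) = (f(z) - delta*w, z), here with
# f = sin and delta = 0.5 as an arbitrary illustrative choice.
import cmath

def henon_step(z: complex, w: complex, delta: complex = 0.5):
    """One application of F(z, w) = (sin(z) - delta*w, z)."""
    return cmath.sin(z) - delta * w, z

z, w = 0.5 + 0.1j, 0.3 + 0.0j
for k in range(6):
    z, w = henon_step(z, w)
    print(k, z, w)
```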
In two previous papers [ABFP19, ABFP20] we studied the dynamics of these maps, demonstrating non-trivial dynamical behavior. For example, the Julia set is always non-empty. Here we provide further evidence of non-trivial dynamics:
Theorem 1.1. Any transcendental Hénon map has infinite topological entropy.
As an immediate corollary we obtain an alternative proof that the Julia set is non-empty, and by the Variational Principle that the metric entropy is also infinite. The proof implies that a transcendental Hénon map has infinitely many periodic cycles of any order greater than 2. This result gives a complete description of the possible periodic cycles, since there exist transcendental Hénon maps without any periodic cycles of orders 1 and 2 [ABFP20]. We recall the analogy with one-dimensional transcendental functions, which may not have any fixed points, but always have infinitely many periodic cycles of any order greater than 1.
The topological entropy of holomorphic maps is a topic with an interesting history. It was shown by Gromov that the topological entropy of a rational function of degree d is log(d), a result written in a preprint in 1977, but not published until 2003 [Gro03]. In the meantime the result was obtained independently by Lyubich [Lju83].
Smillie [Smi90] proved in 1990 that a polynomial Hénon map of degree d has topological entropy log(d). Preliminary results for transcendental Hénon maps were obtained by Dujardin [Duj04], who proved that the entropy of a Hénon-like map of degree d is log(d) as well, and used this fact to construct examples of transcendental Hénon maps with infinite topological entropy. † Supported by the SIR grant "NEWHOLITE -New methods in holomorphic iteration" no. RBSI14CFME. Partially supported by the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006.
‡ This project has been partially supported by the project 'Transcendental Dynamics 1.5' inside the program FIL-Quota Incentivante of the University of Parma and co-sponsored by Fondazione Cariparma, and by Indam via the research group GNAMPA.
The fact that transcendental functions in one complex variable always have infinite entropy was proved in the paper [BFP19] by the three last authors. However, after completing our paper we learned that this result was obtained earlier by Markus Wendt [Wen02, Wen05b, Wen05a], who never published this work. The proof we present in this paper will closely follow ideas from the proof of Wendt.
1.1. Outline of the proof. Following Wendt we give different proofs depending on whether the family of rescaled maps f n (z) := f (n · z)/n is quasi-normal or not (see Definition 2.8). If this family is quasinormal, Wendt showed that f acts as a polynomial-like map of arbitrarily large degree on larger and larger domains, hence has infinite entropy. Similarly, we show that F acts as a Hénon-like map of arbitrarily large degree, hence by Dujardin's result F also has infinite entropy.
When the family (f n ) is not quasi-normal, Wendt shows that one can find an arbitrarily large number of disks with pairwise disjoint closures, such that each of these disks contains a univalent preimage of all but at most 2 of the disks; a consequence of the Ahlfors Five Islands Theorem [Ber00]. In the Hénon setting, we prove similarly that any suitable graph over each of these disks contains a preimage of a suitable graph over all but at most 2 of the other disks. In both the quasi-normal and the non quasinormal setting we obtain completely invariant compact subsets on which the entropy is arbitrarily large. It follows that the topological entropy is infinite.
In section 2 we recall background on topological entropy, including the definition of entropy on noncompact spaces that we will use. We also discuss the notion of quasi-normality, and recall Ahlfors Five-Islands Theorem and some of its consequences. In section 3 we prove Theorem 1.1, first under the assumption that the family (f n ) is quasi-normal, and then under the assumption that the family is not quasi-normal. In section 4 we prove the existence of periodic cycles of any period at least 3. In section 5 we construct examples of transcendental Hénon maps with arbitrarily slow or fast growing entropy in terms of the size of the compact sets.
Acknowledgment. The result obtained here answers a question asked to us by both Romain Dujardin and Nessim Sibony. We are grateful for their suggestion, which stimulated this research. The proof of our result closely follows the ideas of Markus Wendt in unpublished work. We are grateful for Walter Bergweiler for bringing this work to our attention, and for further discussion on this topic.
Preliminaries
2.1. Entropy. For maps acting on compact spaces the concept of topological entropy has been introduced in [AKM65].
Definition 2.1 (Definition of topological entropy for compact sets). Let f : X → X be a continuous self-map of a compact metric space (X, d). Let n ∈ N and δ > 0. A set E ⊂ X is called (n, δ)-separated if for any z ≠ w ∈ E there exists k ≤ n − 1 such that d(f^k(z), f^k(w)) > δ. Let K(n, δ) be the maximal cardinality of an (n, δ)-separated set. Then the topological entropy h_top(X, f) is defined as
h_top(X, f) := lim_{δ→0} limsup_{n→∞} (1/n) log K(n, δ).
In the literature there are several non-equivalent natural generalizations for the definition of topological entropy on non-compact spaces (see for example [Bow73b], [Bow71], [Bow73a], [Hof74], and more recently [HNP08]). We will use the definition introduced by [CR05] which is smaller than or equal to all the ones mentioned above.
Definition 2.2. Let f : Y → Y be a continuous self-map of a metric space (Y, d). Then the topological entropy h top (Y, f ) is defined as the supremum of h top (X, f ) over all forward invariant compact subsets X ⊂ Y. If there is no forward invariant compact subset the topological entropy is defined to be 0. Remark 2.3. Notice that this definition does not depend on the metric inducing the topology on Y , and is invariant by topological conjugacy, hence the name "topological entropy" is justified. Notice also that in [REF] the last three named authors used a slightly different definition of topological entropy, a priori larger than or equal to the above one.
Lemma 2.4. Let f : Y → Y be an injective continuous self-map of a metric space (Y, d). Then h_top(Y, f) equals the supremum of h_top(Λ, f|_Λ) over all completely invariant compact subsets Λ ⊂ Y.
Proof. Let X be a compact forward invariant subset of Y. Consider the compact set Λ := ∩_{n≥0} f^n(X). Since f is injective, it follows that the map f|_Λ : Λ → Λ is bijective, and in particular Λ is completely invariant by f. The following classical result yields the lemma.
Theorem 2.5. Let g : K → K be a continuous self-map of a compact metric space (K, d) and let Λ := ∩_{n≥0} g^n(K). Then h_top(K, g) = h_top(Λ, g|_Λ). For the proof, see e.g. Block and Coppel.
2.2. Ahlfors Theorem and quasinormality. The following is a version of the Ahlfors five islands Theorem which can be found in [Ber00], Theorem A.1. A more classical formulation of Ahlfors' five islands theorem and Corollary 2.7 in terms of regularly exhaustible Riemann surfaces can be found in [Sch93], Chapter 1.9. We recall the definition of quasi-normality from the Appendix in [Sch93].
Definition 2.8. Let Ω ⊂ C be a domain. A family F of holomorphic functions on Ω is quasi-normal if for every sequence (f n ) of functions in Ω there exists a finite set Q ⊂ Ω and a subsequence (f n k ) of (f n ) which converges uniformly on compact subsets of Ω \ Q.
The rest of this subsection is devoted to the proof of the following Proposition 2.9, which in turn will be used in the proof of the not quasi-normal case.
Proposition 2.9. Let Ω ⊂ C be a domain and let F be a not quasi-normal family of holomorphic functions Ω → C. Then there exists a sequence (f n ) ⊂ F and an infinite subset Q = (x j ) j≥1 ⊂ Ω such that no subsequence of (f n ) converges uniformly in any neighborhood of any x j .
Lemma 2.10. Let Ω ⊂ C be a domain and let F be a not quasi-normal family of holomorphic functions Ω → C. Then there exist a sequence (f n ) in F with the following property: for every subsequence (f n k ), there exists an infinite set E(f n k ) ⊂ Ω such that (f n k ) is not normal in any neighborhood of a point in E(f n k ).
Proof. Assume F is not quasi-normal. Then there exists a sequence (f n ) in F such that for any finite set L ⊂ Ω and every subsequence (f n k ) of (f n ), (f n k ) does not converge uniformly on compact subsets in Ω \ L. For every subsequence (f n k ), define E(f n k ) as the set of all points x in Ω such that the sequence (f n k ) is not normal in any neighborhood of x. We just need to prove that E(f n k ) is not a finite set. If by contradiction E(f n k ) is a finite set, then for all points y ∈ Ω \ E(f n k ), the sequence (f n k ) is locally normal around y. Since normality is a local property, it follows that (f n k ) is normal on Ω \ E(f n k ), and thus we can extract a subsequence of (f n k ) converging on Ω \ E(f n k ), which is a contradiction.
Lemma 2.11. Let Ω ⊂ C be a domain and let x ∈ Ω. If a sequence of holomorphic functions (f n : Ω → C) is not normal in any neighborhood of x, then we can extract a subsequence (f n k ) with the property that no subsequence of (f n k ) converges uniformly in any neighborhood of x.
Proof. Recall that a sequence (f_n) is normal if and only if it is equicontinuous with respect to the spherical metric on the Riemann sphere. Since (f_n) is not normal on any neighborhood of x, it follows that (f_n) is not equicontinuous in x. This means that there exists a constant ε > 0 such that for all j there exist a point x_j with |x_j − x| < 1/j and an integer n_j such that the spherical distance between f_{n_j}(x_j) and f_{n_j}(x) is larger than ε. But then the sequence (f_{n_j}) cannot have a subsequence converging uniformly in any neighborhood of x.
Proof of Proposition 2.9. Let (f n ) be the sequence given by Lemma 2.10, and E(f n ) be the associated non-normality infinite set. Choose x 1 ∈ E(f n ). By Lemma 2.11 there exists a subsequence (f n1(h) ) of (f n ) such that every subsequence of (f n1(h) ) does not converge in any neighborhood of x 1 .
Let now E(f n1(h) )) be the infinite set given by Lemma 2.10 for the subsequence (f n1(h) ). Choose x 2 ∈ E((f n1(h) )) different from x 1 . By Lemma 2.11 there exists a subsequence (f n2(h) ) such that every subsequence of (f n2(h) ) does not converge uniformly in any neighborhood of the points x 1 , x 2 . By induction we obtain an infinite set Q := (x j ) j≥1 and a family ((f n k (h) )) k≥1 of nested subsequences of (f n ) such that for all k ≥ 1 no subsequence of (f n k (h) ) converges uniformly in any neighborhood of the points x 1 , . . . , x k . The diagonal subsequence (g h := f n h (h)) gives the result.
Proof of Theorem 1.1
Let F(z, w) = (f(z) − δw, z) be a transcendental Hénon map. For n ∈ N and z ∈ C let us define f_n(z) := f(nz)/n. Observe that for each n, f and f_n are topologically conjugate via the map z → nz, so they have the same entropy. Analogously, the maps F_n(z, w) = (f_n(z) − δw, z) are topologically conjugate to F and hence have the same entropy as F.
Example. Choose a sequence (a_ℓ)_{ℓ≥1} of non-zero complex numbers with |a_{ℓ+1}/a_ℓ| → ∞, and define f(z) := ∏_{ℓ≥1}(1 − z/a_ℓ).
Since the infinite product converges for every z by choice of the a_ℓ, and since it is not a polynomial, f is a transcendental entire function. Notice that f_n(0) → 0, that the zeros of f are {a_ℓ}_{ℓ≥1}, and that the zeros of f_n are Z_n := {a_ℓ/n}_{ℓ≥1}.
Given any sequence in (f_n) we can find a subsequence (f_{n_j}) for which the sets of zeros Z_{n_j} = {a_ℓ/n_j}_{ℓ≥1} converge as n_j → ∞ to the set Z_∞, which is either {0, ∞} or {0, ∞, q} for some q ∈ C \ {0}, in terms of the Hausdorff metric on the Riemann sphere.
Indeed, if a sequence of zeros a_{ℓ_j}/n_j accumulates on a point q ≠ 0, ∞, then up to passing to a subsequence we may assume that a_{ℓ_j}/n_j → q as j → ∞. Since |a_{ℓ+1}/a_ℓ| → ∞ it follows that, as j → ∞, a_{i_j}/n_j tends to 0 whenever i_j < ℓ_j, and converges to ∞ whenever i_j > ℓ_j.
Let us work with the case Z_∞ = {0, ∞, q}. Write f_{n_j}(z) as a product of three terms as follows:
f_{n_j}(z) = [ (1/n_j) ∏_{ℓ<ℓ_j} (1 − z/(a_ℓ/n_j)) ] · (1 − z/(a_{ℓ_j}/n_j)) · ∏_{ℓ>ℓ_j} (1 − z/(a_ℓ/n_j)).
Observe that on any compact subset of C \ {0, q} the second of these terms converges uniformly to the non-zero function 1 − z/q, while the third term converges uniformly to the constant function 1. The first term diverges uniformly, proving quasi-normality. In the case Z_∞ = {0, ∞} one writes f_{n_j}(z) as a product of two terms, similarly obtaining locally uniform divergence on C \ {0}.
The proof of Theorem 1.1 is divided into two cases, with different proofs, depending on whether F := (f n ) is a quasi-normal family or not. As mentioned in the introduction, the outline of our proof follows Wendt's proof [Wen02,Wen05b,Wen05a] for the one-dimensional case.
3.1. Quasinormal Case. In this subsection we prove the following result:
Theorem 3.2. Let F : (z, w) → (f(z) − δw, z) be a transcendental Hénon map, and suppose that the transcendental functions defined by f_n(z) = f(nz)/n form a quasi-normal family. Then F has infinite entropy.
For any r ∈ R let us denote by D r the Euclidean disk of radius r centered at 0. Let f be entire transcendental and let F be the family of rescalings f n (z) = f (nz)/n. Assume that F is quasi-normal. Then there is a subsequence (f n k ) of (f n ) and a finite set Q such that (f n k ) converges uniformly on compact sets of C \ Q.
Lemma 3.3. The set Q contains the origin, and there exists 0 < s < 1 such that f n k → ∞ uniformly on compact subsets of D s \ {0}.
Proof. Observe first that for every r > 0, any subsequence of (f_n) is unbounded in the circle ∂D_r. Indeed, for any n we have that f_n(D_{1/√n}) = f(D_{√n})/n, and the maximum modulus of a transcendental function on a disk of radius r grows faster than r^2.
We claim that (f n k ) does not converge uniformly in a neighborhood of 0, so in particular, 0 ∈ Q. Indeed, f n k (0) = f (0)/n k → 0 as n k → ∞, while (f n k ) is unbounded in any neighborhood of 0. Since Q is finite we can find s such that f n k → g uniformly on compact subsets of D s \ {0}, with g : D s \ {0} → C or g = ∞. Since (f n k ) is unbounded in any circle ∂D r we obtain g = ∞.
Proposition 3.4. Let s, (f n k ) be as in Lemma 3.3. Let 0 < r < s, and let R > 0 and m ∈ N. Then there exists k 0 ∈ N such that for k > k 0 we have (1) |f n k (z)| > R for every z ∈ ∂D r , (2) the winding number of the curve f n k (∂D r ) around the origin is larger than or equal to m. Proof.
(1) is an immediate consequence of Lemma 3.3. We now prove (2). Let a ∈ D_R be a non-exceptional value of f; since f is transcendental, a has infinitely many preimages under f, so we may choose a bounded connected open set W ⊂ C containing 0 and at least m preimages of a, and we set M := sup_{z∈W} |f(z)|. Let k_0 be large enough such that for all k ≥ k_0 we have M/n_k < R, and such that (1) holds. Let k ≥ k_0. Denote by W/n_k the set {z/n_k : z ∈ W}. Then if z ∈ W/n_k we have n_k z ∈ W and hence |f_{n_k}(z)| = |f(n_k z)|/n_k ≤ M/n_k < R. Thus W/n_k ⊂ f_{n_k}^{-1}(D_R). Notice that 0 ∈ W/n_k. It follows by (1) that W/n_k ⊂ D_r. We now claim that W/n_k contains at least m preimages of a_k := a/n_k under f_{n_k}. Indeed W contains at least m preimages of a under f, and for any such preimage z we have z/n_k ∈ W/n_k and f_{n_k}(z/n_k) = f(z)/n_k = a_k. Since a_k ∈ D_R and |f_{n_k}| > R on ∂D_r, the result follows by the argument principle.
Let ∆ = D_{r_1} × D_{r_2} be a bidisk, and let ∂_v∆, ∂_h∆ denote its vertical and horizontal boundary respectively. The following definition of Hénon-like maps is Definition 2.1 in [Duj04].
Definition 3.5 (Hénon-like map). A holomorphic map H defined in a neighborhood of the closure of ∆ is called Hénon-like on ∆ if (1) H(∆) ∩ ∆ ≠ ∅, (2) H(∂_v∆) does not intersect the closure of ∆, and (3) H(∆) does not intersect ∂_h∆.
Let π_z, π_w : C^2 → C denote the projection to the z and to the w axis respectively.
Definition 3.6 (Degree of a Hénon-like map). Let H be a Hénon-like map defined in a neighborhood of ∆ = D_{r_1} × D_{r_2} and let L_h be any horizontal line intersecting ∆. Consider the holomorphic function
(3.1) π_z ∘ H : H^{-1}(∆) ∩ ∆ ∩ L_h → D_{r_1}.
The properties in Definition 3.5 imply that the function in (3.1) is proper, and thus a branched covering. By Proposition 2.3 in [Duj04], its degree is independent of the chosen horizontal line. This integer is the degree of the Hénon-like map H.
The following theorem is proved in [Duj04, Theorem 3.1].
Theorem 3.7. Let H be a Hénon-like map of degree d. The topological entropy of H is log d.
Lemma 3.8. Let f be a holomorphic function defined in a neighborhood of D r , let δ = 0, and suppose that |f (z)| > (|δ| + 1) · r whenever |z| = r. Assume that the winding number of the curve f (∂D r ) around the origin is d ≥ 1. Then the map F : (z, w) → (f (z) − δw, z) is a Hénon-like map of degree d on ∆ = D r × D r .
Proof. We check the three properties in Definition 3.5. The estimate |f (z)| > (|δ| + 1) · r gives that |f (z) − δw| > r for all (z, w) ∈ ∂ v ∆, which implies property (2). The formula for F therefore implies that F (∆) cannot intersect ∂ h ∆, giving property (3). Since f (∂D r ) winds around 0 exactly d ≥ 1 times, 0 has at least one preimage a ∈ D r . Hence F (a, 0) = (0, a) ∈ ∆ which gives Property (1). We now show that F has degree d on ∆. By Definition 3.6 it is enough to show that 0 ∈ D r has d preimages counted with multiplicity in F −1 (∆) ∩ ∆ ∩ L 0 under π z • F , where L 0 is the horizontal line passing through 0. It is easy to see that these points coincide with the preimages in D r of the origin under the function f , and the result follows by the argument principle since the curve f (∂D r ) winds d times around 0.
Proof of Theorem 3.2. Recall that F n (z, w) := (f n (z) − δw, z), and that F n is topologically conjugate to F for all n ≥ 0.
Fix m ∈ N. Let s, (f n k ) be as in Lemma 3.3 and fix r < s, R > (|δ| + 1)r. Let k 0 be given by Proposition 3.4. Then, if k ≥ k 0 , it follows by Lemma 3.8 that F n k is Hénon-like of degree at least m on the bidisk D r × D r . By Theorem 3.7 we have that the entropy of F n k is larger than or equal to log m, and by topological invariance the same holds for the map F .
3.2. Non Quasinormal Case. We will now prove the following: Theorem 3.9. Let F : (z, w) → (f (z) − δw, z) be a transcendental Hénon map, and suppose that the transcendental functions defined by f n (z) = f (nz)/n do not form a quasi-normal family. Then F has infinite entropy.
3.2.1. Proof of Theorem 3.9. Assume that the family (f_n) is not quasi-normal. Let (f_{n_h}) be the subsequence of (f_n) given by Proposition 2.9 and let Q = (x_j)_{j≥1} be the associated infinite set. Fix k ≥ 1. Let R > 0 be such that the closures of the disks D_R(x_j), for j = 1, . . . , k, are pairwise disjoint. Next define 0 < r < R such that |δ|r < R − r. Recall that no subsequence of (f_{n_h}) is normal in any of the k disks D_r(x_j), j = 1, . . . , k.
Lemma 3.10. For i, ℓ ∈ {1, . . . , k} and for a map f_{n_h}, let J(i, ℓ) denote the set of indices j ∈ {1, . . . , k} such that the disk D_R(x_j + δx_ℓ) admits a biholomorphic preimage under f_{n_h} contained in D_r(x_i). Then there exists n_h such that #(J(i, ℓ)) ≥ k − 2 for every i, ℓ ∈ {1, . . . , k}.
Proof. Assume by contradiction that this is not the case. Then for all n_h there exist i, ℓ ∈ {1, . . . , k} and 3 distinct values j_1, j_2, j_3 ∈ {1, . . . , k} such that the disks D_R(x_{j_1} + δx_ℓ), D_R(x_{j_2} + δx_ℓ), D_R(x_{j_3} + δx_ℓ) do not admit biholomorphic preimages via f_{n_h} in the disk D_r(x_i). It follows that we can find a subsequence (f_{m_h}) with the following property: there exist i, ℓ ∈ {1, . . . , k} and 3 distinct values j_1, j_2, j_3 ∈ {1, . . . , k} such that for all m_h the disks D_R(x_{j_1} + δx_ℓ), D_R(x_{j_2} + δx_ℓ), D_R(x_{j_3} + δx_ℓ) do not admit biholomorphic preimages via f_{m_h} in D_r(x_i). Since the closures of these three disks are pairwise disjoint and no subsequence of (f_{n_h}) is normal on D_r(x_i), this contradicts the Ahlfors islands theorem (Corollary 2.7).
In what follows we denote the map f_{n_h} given by the previous lemma simply as f_n. We will consider the dynamics of the Hénon map F_n(z, w) := (f_n(z) − δw, z), which is linearly conjugate to F.
Definition 3.11. Let i, ℓ ∈ {1, . . . , k}. A holomorphic disk D ⊂ C^2 is called an (i, ℓ)-disk if:
• it is a holomorphic graph over D_r(x_i), that is, D can be parametrized as (z, w(z)) with w(z) holomorphic in D_r(x_i);
• π_w(D) ⊂ D_r(x_ℓ), where π_w is the projection to the second coordinate.
Lemma 3.12. Let i, ℓ ∈ {1, . . . , k}. Then for all j ∈ J(i, ℓ) and for any (i, ℓ)-disk D there exists a holomorphic disk V ⊂ D for which F_n(V) is a (j, i)-disk.
Proof. It is clear that the w-coordinates of F_n(V) are contained in D_r(x_i), regardless of the choice of V ⊂ D. We therefore merely need to find a holomorphic disk V ⊂ D such that F_n(V) is a graph over the disk D_r(x_j) in the z-coordinate. Since j ∈ J(i, ℓ) there is a biholomorphic preimage W ⊂ D_r(x_i) of D_R(x_j + δx_ℓ) under f_n. It follows that the function f_n − δx_ℓ : W → D_R(x_j) is a biholomorphism as well. Let z → (z, w(z)) be the graph parametrization of D. We claim that there exists an open subdomain W̃ ⊂ W such that f_n(z) − δw(z) : W̃ → D_r(x_j) is a biholomorphism. Once this is proved, setting V := D ∩ (W̃ × C) yields the result. Notice that, up to slightly shrinking R, we can assume that f_n − δx_ℓ extends as a biholomorphism from a neighborhood of the closure of W onto a neighborhood of the closure of D_R(x_j). Moreover, for z ∈ W we have |δw(z) − δx_ℓ| ≤ |δ|r < R − r by assumption, hence by Rouché's Theorem it follows that for every u ∈ D_r(x_j) there exists exactly one point z ∈ W such that f_n(z) − δw(z) = u. Setting W̃ := (f_n − δw)^{-1}(D_r(x_j)) we have that f_n − δw : W̃ → D_r(x_j) is a biholomorphism.
We conclude the proof of the non quasi-normal case by showing that Lemma 3.12 implies that the topological entropy of F_n is at least log(k − 2).
Define the compact subsets of C^2
K := ∪_{i,ℓ=1}^{k} D̄_r(x_i) × D̄_r(x_ℓ) and L := {P ∈ K : F_n^m(P) ∈ K for all m ≥ 0}.
Clearly L is forward F_n-invariant. We say that a sequence (i_0, i_1, i_2, . . .) ∈ {1, . . . , k}^N is admissible if i_{m+1} ∈ J(i_m, i_{m−1}) for every m ≥ 1 and, similarly, a finite word is admissible if it is the start of an infinite admissible sequence. Clearly, for every admissible sequence (i_0, i_1, i_2, . . .), there exists a point P ∈ L for which F_n^m(P) lies in an (i_{m+1}, i_m)-disk for all m ≥ 0. Moreover, for all m ≥ 2 there are at least k^2 · (k − 2)^{m−2} admissible words of length m.
Thus L contains at least (k − 2)^m points with distinct symbolic representations, which are therefore (m, ε)-separated as soon as ε < min_{i≠ℓ} dist(D_r(x_i), D_r(x_ℓ)).
This proves the claim that F n : L → L has topological entropy at least log(k −2), which in turn completes the proof of Theorem 3.9.
Periodic cycles of arbitrary order
We continue to consider a transcendental Hénon map F of the form F(z, w) = (f(z) − δw, z). In the previous paper [ABFP20] we showed that when δ = −1 the map F may not have any fixed point or periodic orbits of period 2, but if F has neither, then it must have periodic points of order 4. The proof of this fact relied upon algebraic manipulations of the equation F^4(z, w) = (z, w). Using the techniques presented in the previous sections we can now obtain the following description.
Theorem 4.1. A transcendental Hénon map has infinitely many periodic cycles of any order N ≥ 3.
Proof. We consider again the family of rescaled transcendental functions (f_n). We have shown that if this sequence is quasi-normal then appropriate restrictions of the Hénon map F act as Hénon-like maps of larger and larger degrees. It was proved by Dujardin in [Duj04], Proposition 5.7, that a Hénon-like map of degree d has exactly d^N points which are fixed under F^N, counted with multiplicity. It follows that if the family (f_n) is quasi-normal then F has infinitely many periodic cycles of any period. Let us therefore assume that the family (f_n) is not quasi-normal and fix N ≥ 3. Let k > 3N − 1, and let f_{n_h} be the function given by Lemma 3.10. Since the subsequence (n_h) plays no further role in this proof, we will just write n instead of n_h, and write as before F_n := (f_n(z) − δw, z). Consider the (i, ℓ)-disks constructed in Definition 3.11, for i, ℓ = 1, . . . , k. Recall from Lemma 3.12 that for any i, ℓ = 1, . . . , k there exists a subset J(i, ℓ) ⊂ {1, . . . , k} with #(J(i, ℓ)) ≥ k − 2 such that for any j ∈ J(i, ℓ), any (i, ℓ)-disk D_{i,ℓ} contains a holomorphic disk V which F_n maps onto a (j, i)-disk. We first claim that the number of N-tuples (i_0, i_1, . . . , i_{N−1}) with distinct entries satisfying i_{j+1} ∈ J(i_j, i_{j−1}) for all j (where the indices are taken modulo N) tends to infinity as k → ∞. Indeed, the number of N-tuples whose entries are all distinct over k symbols is k · (k − 1) · . . . · (k − N + 1); on the other hand, by Lemma 3.12, the number of such N-tuples which violate the condition i_{j+1} ∈ J(i_j, i_{j−1}) in at least one index is at most 2N · k · (k − 1) · . . . · (k − N + 2). Hence the number of admissible sequences is at least k · (k − 1) · . . . · (k − N + 2) · (k − 3N + 1) → ∞ as k → ∞. Notice that this counting argument breaks down for N = 2, in agreement with the fact that there exist transcendental Hénon maps without periodic points of period 2.
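As an illustrative check of this count (added here for concreteness; it is not part of the original argument), take N = 3 and k = 10: there are 10 · 9 · 8 = 720 triples with pairwise distinct entries, at most 2N · k · (k − 1) = 6 · 90 = 540 of them violate the condition i_{j+1} ∈ J(i_j, i_{j−1}) at some index, and the lower bound reads
k · (k − 1) · (k − 3N + 1) = 10 · 9 · 2 = 180 = 720 − 540,
which already grows without bound as k increases.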
We will now argue that corresponding to any sequence {(i 0 , i 1 ), . . . , (i N −1 , i 0 )} of length N which is periodic in the sense discussed above we can find a periodic cycle of minimal period N .
Observe that in the proof of Lemma 3.12 the holomorphic disk V ⊂ D is of the form D ∩ (W̃ × C), where W̃ ⊂ W depends on D, but W is independent of the chosen (i, ℓ)-disk D. Indeed, it is by construction a simply connected domain W ⊂ D_r(x_i) that is mapped univalently onto D_R(x_j + δx_ℓ) by the function f_n, hence it depends only on the three indices i, j, ℓ of the domain, the (i, ℓ)-disk, and the codomain, the (j, i)-disk.
It follows that having chosen the domain W , the intersection of the bidisk W × D r (x ) with the preimage F −1 n (D r (x j ) × D r (x i )) is connected; a union of straight horizontal disks V w ⊂ W × {w} for w ∈ D r (x ).
Let us now consider the periodic sequence (i_0, i_1, . . . , i_{N−1}) discussed earlier, where each i_{j+1} ∈ J(i_j, i_{j−1}). For each triple (i_{j−1}, i_j, i_{j+1}) we select a disk W_j ⊂ D_r(x_{i_j}) as above; for j ≥ N we define these sets inductively by W_j = W_{j−N}, obtaining a periodic sequence. We will consider the nested sets obtained by pulling back the bidisks W_j × D_r(x_{i_{j−1}}) under the iterates of F_n, and show that their intersection is a unique holomorphic disk which is a holomorphic graph and which is actually the local stable manifold of a saddle periodic point. Define Γ to be the compact and forward invariant set of points whose forward orbit under F_n remains in the corresponding bidisks W_j × D_r(x_{i_{j−1}}). Let D be the intersection of an (i_j, i_{j−1})-disk with W_j × D_r(x_{i_{j−1}}). We know that the image F_n(D) contains a holomorphic graph over a disk centered at x_{i_{j+1}} which compactly contains D_r(x_{i_{j+1}}) ⊃ W_{j+1}. So the modulus of the annulus D \ F_n^{-1}(W_{j+1} × D_r(x_{i_j})) is bounded away from zero. Applying this observation repeatedly and using the Grötzsch Inequality we have that D ∩ Γ consists of a single point.
Applying this argument to the trivial foliation of W_j × D_r(x_{i_{j−1}}) consisting of disks D of the form {w = c} we immediately get that Γ ∩ (W_j × D_r(x_{i_{j−1}})) is a graph z → (ϕ(z), z) for some function ϕ : D_r(x_{i_{j−1}}) → W_j. We claim that the function ϕ is actually holomorphic. Recall that in the proof of Lemma 3.12 we can choose the ratio between the radii r and R as large as we wish. The function f_n maps W_j univalently onto D_R(x_{i_{j+1}} + δx_{i_{j−1}}). By applying Cauchy estimates to f_n^{-1} from D_R(x_{i_{j+1}} + δx_{i_{j−1}}) into D_r(x_{i_j}) it follows that |f_n'(z)| can be made arbitrarily large on the subset of W_j that is mapped by f_n into a disk of radius comparable to r centered at x_{i_{j+1}} + δx_{i_{j−1}}. It follows that we may assume that the derivative |f_n'| is arbitrarily large on (W_j × D_r(x_{i_{j−1}})) ∩ (F_n^{-1}(W_{j+1} × D_r(x_{i_j}))) for every j.
Recall that dF_n(z, w)(v_1, v_2) = (f_n'(z)v_1 − δv_2, v_1), hence when |f_n'(z)| is sufficiently large the horizontal cone field C_h containing the tangent vectors (v_1, v_2) with |v_2| ≤ 2|v_1| is forward invariant. Let C_v be the vertical cone field, given by the pullback under dF_n of the constant vertical cone field defined by |v_2| ≥ 2|v_1|. It follows that C_v is backwards invariant for any point in F_n(W_j × D_r(x_{i_{j−1}})), and moreover, any non-zero tangent vector in C_v is contracted by some uniform factor, while vectors in C_h are uniformly expanded. Thus Γ is a hyperbolic forward invariant set by the cone criterion, and through every point (z, w) ∈ Γ there exists a stable manifold W^s(z, w). It immediately follows that Γ ∩ (W_j × D_r(x_{i_{j−1}})) has to coincide with a local stable manifold, and thus the function ϕ is actually holomorphic. By the forward invariance of Γ we know that the holomorphic disk Γ ∩ (W_j × D_r(x_{i_{j−1}})) is mapped into itself by F_n^N. The existence of a saddle periodic orbit of period N follows. Since the maps F_n are all conjugate to F it follows that F has infinitely many periodic cycles of any order N ≥ 3.
For polynomial Hénon maps saddle periodic points form a dense subset of the Julia set J = J + ∩ J − . While the periodic points constructed above in the not quasi-normal setting are all saddle points, it is unclear to the authors whether there also exist (infinitely many) saddle points of any order N ≥ 3 in the quasi-normal case.
Arbitrary Growth of entropy
In [Duj04], Dujardin constructed transcendental Hénon maps with infinite entropy by letting f (z) be an entire function which, on suitable disks D i , is well approximated by polynomials of some degree d i → ∞, to deduce that the corresponding Hénon map is Hénon-like on the bidiscs D i × D i of the same degree d i . It follows that the Hénon map has topological entropy at least log d i → ∞.
The rate of the growth of entropy is then given by the relation between d i and the radii of the disks D i .
In this section we show that the entropy of lacunary power series, i.e. power series with mostly vanishing coefficients, can grow at any prescribed rate. We will first prove the statement for entire functions in one variable:
Theorem 5.1. Let h : (0, ∞) → (0, ∞) be a continuous increasing function with h(r) → ∞ as r → ∞. Then there exist a transcendental entire function f and a sequence of radii r_j → ∞ such that the topological entropy of f on D_{r_j} equals h(r_j) for every j.
Lemma 5.2. Let P(z) := az^n with a ≠ 0 and n ≥ 2. Let r > 0, set R := |a|r^n, and assume that R/2 > r. Let g : D_r → C be a holomorphic function such that |g(z)| < R/2^n for all z ∈ D_r. Then the function f := P + g, restricted as f : D_r ∩ f^{-1}(D_{R/2}) → D_{R/2}, is a polynomial-like map of degree n.
Proof. The function f satisfies f (∂D r ) ∩ D R/2 = ∅ and by Rouché's Theorem the winding number of the curve f (∂D r ) around the origin is n. It follows that f : D r ∩ f −1 (D R/2 ) → D R/2 is a proper map of degree n, and by the maximum principle every connected component of its domain is simply connected.
To prove that it is polynomial-like it suffices to show that D_r ∩ f^{-1}(D_{R/2}) is connected. Notice that |f| > 0 for |z| > r/2, hence all preimages of 0 under f are contained in D_{r/2}, and hence all connected components of f^{-1}(D_{R/2}) have to intersect D_{r/2}. On the other hand, D_{r/2} ⊂ f^{-1}(D_{R/2}), hence there is only one connected component of f^{-1}(D_{R/2}) in D_r as claimed.
Recall that the entropy of a polynomial-like map of degree n is log n. This follows from the fact that such maps are topologically conjugate (in fact, hybrid conjugate) to a true polynomial of degree n by the Douady–Hubbard Straightening Theorem [DH85] in a neighborhood of their Julia set, or one can prove it directly as for polynomials following for example [Lju83].
Proof of Theorem 5.1. We construct f as a lacunary series f(z) = ∑_{i=1}^{∞} a_i z^{n_i} with (a_i) positive real numbers. Define g_j := ∑_{i≠j} a_i z^{n_i}. By choosing a_i, r_i, n_i appropriately we will ensure that for each j the monomial a_j z^{n_j} = f − g_j is the leading term on the circle of radius r_j, in the precise way needed to apply Lemma 5.2.
We will construct the series inductively, along with a sequence of radii (r_j) such that for all integers j ≥ 1 we have
(5.1) h(r_j) = log n_j;
(5.2) |g_j(z)| ≤ r_j/2^{n_j} for all z ∈ D_{r_j};
(5.3) a_j r_j^{n_j} > 2 r_j;
(5.4) a_j ≤ 2^{−(j+1)j/2}.
By (5.4) the series converges to an entire function f. By (5.2), (5.3), and Lemma 5.2 we immediately obtain that the topological entropy of f on D_{r_j} equals log n_j, which by (5.1) is equal to h(r_j).
Corollary 5.3. Let h, f be as in Theorem 5.1. Then the topological entropy of F (z, w) = (f (z) − δw, z) on D rj × D rj equals h(r j ) for all j sufficiently large.
Proof. In the proof of Theorem 5.1 we obtained a sequence of disks D_{r_j} with r_j → ∞ such that |f(z)| > (|δ| + 1) · r_j for |z| = r_j and j sufficiently large, and that f(z) winds n_j times around the origin as z runs around the circle ∂D_{r_j}. It follows from Lemma 3.8 that the restriction of F to the bidisk D_{r_j} × D_{r_j} is a Hénon-like map of degree n_j, which by Theorem 3.7 implies that the topological entropy on D_{r_j} × D_{r_j} equals h(r_j) for all j sufficiently large.
|
2021-02-11T02:15:43.077Z
|
2021-02-10T00:00:00.000
|
{
"year": 2021,
"sha1": "5b41b1f17810c388f1954a30c7b97e67b04a9d4a",
"oa_license": null,
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=94fc09b0-b514-49e3-9dc6-d70dd33bbcd2",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "5b41b1f17810c388f1954a30c7b97e67b04a9d4a",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
150028277
|
pes2o/s2orc
|
v3-fos-license
|
Extraordinary Motivation or a High Sense of Personal Agency : The Role of Self-Efficacy in the Directed Motivational Currents Theory
The purpose of this article is to explore a possible correlation between the concept of self-efficacy and the occurrence of a highly intense motivational surge which Dörnyei refers to as a Directed Motivational Current. To this end, the first two parts are of purely theoretical nature and aim to familiarise the reader with the background information behind our constructs. The quantitative part of our study supports the existence of a strong correlation between our primary variables. The final section is devoted to the qualitative analysis and discussion of our research project and has revealed a few noteworthy implications, such as the importance of a facilitative structure and a tendency for the value of self-efficacy to increase during a DMC experience.
The concept of self-efficacy
Before we commence with a more in-depth analysis of theoretical underpinnings behind the Directed Motivation Current Theory, let us first focus on another concept which is of significant relevance for the sake of this paper. The notion of personal agency was devised by a major motivational scholar Albert Bandura in the 1960s as 1 a central point to the Self-Efficacy Theory. Due to its multidimensional nature, the construct has had a considerable impact on many diverse fields, including sport, education, and addiction treatment. Unlike other traditional psychological conceptions, self-efficacy is hypothesised to vary depending on a domain of functioning and circumstances surrounding the occurrence of our behaviour. The primary assumption behind the theory coined by Bandura implicates that our perception of self-efficacy conditions our actions, behaviour, and motivation. Grounded in the Social Learning Theory, Bandura (1997, 3) shaped and popularised the understanding of self-efficacy as "beliefs in one's capabilities to organize and execute the courses of action required to produce given attainments." For this reason, it might be said that the term refers to the way human beings judge the likelihood of success in situations they perceive as challenging, usually within contexts which demand a considerable amount of effort. The feeling of control over difficult events has been found to have positive effects on emotional well-being, social interaction, and cognitive performance. Following Lenz et al. (2002, 35), "a firm belief in the possibility of accomplishing a task can trigger a proper level of motivation." More importantly, the scope of self-efficacy somewhat exceeds the field of psychology, also overlapping on the areas fundamental for the sake of this paper, such as foreign language learning. Schunk (1985, 215) puts forward the view that "our sense of personal agency may be a better predictor of success than any prior accomplishment, skills or knowledge." That is, self-efficacy is believed to govern our perseverance in case of obstacles and means we are willing to exert to obtain the desired goals. On these grounds, personal beliefs regulate whether an individual will proceed with a certain behaviour, how much effort will be devoted, and for how long a person is likely to sustain the devotion to the cause, especially in the event of adversities. The commonly held opinion seems to be that people who place a lot of confidence in their own capacities are also more determined and, as a natural consequence, the expectancy of favourable outcomes guarantees activation of a sufficient effort. On the other end of the spectrum, a low sense of personal agency results in hesitation which is frequently followed by a deficient incentive for showing a greater dedication to the cause. Forthwith, such individuals are also inclined to be discouraged by even minor hardships and are more likely to prematurely abandon a task.
The level of perceived self-efficacy, which influences our apprehension of the world, also plays a fundamental role in pursuing behavioural strategies. To Bandura's mind (1997,2), "people and their affective states are based more on what they believe than on what is objectively true." Following this logic, human behaviour is conditioned by our own perception of our capabilities rather than the skills we actually possess. Henceforth, instead of exhibiting a constant sensation of threat, those who are convinced of the value of their skills and knowledge regard life adversities as areas for possible improvement. Even more interestingly, initial difficulties rarely have a negative effect on the perception of personal agency. In such cases, the engagement is additionally fostered by a genuine interest and a sense of challenge, encouraging an individual to pursue an objective further and to focus on the elimination of spotted shortcomings. Quite the opposite approach can be observed in the case of individuals for whom the sense of personal capacities is lacking. Bandura (ibid.) also claims "that such people have lower aspirations and are not likely to be dedicated to the cause." Challenging situations prompt a feeling of an inevitable failure, to an extent that general stress levels are subject to a sharp increase. Instead of aiming attention at how to efficiently deal with an issue, human beings who characterised their personal agency as low are more apt to concentrate on their own lack of skills and competency. Low self-efficacy beliefs often lead to a firm conviction that it is better to withdraw from a task so that the additional stress can be avoided, rather than to devote a more significant amount of effort to chase our objectives.
Core features of the Directed Motivational Currents Theory
The concept of motivation, along with various theories and their interpretations that emerged along the way, has been propelling a vigorous debate for several decades. As one may expect, the construct has had a tremendous impact on the field of Second Language Acquisition, dominating the research practice within the field in question. This practice would only seem logical as almost everyone has, at a certain moment in their life, experienced a period of heightened motivation, allowing one to successfully pursue their goals and targets. Upon reviewing the theoretical background behind our concept, however, one can easily notice a rather peculiar trend. That is, as mentioned by Henry et al. (2015, 330),"the usual practice of motivation research has been to examine motivation in terms of generalizable factors, where the focus is directed to single constructs and attention is paid to between-group differences." Although much research was devoted to the well-renowned psychological notions as such, each of them received, what appears to be, a rather momentary attention. In similar fashion, Dörnyei et al. (2016, 13) claim that "no mainstream motivation theory has yet attempted to link goal-related dispositions with specific behavioural occurrences over time." Therefore, the research into how to combine behavioural patterns and psychological constructs affecting our pursuits to the benefit of overall motivation is currently scarce.
In an attempt to address this omission, Dörnyei and his colleagues coined a novel theory which elaborates on the phenomenon described as an intense motivational surge, allowing an individual to initiate and, then, maintain a long-term motivated behaviour. As outlined by Dörnyei et al. (2014, 10), a Directed Motivational Current 2 is a "conceptual framework which depicts unique periods of intensive motivational involvement both in pursuit of and fuelled by a highly valued goal." Combining a definite vision of the desired self with a clear action structure, a DMC enables human beings to engage in a series of tasks providing a great sense of enjoyment realised in accomplishing highly relevant goals. Even more importantly, Dörnyei et al. (2014, 98) assert that "the progression of such a motivational drive is further scaffolded by sets of behavioural routines, for instance, regular amounts of time spent on a task." Proximal subgoals are also of utmost importance here, as they allow an individual to sustain the flow of energy, providing a sense of satisfaction when one of the short-term targets is achieved. Take, for example, a person approaching a deadline for a piece of writing in the academic context. The student operating within the DMC zone would focus all his or her efforts on submitting a piece of work matching the best of its own abilities, with the vision of being offered a dream job fuelling the motivational structure. Prior to the deadline, proceedings of such a person become highly intensified and focused, to an extent where the vision of fulfilling one's dreams becomes the most significant part of life, whereas the other daily pursuits are deemed somewhat irrelevant.
Considering the subject of this paper, we shall now focus on drawing similarities between the core components of the Directed Motivational Current Theory and the concept of self-efficacy. According to Dörnyei et al. (2014, 99), "the most salient feature of a DMC is its directional nature, as such a powerful motivational drive cannot happen without a well-defined target or outcome that can provide cohesion for one's efforts and help focus energy on final goal attainment." Possessing such a clearly defined goal bears tremendous importance for sustaining a proper level of motivation, channelling our actions towards activities which favour the accomplishment of the ultimate objective. Contrary to the random cases of great motivation, when human beings perform tasks for the sake of sheer enjoyment, a DMC is distinguished by the very straightforwardness of its nature. According to Dörnyei et al. (ibid.), "a person operating within a DMC has a directional desire to reach a certain future state." More significantly though, the experience also includes a strong sensory element so that an individual is capable of visualising his or her own condition and emotions once the goal is achieved. In a similar fashion, Bandura (1994, 73) asserts that "a strong sense of efficacy enhances human accomplishment, making people more eager to approach demanding tasks." This efficacious outlook favours genuine interest and proper engagement which is extremely relevant, especially in case of initial hardships. In both cases, however, an activity becomes an integral part of one's personality, evolving from a random pursuit into a constituent of one's concept of self.
The second distinguishing feature of a DMC is its noticeable facilitative structure. This being said, a clearly tailored path of a current is a prime determiner of whether our endeavour will result in the goal attainment. Following Henry et al. (2015, 331) such a structure "includes a facilitative element, granting an individual with progress checks maintaining the momentum of the current." Each part of the framework functions as an incentive on its own, propelling further efforts of an individual. Significantly, each and every DMC must be consciously and explicitly inaugurated. Hence, a structure has a clear starting point, where the combination of both cognitive and contextual factors initiates a stream of motivational energy. As we may read in Dörnyei et al. (2014, 100), once this launch has occurred, "the continued motivated behaviour is sustained through Pobrane z czasopisma New Horizons in English Studies http://newhorizons.umcs.pl Data: 17/03/2020 02:06:18 U M C S Extraordinary Motivation or a High Sense of Personal Agency… the inclusion of a number of regular subgoals, serving both as proxy targets and as criteria to evaluate progress." Needless to say, such progress checks provide an individual with affirmative feedback being an extreme aid in maintaining one's motivation throughout the project. Dörnyei et al. (ibid.) also claim that these "subgoals divide long-term progression into smaller chunks, which fuel further actions." Likewise, feedback is also a crucial factor in building self-efficacy beliefs. Bandura (1994, 75) points out that "persuasive boosts in perceived self-efficacy lead people to try hard enough to succeed, allowing human beings to mobilise greater effort when problems arise." On these grounds, properly adapted subgoals can significantly foster our engagement by creating a sense of progress and, by the same token, making the final goal more attainable.
Positive emotionality is considered to be the final element required for a DMC to emerge. As previously mentioned, within the Directed Motivational Current Theory, a goal is highly personalised to an extent that it becomes an integral part of our behavioural routine. According to Henry et al. (2015, 332), this experience "can be understood as actualising one's potential, generating a feeling of intense personal pleasure." This situation may be the reason why positive emotionality in a motivational framework is related to the concept of eudaimonic well-being, as opposed to the satisfaction human beings experience when an isolated goal is achieved. As Waterman (2008, 236) elaborates "eudaimonia is a constellation of subjective experience including feelings of rightness and centeredness in one's actions, identity, and competence." Such an experience evokes an intense sensation of personal development so that some activities, which were previously considered boring and irrelevant, can suddenly become a source of great joy, provided they are a part of a DMC structure. Dörnyei et al. (2016, 101) explain that such a shift results in the fact that "pursuits which were once tedious evolve into endeavours being conductive to the accomplishment of the higher purpose." Notably, eudaimonic experience creates a reciprocal effect within a Directed Motivational Current. Henry et al. (2015, 332) indicate that "the enjoyment projected from the overall emotional loading of the target vision permeates each step along the way, even including engagement in activities that, outside the stream, could seem boring." Along similar lines, positive emotional states are understood to be one of the key features for the successful self-efficacy building. Not only does it govern whether an individual will engage in a task but also affects our judgement of perceived personal agency. Whereas positive emotionality towards a task enhances our self-efficacy, despondent mood was found to significantly diminish the quality of our proceedings.
Research
In the following section, we aim to analyse and present the outcomes of the research project conducted on adult learners of English as a foreign language. More specifically, the primary purport is to investigate the correlation between the concept of self-efficacy and the occurrence of a period of highly motivated behaviour.
Statement of the problem
Although Directed Motivational Current is a recent conjecture, the theoretical underpinnings of the theory are rather well-investigated. On the other end of the spectrum, the research into how individual psychological constructs, such as anxiety and self-efficacy, can leverage the occurrence of the motivational surge is currently lacking. For this reason, our project is meant to first identify the potential DMC cases and, then, examine the level of personal agency displayed by our subjects. Henceforth, we aim to shed some light on the following research questions: 1) Is there any correlation between a high value of self-efficacy and the frequency at which a highly intense motivational surge occurs? 2) Is it possible to assume that operating within a DMC structure influences the sense of general self-efficacy?
Research Participants
The data required for the purpose of this study was drawn from several universities in Rzeszow, offering both public and private education. At the very beginning, the subjects were assured that their participation was to be voluntary and that it would not affect their final grade in any way. The research body consisted of 212 adult learners, all of them currently pursuing a bachelor's degree in English Studies. Of the total population, 114 participants were female and 108 male, with an age range between 20 and 42 (the majority of our subjects indicated their belonging to the 19-30 bracket -69.34%).
Research instruments
As our research intended to measure two distinct phenomena, it was extremely important to ensure the credibility of our project. To this end, we decided to apply the principle of data triangulation, administering two questionnaires for the purpose of evidence collection. As suggested by Cohen and Manion (2000, 254) such a method "explains more fully the richness and complexity of human behaviour by studying it from more than one standpoint." Our subjects were requested to state their opinions and views so that proper scientific results could be obtained. Each questionnaire included straightforward instructions, advising the learners to mark an answer they deem the most appropriate. Although the tools were designed to stand by themselves, these guidelines were also provided verbally prior to the research. The time limit was not specified; however, on average, the participants required no less than 20 minutes to complete both questionnaires.
In the initial stage of our study, the subjects were requested to complete the General Self-Efficacy Scale stemmed from Schwarzer and Jerusalem (1995). The tool itself consists of 12 items, with the primary aim of measuring coping competence and the degree of resourcefulness. The amount of confidence learners place in their innate ability to address difficulties encountered in daily life and the degree of effort exerted to resolve such Pobrane z czasopisma New Horizons in English Studies http://newhorizons.umcs.pl Data: 17/03/2020 02:06:18 U M C S Extraordinary Motivation or a High Sense of Personal Agency… issues was also taken into consideration. Following the data collection, all questionnaires were scored and evaluated, starting from 1 (not at all true) to 4 (exactly true). In order to ensure successful correlation at further stages, the results were rounded to specify an even number.
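A minimal scoring sketch is shown below; the input file and item column names are hypothetical placeholders, and the 1-4 scoring and rounding follow the procedure described above.

```python
# A scoring sketch under stated assumptions: responses are stored as integers 1-4
# per item, with hypothetical column names 'gse_1' ... 'gse_12'.
import pandas as pd

responses = pd.read_csv("gse_responses.csv")          # assumed data file
gse_items = [f"gse_{i}" for i in range(1, 13)]        # 12 items, scored 1-4

responses["gse_total"] = responses[gse_items].sum(axis=1)
responses["gse_mean"] = responses[gse_items].mean(axis=1).round()  # rounded, as in the study
```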
The next instrument utilised for the purpose of the study was created to investigate the possible cases of people experiencing a DMC phenomenon, regardless of a setting. The DMC Disposition Survey derived from Muir (2016) is more complex in its structure compared to the previous questionnaire, consisting of both multiple choice and open questions. Regarding the former, a similar procedure to the preceding tool was applied: that is the answers were scored and rounded up for the sake of correlation. This being said, the lowest possible answer was 1 (strongly disagree), whereas 5 (strongly agree) marked the highest value possible. The latter, on the other hand, was extremely vital for exploring individual DMC cases in-depth. In general, the tool was meant to examine whether our subjects have ever witnessed an intense motivational surge, the duration of such an occurrence, the attitude towards the experience in question and, finally, their eagerness to encounter such a circumstance in the future.
Data analysis and discussion
Bearing in mind that a true Directed Motivational Current is a rather rare phenomenon, we first had to scrutinise our questionnaires in terms of whether the motivational experience described by a student fulfils the core theoretical underpinnings so that it can be classified as an intense motivational drive. Specifically, we have taken into account such features as directionality, the presence of a facilitative structure, and the emotional loading experienced during the process. On these grounds, 88 cases were identified as bearing importance for our further analysis, constituting 41.5% of the entire research body. In an effort to address the first research question, that is, to examine the possible relationship between the occurrence of a DMC phenomenon and the general level of personal agency, we computed Spearman's rank correlation coefficient. This choice was based on the fact that the Shapiro-Wilk normality test indicated that our data depart substantially from a normal distribution. More precise findings are summarised in Table 1 below. As previously mentioned, the correlation between the data yielded by the two questionnaires administered for the purpose of this study was further examined by means of Spearman's rank correlation coefficient. The analysis of Table 1 shows that there is indeed a strong correlation (0.685) between a steady sense of personal agency and experiencing an intense burst of motivational energy, which can be further labelled as a case of a Directed Motivational Current. Furthermore, the results obtained during the examination are statistically significant (p < 0.01), indicating that an increase in the value of one variable is accompanied by an increase in the value of the other factor.
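The reported analysis can be reproduced in outline with a few lines of SciPy; the variable names below are placeholders for the two questionnaire score vectors.

```python
# A sketch of the reported analysis pipeline, assuming two numeric vectors:
# 'gse' (General Self-Efficacy totals) and 'dmc' (DMC Disposition scores).
from scipy.stats import shapiro, spearmanr

def correlate(gse, dmc):
    # Normality screening, as reported: Shapiro-Wilk on each variable.
    w_gse, p_gse = shapiro(gse)
    w_dmc, p_dmc = shapiro(dmc)
    # Non-normal data -> rank-based association via Spearman's rho.
    rho, p_value = spearmanr(gse, dmc)
    return {"shapiro_p": (p_gse, p_dmc), "rho": rho, "p": p_value}
```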
The second stage of our investigation was focused on a closer examination of the answers provided to the open questions included in the DMC Disposition Questionnaire. The data was scrutinised on several separate occasions, each time producing a significant amount of notes and remarks. In an attempt to address the second research question, each and every survey was also analysed to expand on the shape of self-efficacy beliefs associated with experiencing an intense motivational surge. Finally, the last stage of the study aimed at identifying commonalities and dissimilarities in the way a DMC was experienced. Following the consent received from the research participants; below we will present some quotations describing their DMC experience along with the corresponding interpretation.
It seems that even though our subjects witnessed their DMCs at various stages of their life and in completely different contexts, all participants highlight how their personal agency had evolved during the process. For Ola, who has been combining a full-time job with pursuing her degree for nearly two years, writing her bachelor's thesis was a starting point for her DMC experience. As she puts it: I was always struggling to find a proper motivation to study. Being very busy at work left me with a little time to engage myself in the academic life. However, when searching for the topic of my thesis, I finally found the subject which inspired me. From now on, I could not wait to read, write, and do all the things connected with it. More importantly, I surprised myself with how much I was able to do! As she admits, while experiencing the period of intensified motivation, the belief in her own coping abilities has also peaked. Suddenly, she found herself capable of finding the balance between her working and academic life. Being more organised and highly motivated to submit her thesis on time, she also managed to improve her proceedings at work. Experiencing a DMC has had a tremendous impact on her life and she stresses that she would like to witness such a motivational drive again. Contrary to the short periods of regular motivation, her experience lasted for nearly a year.
Compared with our previous subject, the DMC experience described by Diana was set in an entirely different context. A couple of years ago, she was presented with an opportunity to run a large-scale training scheme for her company, which involved living in Spain for several months. She recalls the experience in the following way: "At first, I was really excited. Being an eager traveller myself, this seemed to be a fantastic chance to experience something new. There was, however, a single condition - I had to improve my Spanish so that I could communicate effectively with my new colleagues at work. Considering I had only a couple of months, I became extremely stressed and nervous. As the deadline was approaching, I enrolled myself in an intensive language course and then it has all started. I wanted to be busy with the language all the time! I quickly became able to communicate with the fellow students and, in the end, I was sent to conduct the training!" When asked about her memories of that period, Diana recalls only positive things. However, prior to the launch of the DMC, that is, the moment when she began to attend the language course, one can easily notice fluctuations in the level of her self-efficacy. Although anxious at first, she improved her perception of her own coping abilities, which altered her attitude towards the forthcoming challenge. The gradual development of communicative skills provided our subject with affirmative feedback, which was of utmost importance for maintaining her commitment to the cause. Summarising her experience, Diana mentions how proud she was of her achievement, especially in the face of the obstacles mentioned above. For this reason, she would gladly welcome an opportunity to experience a similar period of intense motivation in the future.
Yet the story of Daniel provides a completely different illustration of the DMC phenomenon. Unlike our previous participants, for whom being presented with a challenge was the starting point of an intense motivational surge, he claims that his DMC was initiated while observing other highly motivated individuals. Below, we may find his own description of the DMC launch: In this case, it seems that Daniel's DMC was triggered by observing his friends and, thus, by finding a highly personalised goal of his own. Similarly to the concept of modelling described by Bandura, where observing other human beings can inspire the growth of self-efficacy, seeing how excited his colleagues were provided our subject with the incentive required to embrace the new challenge. Also worthy of note is the fact that this event completely altered his attitude towards life, so that he was no longer struggling with a lack of motivation. At this point, we may observe a steadily growing sense of personal agency, allowing our subject not only to become more effective at work but also to accomplish his new goal. One of the primary goals behind this paper was to broaden the current state of knowledge regarding the DMC construct and, by the same token, to investigate whether any correlation exists between the value of self-efficacy and the occurrence of a Directed Motivational Current. Based on the insights we have gained, it seems reasonable to assume that not only are the two variables mutually dependent, but also that experiencing an intense motivational surge can greatly facilitate the development of self-efficacy beliefs. That said, although the majority of our participants were generally convinced of their coping abilities prior to the emergence of the motivational drive, their sense of personal agency increased noticeably while they were experiencing a DMC. This observation is further supported by the results obtained through Spearman's correlation coefficient, indicating that a rise in the value of one variable is accompanied by a simultaneous growth of the other.
Also worth noting is the fact that, despite being well aware of the challenges involved in their pursuits, such as an upcoming deadline or the amount of money required to attain a goal, our subjects experienced positive emotionality towards their endeavours throughout the process. Furthermore, in the examples analysed, the presence of a facilitative structure is easily recognisable. Each obstacle our participants managed to overcome brought them closer to accomplishing the desired outcome and, at the same time, provided them with the feeling of extreme joy they had never experienced before. The very intensity of each of the occurrences described above appears to prove that, when in a DMC, an individual is capable of operating at levels significantly higher than in cases of standard motivation. Our analysis also suggests that it is possible to sustain this type of behaviour through personal goal setting and regular progress checks, which serve as a source of affirmative feedback. Well-defined targets are of utmost relevance here, as they provide cohesion for effort and shape the paths individuals choose to follow towards ultimate goal attainment. In all of the cases mentioned, our participants visualised a specific objective, allowing them to focus their energy on accomplishing the goal. Beyond any doubt, self-efficacy suits this framework perfectly, as both elements required to maintain intense motivation are also important factors in building a healthy perception of one's coping abilities. Additionally, self-efficacious individuals have been found to pursue more challenging and distant endeavours - a feature clearly demonstrated by the examples presented in the earlier part of the article.
Concluding remarks and suggestions for further research work
Needless to say, the construct of Directed Motivational Currents rests on a firm theoretical background. To the best of our knowledge, however, it has not previously been explored empirically. Furthermore, there appears to be a scarcity of research linking the DMC experience with other psychological notions, such as self-efficacy. Addressing this gap was therefore the main objective of our study, and we believe that the findings which emerged from the project will contribute to the construct's validity. Specifically, our examination not only provides compelling evidence for the existence of a relationship between self-efficacy and the occurrence of a high motivational surge but also highlights the theoretical underpinnings shared by both concepts.
Leaving aside the theoretical relevance of understanding motivational drives, the concept created by Dörnyei also has enormous practical potential. For this reason, it is recommended to scrutinise how a DMC operates at the group level. A better understanding of the phenomenon could contribute to developing effective motivational frameworks, aiding the creation of motivational interventions in various settings. The availability of such a structure would be of great help in the context of education, where students frequently lack the incentive required to realise their full potential. For the sake of further research, it would also be vital to conduct a large-scale long-term study so that possible correlations between Directed Motivational Currents and other psychological constructs can be investigated.
Effect of fuels, aromatics and preparation methods on seal swell
Abstract New alternative jet fuels have provided many advantages in the aviation industry, especially in economic and environmental terms. However, fuel–seal compatibility is one of the major issues that restricts the advancement of alternative fuels into the market. Thus, to help understand and solve the problem, this study examines the swelling effect of prepared and non-prepared O-rings in different fuels and aromatic species. Stress relaxation experiments were carried out to evaluate seal compatibility under compression, which mimics engine operating conditions. Seals were compressed and immersed in a variety of fuels and their blends for about 90 h while maintaining a constant temperature of 30°C and a constant compression of 25% of seal thickness. The two types of elastomers investigated were fluorosilicone and nitrile O-rings, which are predominantly used in the aviation industry. Meanwhile, three different fuels and aromatic species were utilised as the variables in the experiments. The fuels used were Jet-A1, SPK and SHJFCS, while the aromatic species added were propyl benzene, tetralin and p-xylene. The swelling effects were determined from the P/Po value. Results indicate that Jet-A1 has the highest swelling effect, followed by SHJFCS and SPK. It was observed that the higher the percentage of aromatics in the fuel, the higher the rate of swelling. Furthermore, prepared seals had a lower swelling rate than non-prepared seals. Meanwhile, the intensity of the swelling effect in the Jet-A1-SHJFCS blends was in the order of the 60/40, 85/15 and 50/50 blends. The work done in this study will aid the selection of suitable aromatic species in future fuels. The novelty of this research lies in the determination of the appropriate amount of aromatic content as well as the selection of the type of aromatic and its fuel mixture. Moreover, various proportions of fuel blends with aromatics are investigated. The primary aim of this study is to understand the behaviour of prepared and non-prepared seals, and their compatibility with alternative fuels.
Keywords: Seal compatibility; Alternative fuels; Aromatics; Stress relaxation

NOMENCLATURE
AJF alternative jet fuels
FT Fischer-Tropsch
SPK synthetic paraffinic kerosene
EIA Energy Information Administration
CTL coal-to-liquids
GTL gas-to-liquids
BTL biomass-to-liquids
GC-MS gas chromatography-mass spectrometry
nvPM non-volatile particle emission
H/C hydrogen-to-carbon ratio of a compound
P/Po ratio of the counterforce at a given time to the initial force
Ni nitrile O-ring
Si fluorosilicone O-ring
A-SPK-Ni acetone-prepared nitrile elastomer in SPK
A-SPK-Si acetone-prepared fluorosilicone elastomer in SPK
Tetra tetralin
A-Tetra acetone-prepared elastomer in tetralin
Pro propyl benzene
A-Pro acetone-prepared elastomer in propyl benzene
P p-xylene
A-P acetone-prepared elastomer in p-xylene
Tetra-Pro tetralin and propyl benzene
A-Tetra-Pro acetone-prepared elastomer in tetralin and propyl benzene
Tetra-P tetralin and p-xylene
A-Tetra-P acetone-prepared elastomer in tetralin and p-xylene
Pro-P propyl benzene and p-xylene
A-Pro-P acetone-prepared elastomer in propyl benzene and p-xylene
SHJFCS severely hydro-processed jet fuel from conventional source
50/50 50% Jet-A1 with 50% SHJFCS
60/40 60% Jet-A1 with 40% SHJFCS
85/15 85% Jet-A1 with 15% SHJFCS
TC test condition
INTRODUCTION
Fossil fuels are categorised as non-renewable sources of energy. The depletion of non-renewable sources creates the need to shift to renewable energy. Advancements in the use of sustainable fuels are driven largely by the rising cost of crude oil and market fluctuations. Hileman and Stratton (1) outlined the production cost and price of jet fuel from data obtained from the EIA, demonstrating the instability of the price of crude oil. This instability later leads to a decline in oil production, further aggravating the rise in demand. An even stronger driver for sustainable fuels is growing environmental concern. Considerations over fuels and their emissions have led to the development of renewable and so-called alternative fuels. New renewable fuels are expected to meet fuel demands. Types of Alternative Jet Fuels (AJFs) include synthetic and bio jet fuels. Bio jet fuels can play a role in protecting the climate by reducing CO and CO2 emissions through the use of biowastes as their feedstock. Synthetic fuels, on the other hand, are chemically produced to be environmentally friendly, thereby reducing the carbon footprint. In addition to the emission reductions, they offer excellent thermal stability that reduces energy loss. Alternative jet fuels can be made from renewable or non-renewable resources. These fuels are made via the Fischer-Tropsch (FT) process or a biological process (2). Synthetic Paraffinic Kerosene (SPK) is derived from the FT process. FT is a process in which synthesis gas, a mixture of carbon monoxide and hydrogen, is converted into higher-molecular-weight hydrocarbons (3). This process can be used with any material containing carbon. Previously, this process was more expensive than refining petroleum. However, with new technologies, yearly cost reductions have made it more feasible. It has become increasingly dominant since the rise in demand for synthetic fuel. Some improvements have also been made to this process to increase production at affordable cost.
According to Gregory (3) and Muzzell et al. (4), the Fischer-Tropsch synthesis originated in Germany in the 1920s and 1930s and was used extensively by the Germans during World War II. The first material to undergo the FT process was coal, yielding Coal-To-Liquids (CTL) products. Other materials used in this process are natural gas and biomass, which produce Gas-To-Liquids (GTL) and Biomass-To-Liquids (BTL), respectively. The raw products from FT synthesis are then further processed into suitable jet fuel by breaking the long-chain molecules into smaller molecules. The final products obtained are free from aromatics. The synthetic fuel derived using the FT process is a good alternative fuel. Besides improved thermal stability, it reduces carbon emissions, because a pure synthetic jet fuel has low or almost no sulphur or aromatic content. Ewing (2) also emphasised that SPK burns very cleanly. In addition, fuels derived from the FT process overcome the problems of cost and supply faced by petroleum-derived fuels.
Despite these advantages, alternative fuels also have some drawbacks, such as poor elastomer compatibility and lower density compared with conventional jet fuel. Liu and Wilson (5) also agreed that the compatibility of alternative fuels with the seals in the turbine engine needs to be tested prior to commercialisation. The absence of aromatics causes the fuel density to fall below the minimum requirements and makes the seals shrink, as they degrade rapidly once in contact with the new fuel; this can lead to fuel leakage in the engine. Seal shrinking can cause seal failures, thus damaging the system. Graham et al. (6) and other studies suggested that the minimum aromatic content in a fuel is about 8% on average. This percentage was obtained from rigorous research, but it may go up or down in the future, depending on the types of fuels and aromatics used. It is considered a safe minimum level of aromatics (1). Besides that, the aromatic content in kerosene ranges from 8% to 22%. In common jet engine fuel, the high aromatic content encourages the seals to swell, thus providing more protection from leakage. In the case of renewable fuel, which contains no aromatics, seal components tend to be extracted into the fuel. These problems can be overcome by adding aromatics to the fuel or by blending it with conventional jet fuels.
The concentration of aromatics must be minimised to reduce carbon emissions (7). Another main purpose of alternative jet fuel production is to obtain low-sulphur fuels. In addition, it is important for the fuel to have good lubricity for the engine to run smoothly. Other than adding aromatics, synthetic jet fuel can also be blended with conventional jet fuel to attain the required properties of aviation fuel. The maximum aromatic content is regulated by environmental concerns, since aromatics are large contributors to particulate matter emissions. On the other hand, the minimum content was set to improve lubricity and prevent leakage. This is because some chemical components in conventional jet fuels provide a better swelling effect than that provided by the synthetic fuel-aromatic mixture (8). In this case, new jet fuels can be improved by eliminating or reducing the undesired properties of conventional jet fuels.
Three types of seals frequently used in the aviation industry are fluorocarbon, fluorosilicone and nitrile seals. These are the most common seals found in an aircraft engine. Elastomers such as O-ring seals are used mainly in the engine parts and the hydraulic system to prevent leakage. Since the seals are made purely of rubber, they need to be changed after a period of service, because the elastomer starts degrading under different engine conditions. Studies by Ewing (2) showed that fluorocarbon exhibited no degradation or changes when immersed and compressed in a fuel. Therefore, in the current study we used only fluorosilicone and nitrile seals.
According to Liu and Wilson (5), an O-ring is an elastomer which deforms when it is squashed between two parts, thus providing a sealing function. Forces are applied when seals are squashed. After a while, the seals are unable to sustain the impact force applied to them. In addition, the long operating hours and fluctuating conditions of the engine place further stress on the seals. Two effects that commonly happen to the O-ring are swelling and shrinking. In simple terms, swelling can be defined as an increase of seal volume, while shrinking is a decrease in volume. Usually, the swelling of the elastomer causes the inner diameter and the thickness of the O-ring to increase. Swelling occurs when an elastomer absorbs some chemical components of the fuel, thus softening and swelling. DeWitt et al. (7) explained that the fuel component separation increases as alkanes < alkyl benzenes < naphthalene. Naphthalene is a good hydrogen donor compared with alkyl benzenes and alkanes. This is because alkanes are nonpolar in nature, making them inefficient hydrogen donors, while the polarity of alkyl benzenes varies, making them weak hydrogen donors compared with naphthalene.
According to Thomas et al. (8), elastomer swelling is an indication of the seal's resistance towards the fuel. It can also be described as a chemical attack of the fuel on the elastomer. Meanwhile, Qamar et al. (9) explained swelling as a diffusion process in which the fuel hydrocarbons are absorbed by the seal. The acceptable seal swell of the elastomer in the automotive industry is about 12%, while in the aviation industry it ranges from 18% to 30% (3,6). To further explain the process of swelling, Graham et al. (6) described it thermodynamically as the breaking of the fuel and polymer intermolecular bonds. As a result, polymer-fuel bonds are created. The energy required to break the fuel-fuel and polymer-polymer intermolecular bonds is then replaced by the energy released when forming the polymer-fuel bonds. This results in an equilibrium condition, which also depends on the properties of the fuel and polymer. In addition, Treloar (10) explained that an equilibrium degree of swelling occurs when the cross-linked rubber encounters a low-molecular-weight liquid. The low-molecular-weight liquid molecules can easily diffuse into the polymer and increase the entropy of the elastomer.
In contrast, shrinking occurs when some molecules or components of the seals are extracted into the fuel, causing the O-ring to degrade. Shrinking makes the seal thinner and the inner diameter smaller. As an example, the absence of plasticisers in the elastomer can cause the O-ring to harden and shrink. Baltrus and Link (11) explained that the shrinking process involves the release of fuel components absorbed by the seals. The components absorbed by the seals can be determined using Gas Chromatography-Mass Spectrometry (GC-MS), which identifies the different substances in the sample using the principle of ionisation. Another method to identify the substances in the seals, suggested by Baltrus and Link (11), is switch-loading tests.
Aromatic species selection is one of the crucial parts, as it not only affects elastomer swell but also greatly influences engine emissions (16,18,21). Many studies (19-27) concluded that polyaromatic compounds are precursors of Non-Volatile Particle Emission (nvPM). As investigation of aromatic compounds of various types is essential, this study has focused on three aromatics, namely tetralin, propyl benzene and p-xylene. Tetralin is a polycyclic aromatic, while propyl benzene and p-xylene are monocyclic compounds. Polycyclic aromatics have a lower H/C ratio than monocyclic aromatic compounds and are a major contributor to emissions (21,24).
The current work contributes to existing research by determining the appropriate amount of aromatic content in the fuel through the examination of various aromatic percentages, namely 4%, 8%, 12.5% and 25%. Different types of aromatics and fuel mixtures are also selected. Three aromatics, namely tetralin, propyl benzene and p-xylene, are investigated. In addition to the above blends, various proportions of fuel blends with Jet A-1 of 0/100, 15/85, 40/60 and 50/50, and their combinations with aromatics, are also investigated. The primary aim of the study is to understand the behaviour of acetone-prepared and non-prepared seals, and their compatibility with alternative fuels. This study was carried out by creating engine-like operating conditions for seals under stress.
Materials
In the current study, only fluorosilicone and nitrile O-rings were used in each test, since fluorocarbon O-rings do not show significant swelling and are relatively inert compared with the other two (4). Besides, nitrile and fluorosilicone O-rings are widely used in the market for sealing purposes, especially in the aviation industry (2). Graham et al. (12) also showed that fluorocarbon elastomer did not exhibit much volume swell compared with nitrile and fluorosilicone. In addition, Ewing (2) verified that fluorocarbon seals showed almost no swelling, so it can be said that aromatics do not affect fluorocarbon seals. Thus, only nitrile and fluorosilicone seals were studied, as described in Table 1 and Fig. 1. The seals used were brand new from sealed packages. Fluorosilicone O-rings were obtained from the manufacturer Parker Hannifin Corporation, while nitrile seals were supplied by Trelleborg. Seals of different colours, as in Fig. 1, were chosen for easy identification.
Fuels and aromatics
Some of the fuels commonly used in such tests are jet propulsion fuel, Shell Sol-T and SPK. The renewable fuels have no or a low amount of aromatic content and thus act as reference subjects for each study. A wide variety of aromatics was then blended into the fuel at different volume percentages to study the effect of the aromatics on volume swell. Graham et al. (12) also blended aromatics together with synthetic fuels. Four different tests were conducted in this study to gain a better understanding of seal swell. Besides, these tests helped to determine the concentration of aromatics needed to obtain the desired swelling condition. Three types of aromatics and three different fuels were used together. The conventional jet fuel, Jet-A1, was used as a reference for this study. The other two fuels used were SPK and Severely Hydro-processed Jet Fuel from Conventional Source (SHJFCS). The SPK, made from animal fats, was obtained from Shell, while the Jet-A1 was from British Petroleum (BP) plc. The aromatics involved were tetralin, propyl benzene and p-xylene (Table 2). Tetralin is basically naphthalene hydrogenated at high pressure in the presence of a catalyst.
Figure 2. Swelling of O-rings in different pure aromatics by Anderson (13).
These aromatics were chosen based on a study by Anderson (13). As seen in Fig. 2, tetralin and propyl benzene gave the highest F/Fo among the five aromatics tested; F/Fo can be interpreted as the swelling rate. Meanwhile, p-xylene showed an average swelling rate, but it attained a stable swelling rate after a few hours.
Test conditions and fuel blends
The experiments were conducted over four test conditions which differed in seal preparation method and fuel blending. For all four test conditions, the swelling effect of both fluorosilicone and nitrile O-rings was investigated. With regard to seal preparation, non-prepared O-rings were used for test conditions 1, 2 and 4, while prepared O-rings were used for test condition 3. Jet A-1 was used as the reference fuel for all test conditions, while SPK and its blends with the above-mentioned three aromatics were investigated under test conditions 1-3 (Table 3). For convenience, in test conditions 1-3, part A of the test condition means that no aromatics are blended in the fuel, part B means that fuels were blended with multi-species aromatic combinations, and part C refers to single-species aromatics, as presented in Table 3. On the other hand, SHJFCS blended with Jet A-1 fuel was used for test condition 4, as in Table 4. Test Condition 1 (TC-1) involved the use of SPK with blends of aromatics at 8% of the total fuel volume. This 8% aromatic volume included the above-mentioned three aromatics in various proportions and combinations, as presented in Table 3. Under TC-1, we evaluated the swelling effect of O-rings with the minimum aromatic content required for swelling promotion (6-8,12,13). Under Test Condition 2 (TC-2), the total aromatic volume was increased from the minimum aromatic content of 8% to 25%: the 4% individual aromatic species were increased to 12.5%, while the 8% individual aromatic species were increased to a 25% mixture (Table 3). TC-2 was carried out using the same O-rings from TC-1, and no new O-rings were utilised in this part, in order to observe their reaction to increasing aromatic content. Meanwhile, Test Condition 3 (TC-3) compared the swelling effect of O-rings with different preparation methods. In this test, the O-rings were prepared with acetone and tested with the same fuel mixtures as TC-1 (Table 3). The results were then compared with TC-1 to determine the effect of the preparation method. On the other hand, Test Condition 4 (TC-4) evaluated the swelling effect of the elastomers in SHJFCS, Jet-A1, and blends of SHJFCS and Jet-A1 (Table 4). In this part, no aromatics were involved. TC-4 was conducted to find the optimum blend ratio of SHJFCS and Jet-A1; the test matrix is summarised in compact form below.
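For orientation, the blend matrix described above can be condensed as follows; this sketch is only a reading aid based on the blends named in the text and in the nomenclature, and the authoritative lists remain those in Tables 3 and 4.

```python
# Compact summary of the fuel blends described above (reading aid only;
# refer to Tables 3 and 4 for the authoritative test matrix).
test_conditions = {
    "TC-1": {  # non-prepared seals, 8% total aromatics in SPK
        "multi_species_4pct_each": ["Tetra-Pro", "Tetra-P", "Pro-P"],
        "single_species_8pct": ["Tetra", "Pro", "P"],
    },
    "TC-2": {  # same seals as TC-1, total aromatic content raised to 25%
        "multi_species_12p5pct_each": ["Tetra-Pro", "Tetra-P", "Pro-P"],
        "single_species_25pct": ["Tetra", "Pro", "P"],
    },
    "TC-3": {  # acetone-prepared seals, same fuel mixtures as TC-1
        "preparation": "acetone",
        "fuel_blends": "as in TC-1",
    },
    "TC-4": {  # Jet-A1 / SHJFCS blends, no added aromatics
        "jetA1_to_SHJFCS": ["100/0", "85/15", "60/40", "50/50", "0/100"],
    },
}
print(test_conditions["TC-4"]["jetA1_to_SHJFCS"])
```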
O-ring preparation
In TC-3, the swelling effect of the O-rings was compared on the basis of their preparation method. The first, second and fourth test conditions were conducted with non-prepared elastomers using new O-rings. Only the elastomers for the third test condition were prepared in the manner suggested by Graham et al. (6). The elastomers were soaked in acetone for a whole day. Afterwards, they were rinsed with acetone three times. Before oven drying at a constant temperature of 60°C, the seals were air dried for one day. The function of this preparation method is to de-plasticise the elastomer; in other words, the plasticiser is removed from its surfaces. Since the elastomers in an aircraft in service are not new, this preparation method gives a better representation of O-rings which are in service rather than of new ones. It was also explained by Graham et al. (6) that the extraction of plasticiser is crucial, since it tends to depress the swelling effect of the O-ring material. The swelling effect for different preparation methods can be seen in Fig. 3.
Equipment setup
The tests were conducted using the Relaxation Rig System EB02 made by Elastocon AB (Fig. 4). Four rigs can be placed and run simultaneously in the cell oven, and the temperature of each rig can be set independently. The stress relaxation system was used to study seal compatibility under different conditions of either tension or compression. Different standards must be followed depending on whether tension or compression is used. The load cells and rig temperature sensors were calibrated to function precisely. These parts were then connected to the data connection box to enable recording of the rig temperature and the force exerted by the O-ring. The dial gauge was also checked for any faults. The stress relaxation test was chosen as the sole procedure in this project because it uses an ageing effect that helps to shorten the operating hours while providing much information (14). The continuous measurement system also helps to record the data at a desired time interval of seconds, minutes or hours. In this project, the test procedure followed the standard ISO 3384-1. Since this study focuses mainly on compression, stress relaxation in tension using standard ISO 6914 was not considered. All the results were taken continuously every hour up to 90 h. The analysis was done by determining the initial force applied to the seal at zero hours. After 1 h, the new force, better known as the counterforce, was recorded. To determine the swelling response of the seal, the counterforce at a given time was divided by the initial force, giving P/Po, as in equation (1) (17). The ratio of the counterforce to the initial force determines the rate of swelling of the elastomer at a given time. Swelling of the O-ring occurs if P/Po exceeds the value of 1, while shrinking happens if it is less than 1. The shrinking of the seals is caused by the extraction of seal components into the fuel.
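To illustrate how the swelling response is quantified from the logged forces, a brief sketch is given below; the hourly force values are placeholders and do not represent measured data.

```python
# Sketch of the P/Po evaluation described above.
# The force readings are placeholders; in the real test the rig logs the
# counterforce every hour for about 90 h at 30 degC and 25% compression.
import numpy as np

hours = np.arange(0, 91)                        # 0 .. 90 h
forces = 10.0 + 0.05 * np.sin(hours / 30.0)     # hypothetical counterforce readings [N]

p_o = forces[0]             # initial force recorded at zero hours
p_ratio = forces / p_o      # P/Po, the swelling response at each hour

swelling = p_ratio > 1.0    # swelling when the ratio exceeds 1
shrinking = p_ratio < 1.0   # shrinking (extraction) when it falls below 1

print(f"Maximum P/Po = {p_ratio.max():.4f} at {hours[p_ratio.argmax()]:d} h")
print(f"Hours showing swelling: {int(swelling.sum())}, shrinking: {int(shrinking.sum())}")
```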
RESULTS AND DISCUSSION
To determine the swelling effect, the P/Po values were calculated from the data. The higher the P/Po value, the higher the swelling effect of the elastomer. According to Graham et al. (12), the smaller the size or molar volume of the aromatic, the higher the rate of seal swelling. It can be inferred that the greatest seal swelling occurs in fuels and aromatics with small molar volume and strong hydrogen bonding potential and polarity (7). Table 5 lists the properties of the aromatics used in this study. The molar volume of each aromatic can be calculated using Equation (2), which depends on its base compound (15), where
V = molar volume of the compound
K = 2.696
Vb = molar volume of the base compound
N = number of outer electrons of the compound
n = number of outer electrons of the base compound
As discussed in the methodology, the results will be presented as follows: we will discuss the results of test conditions 1-3 in a stage-wise manner of part A, part B and part C, followed by test condition 4.
Results of SPK, SHJFCS and Jet-A1 with no added aromatics
The swelling effects of nitrile and fluorosilicone O-rings in Jet-A1, SPK and SHJFCS with no added aromatics were compared (Fig. 5). Results show that SHJFCS gave almost the same swelling effect as Jet-A1 for nitrile O-rings. The higher concentration of aromatics in Jet-A1 caused the swell rate in SHJFCS to be slightly lower than that in Jet-A1. Since there are no aromatics in SPK, neither elastomer immersed in SPK exhibited swelling; the nitrile O-ring exhibited swelling only in Jet-A1 and SHJFCS. The nitrile O-ring in Jet-A1 fuel started to swell in the fifth hour, at almost the same time as that in SHJFCS fuel. The 5 h delay in swelling was due to seal components being extracted into the fuel before swelling took place. This is different from O-rings immersed in SPK where, due to the absence of aromatics, extraction was followed by further shrinking. In Fig. 5 it is clear that the fluorosilicone seals shrank dramatically at the beginning of the test and continued to show extraction until the end point. The shrinkage of seals in SHJFCS was not severe, unlike that of fluorosilicone seals in Jet-A1 and SPK. It can be said that SHJFCS tended to maintain the elastomer condition by reducing the shrinkage process. The results show that the maximum P/Po value for nitrile in SPK is 1. This can be compared with the results in Fig. 5, showing only extraction of the nitrile seal in SPK during the whole test. It can be said that Jet-A1 and SHJFCS have their own aromatics that encourage swelling of the O-ring, unlike SPK.
SPK with multi-species aromatics of 4% each (TC-1 part B)
Here, SPK fuel was blended with two different aromatic species at 4% each to obtain an overall aromatic content of 8% of the fuel. This is the minimum prescribed aromatic content required in a fuel to promote seal swell (6-8,12,13). As seen in Fig. 6, the P/Po value for each blend does not exceed 1. However, swelling did occur for all nitrile O-rings, because a slight bump or increase in P/Po values can be seen in Fig. 6. Since the extraction of the elastomer occurred to a greater extent than the swell, the swelling rate did not exceed the value of 1. For nitrile O-rings, the biggest swelling effect occurred in the fuel mixture of SPK with tetralin and propyl benzene, while the smallest occurred in the fuel mixture of SPK with p-xylene and propyl benzene. The nitrile seal in tetralin and propyl benzene has a higher swelling rate than the other two (Fig. 6). Rather than molar volume, seal swelling in the tetralin-propyl benzene blend was highly affected by the hydrogen bonding potential of those two aromatics. Tetralin shares characteristics with the naphthalene group, and propyl benzene with the benzene group. According to DeWitt et al. (7), the polarity of alkyl naphthalenes shows stronger variation than that of benzenes, making them better hydrogen donors. Therefore, the fuel-fuel bond will be broken more easily, thus encouraging swelling to occur. Figure 7 shows that no swelling was observed for fluorosilicone O-rings in any of the above three fuel mixtures. Unlike nitrile, the biggest swelling effect of the fluorosilicone O-ring occurred in the SPK mixture of propyl benzene and p-xylene. This is because the role of polarity, molar volume and hydrogen bonding potential in swelling is only dominant in nitrile O-rings. Since the rate of swelling depends on various factors, only the net effect on volume swell must be considered. It can also be seen that the maximum values of P/Po for all the 4% fuel mixtures were the same and occurred at the starting point.
SPK with single-species aromatics of 8% (TC-1 part C)
Here, fuel was blended with a single aromatic species at 8% aromatic content. Swelling was observed for all the nitrile O-rings in the three fuel mixtures. However, as seen in Fig. 8, the swell is small compared with the shrinking of the nitrile seals. Moreover, only nitrile O-rings soaked in propyl benzene started with swelling, while the others started with extraction. On the other hand, no swelling occurred for fluorosilicone O-rings in any of the three mixtures of tetralin, p-xylene and propyl benzene with SPK (Fig. 9). Meanwhile, the maximum P/Po obtained by the nitrile O-ring was highest in the propyl benzene mixture, followed by tetralin and p-xylene. It can be said that, for 8% aromatic blends, both the hydrogen bonding potential and the molar volume play a role in affecting the swell rate, because the lowest swelling effect was for the seal in p-xylene, which has the lowest molar volume but the weakest hydrogen bonding potential. To achieve a good swell, the aromatics should have a small molar volume and a high hydrogen bonding potential. The tetralin and p-xylene mixtures only achieved the maximum P/Po value of 1 at the start of the test (Figs. 8 and 9) because the rate of extraction was greater than the rate of swelling.
TC-2: SPK with increased aromatic content of 25% volume in fuel
There was little improvement in swelling in TC-1 with the minimum aromatic content of 8%, encouraging us to study the behaviour of swelling at elevated aromatic contents. So, in TC-2, the 4% multi-species aromatics of TC-1 were increased to 12.5% and the 8% single-species aromatics of TC-1 to 25% in SPK, thus attaining an overall aromatic content of 25% for all the fuel blends of TC-2.
SPK with multi-species aromatics of 12.5% each (TC-2 part B)
Since the minimum aromatic content of 8% had not produced a large increase in swelling, the aromatic contents were increased with the purpose of achieving a significant volume swell of the seals. The volume percentage of each aromatic species was increased from 4% to 12.5%, bringing the total volume percentage to 25%. The same elastomers were used to give comparable results. The results from the different fuels and aromatics were then compared (Figs. 10 and 11). As seen in Fig. 10, only nitrile O-rings swelled, while there was no swelling for fluorosilicone seals. A better illustration of the swelling effect of the nitrile O-ring is shown in Fig. 11. The nitrile O-rings in the three different fuel mixtures started with extraction but then swelled greatly before extraction took place once again. The greatest swelling effect occurred for the nitrile O-ring in the propyl benzene and p-xylene mixture with SPK. On the other hand, the rate of seal swelling in tetralin and propyl benzene was the lowest because this mixture has the biggest molar volume, with a size of 275.4 ml/mol. This makes it harder for fuel components to diffuse into the O-rings, hence the observed lower swelling effect. Fluorosilicone O-rings followed the same trend, but no significant swelling occurred (Fig. 11). The highest maximum value of P/Po was also obtained by the nitrile O-ring in the mixture of propyl benzene and p-xylene. The maximum P/Po value for fluorosilicone seals was only 1, while the nitrile O-rings achieved maximum P/Po values greater than 1.
SPK with single-species aromatics of 25% vol (TC-2 part C)
Here, the fuel SPK was blended with single-species aromatics at 25% by volume. Figures 12 and 13 show that swelling occurred for all nitrile seals, while the fluorosilicone O-ring in the propyl benzene mixture with SPK exhibited a small swelling. The nitrile seals in Fig. 12 went through a small extraction at the beginning, followed by great swelling. Figure 12 shows that the seal in the tetralin blend had the highest rate of swelling. Since tetralin is derived from the naphthalene group, it is expected to produce a higher seal swell than the benzene-group aromatics propyl benzene and p-xylene. This is because the naphthalene group has a higher potential to act as a hydrogen donor, thus encouraging swelling to occur. It has been observed by other researchers as well that hydrogen bonding of fuel components can play a major role in the seal swell phenomenon. On the other hand, the fluorosilicone seals in Fig. 13 showed only a marginal response, with a maximum P/Po value of 1.003539. The highest maximum P/Po value was obtained by the nitrile O-ring in the tetralin mixture.
Comparison of results for TC-1 and TC-2
The TC-1 and TC-2 conditions were compared: part B of TC-1 (4% each) with part B of TC-2 (12.5% each), and part C of TC-1 (8% vol) with part C of TC-2 (25% vol). As shown in Figs. 14 and 16, the extraction of the fluorosilicone seals decreases as the percentage of aromatics increases from 4% to 12.5% and from 8% to 25%. The results plotted in Figs. 15 and 17, compared with Figs. 14 and 16, show that the increase in aromatic percentage causes an increase in swelling rate and a decrease in extraction rate. In addition, the mixture of different types of aromatics in the fuels also affects the results.
TC-3: effect of seal preparation and its comparison with TC-1
TC-3 assessed the seal preparation methods and their effects on seal swelling. Unlike the other tests, this test used prepared O-rings. The preparation involved washing and rinsing the O-rings with acetone, as discussed in section 2.4. Four comparisons were made between the prepared (TC-3) and non-prepared (TC-1) seals immersed in SPK, Jet-A1, 4% + 4% multi-species aromatics with SPK, and 8% single-species aromatics with SPK. As seen in Fig. 18, the non-prepared seals (e.g. SPK-Ni) have a higher rate of swelling than the acetone-prepared seals (e.g. A-SPK-Ni). This is true regardless of fuels and aromatics and is particularly obvious for nitrile seals (Figs. 19 and 21). This happens because the prepared seals lose their plasticisers once they are washed and rinsed with acetone. Plasticiser is the main element of elastomers that makes them flexible. Once the plasticisers are gone, the seals harden and later shrink. This is also in agreement with the test results obtained by Ewing (2), where the seals lost 10-15% of their original volume when they lost their plasticisers and shrank as a result. Figure 19 shows a comparison of nitrile elastomers with and without preparation. The greatest relaxation was noted for the aromatic mixture of 4% propyl benzene and 4% p-xylene in SPK fuel for a prepared nitrile elastomer. In contrast, the least relaxation was observed in the non-prepared seal in 4% propyl benzene and 4% p-xylene. This clearly shows the effect of seal preparation and the rate of strain inherent in the method of preparation. On the other hand, as shown in Fig. 22, the fluorosilicone seals showed little difference in swelling rate between the prepared and non-prepared seals. However, the non-prepared seals had a lower extraction value than the prepared seals. This means that the plasticisers in the non-prepared O-rings may still potentially facilitate swelling (Fig. 20). Only extraction occurred for all of the fluorosilicone O-rings, whereas swelling was not significant.
To identify the magnitude of swelling and the effect of aromatics on prepared seals, the aromatic content was further increased to 8%. Along with this increment, only a single aromatic species was blended into the fuel at 8%, to identify the effect of individual aromatic species on prepared and non-prepared seals (Figs. 21 and 22). For nitrile elastomers, as shown in Fig. 21, the greatest relaxation was observed for the p-xylene aromatic species with an acetone-prepared elastomer, while the least relaxation was observed in the non-prepared seal with propyl benzene. On comparing the results in Figs. 19 and 20 for nitrile seals, it is evident that with an increase in aromatic content there is greater relaxation. From the results for the fluorosilicone elastomer, as shown in Fig. 22, the plasticisers in the non-prepared O-rings may still facilitate swelling with the increase in aromatic content to 8%, where a behaviour similar to that at 4% aromatic content was exhibited. Only extraction occurred for all the fluorosilicone O-rings, while swelling was not significant. The graphs also clearly illustrate the differential magnitude of swelling for prepared and non-prepared seals. This work has shown the importance of proper preparation before any detailed analysis and testing is done on different types of O-rings and fuel components. If appropriate care is not taken, there is a risk of generating results that differ in scale.
TC-4: blends of Jet-A1 and SHJFCS
The fourth test condition (TC-4) compared the swelling effect of elastomers in different compositions of Jet-A1 and SHJFCS. The aim of this test condition was to obtain an optimal blend ratio of conventional Jet A-1 fuel with FT-derived fuels and to assess their compatibility effects. Five different compositions of the mixture were tested. As seen in Fig. 23, only nitrile O-rings swelled. This is further confirmed in Fig. 24, where all of them started with extraction, followed by swelling and then extraction once again. The chemical components in SHJFCS allowed the elastomers to swell, and the same applies to Jet-A1. No swelling occurred in fluorosilicone O-rings (Fig. 25). Meanwhile, the highest maximum value of P/Po was obtained by the nitrile O-ring in the 60/40 mixture, while the lowest was obtained by all the fluorosilicone O-rings, with a value of 1. It can be concluded that the optimum mixture of Jet-A1 and SHJFCS is a 60/40 ratio by total volume percentage. The blending of SHJFCS and Jet-A1 allows the aromatic hydrocarbons from both fuels to interact. The maximum P/Po values for the 60/40 and 85/15 blends were higher than those for SHJFCS and Jet-A1 alone.
However, the maximum P/Po value for the 50/50 blend was lower than that of both SHJFCS and Jet-A1, and the maximum P/Po value for the 85/15 blend was lower than that of the 60/40 blend. It can be deduced that the optimum blend of SHJFCS and Jet-A1 lies between 50% and 85% Jet-A1.
CONCLUSIONS
The use of synthetic fuel in the airline industry has started to become a trend. It started with trials but is now expanding to a larger scale, because of its lower emissions compared with conventional jet fuel. Furthermore, the fuel can be produced from readily available sources such as plant and animal fats. However, its compatibility issues with seals restrict its use in jet engines. Thus, the current research helps to understand the swelling effect of different elastomers in different fuels. It was observed in all tests that only nitrile seals exhibited a good response to swelling, while fluorosilicone O-rings were more prone to shrinking. Results show that the swelling effect of Jet-A1 was the highest, followed by SHJFCS and SPK, respectively. It was confirmed from TC-1 and TC-2 that a higher volume of aromatic content in a fuel results in a higher rate of seal swelling.
However, the swelling rate does not depend solely on the aromatic concentration but also on the type of aromatic within the fuel. Meanwhile, the optimum blending percentage of Jet-A1 and SHJFCS from TC-4 was 60% Jet-A1 and 40% SHJFCS. The results show a direct relationship between the rate of seal swelling, the fuel's total aromatic concentration and the type of fuel. Overall, this research has allowed some inferences regarding seal swell. Since more synthetic fuels are being created every day, more tests need to be conducted to determine their compatibility with seals in engines.
It can be observed that there were no colour changes in any of the elastomers at any point in the tests. Moreover, no cracking or peeling of the elastomers was observed, and the seals were not physically damaged. From all the results collected, only nitrile O-rings exhibited a good response to swelling. This is in line with the findings of Muzzell et al. (4), who also indicated that nitrile elastomers are very sensitive to changes in fuels and aromatics, unlike fluorosilicone elastomers. Thus, extensive future research is needed to achieve the target of producing renewable fuels that not only save the environment but also work well in engines.
The current work contributes to existing research by determining the right amount of aromatic content in the fuel through the examination of various aromatic percentages, namely 4%, 8%, 12.5% and 25%. Different types of aromatic species and their fuel mixtures were also selected. In the study, three aromatics, namely tetralin, propyl benzene and p-xylene, were investigated. In addition, various proportions of fuel blends with Jet A-1 of 0/100, 15/85, 40/60 and 50/50, and their combinations with aromatics, were also investigated. The primary aim of the study was to understand the behaviour of acetone-prepared and non-prepared seals, and their compatibility with alternative fuels.
Single-image phase retrieval using an edge illumination X-ray phase-contrast imaging setup
A method enabling the retrieval of thickness or projected electron density of a sample from a single input image is derived theoretically and successfully demonstrated on experimental data.
A method is proposed which enables the retrieval of the thickness or of the projected electron density of a sample from a single input image acquired with an edge illumination phase-contrast imaging setup. The method assumes the case of a quasi-homogeneous sample, i.e. a sample with a constant ratio between the real and imaginary parts of its complex refractive index. Compared with current methods based on combining two edge illumination images acquired in different configurations of the setup, this new approach presents advantages in terms of simplicity of acquisition procedure and shorter data collection time, which are very important especially for applications such as computed tomography and dynamical imaging. Furthermore, the fact that phase information is directly extracted, instead of its derivative, can enable a simpler image interpretation and be beneficial for subsequent processing such as segmentation. The method is first theoretically derived and its conditions of applicability defined. Quantitative accuracy in the case of homogeneous objects as well as enhanced image quality for the imaging of complex biological samples are demonstrated through experiments at two synchrotron radiation facilities. The large range of applicability, the robustness against noise and the need for only one input image suggest a high potential for investigations in various research subjects.
Introduction
X-ray imaging is an essential tool for sample inspection in several fields, including industrial testing, materials science, small-animal imaging and clinical diagnostics. In this context, X-ray phase-contrast imaging (XPCi) has demonstrated an ability to provide improved contrast for materials made of low atomic number elements, such as biological soft tissues, where attenuation differences can be limited (Wilkins et al., 2014; Snigirev et al., 1995; Davis et al., 1995; Olivo et al., 2001; Pfeiffer et al., 2006). Among the various XPCi techniques developed so far, edge illumination (EI) has shown significant promise both in synchrotron and laboratory implementations (Olivo et al., 2001; Olivo & Speller, 2007; Munro et al., 2012; Diemoz, Endrizzi et al., 2013; Diemoz, Hagen et al., 2013; Munro et al., 2013; Hagen et al., 2014), due to the simplicity and flexibility of the experimental setup and its practically negligible requirements in terms of spatial and temporal coherence (Olivo & Speller, 2007; Munro et al., 2012; Diemoz, Hagen et al., 2013). However, these practical advantages do not come at the expense of the phase sensitivity provided by EI, which was shown to be comparable with or even better than other XPCi techniques (Diemoz, Hagen et al., 2013).
Like other XPCi approaches, such as analyzer-based imaging (ABI) (Davis et al., 1995; Chapman et al., 1997) and grating interferometry (GI) (Pfeiffer et al., 2006), the images acquired with an EI setup contain a mixture of attenuation and refraction (or differential phase) contrast, the latter being proportional to the spatial derivative of the X-ray phase shift. Methods that enable the separation and evaluation of these two quantities have been developed (Munro et al., 2012; Diemoz, Endrizzi et al., 2013; Diemoz, Hagen et al., 2013; Munro et al., 2013) which, however, require two images acquired in different configurations of the setup as input for the retrieval algorithm. While retrieval methods making use of a single experimental image have been proposed for other XPCi techniques (Paganin et al., 2002; Burvall et al., 2011; Nesterets et al., 2004; Pavlov et al., 2004; Briedis et al., 2005; Momose, 2002; Momose et al., 2009), based on a variety of different assumptions and implementations, a single-image retrieval method for EI has not been developed yet. Such a method would be preferable in order to reduce the duration of the acquisition, a key requirement in many applications such as computed tomography (CT). Moreover, the existing implementations of EI do not provide the phase map directly, but rather its first derivative, which often has a significant intensity only along the boundaries of the sample details. Retrieval of the phase map would be advantageous in cases where subsequent processing (e.g. segmentation) is required, or where the object structure is complex (as is typical for many biological samples), in order to enable an easier image interpretation. While in principle the phase map could be obtained through one-dimensional integration of the refraction image (Hasnah et al., 2005), this procedure is known to produce strong streak artefacts along the integration direction, due to propagation of the noise in the refraction image. This is a well known problem of differential XPCi techniques, and various algorithms have been developed to try to reduce this effect, both in ABI and GI XPCi (Wernick et al., 2006; Thüring et al., 2011).
In this article, we propose a method that enables direct retrieval of the phase map from a single EI image. The method is shown to produce artefact-free images, and to combine quantitative accuracy and robustness to noise.
Theory
The EI working principle is schematically presented in Fig. 1(a). The incoming beam is collimated in one direction by a first slit (with apertures typically from a few to a few tens of microns) located before the sample. A second slit, placed in front of the detector, is partially misaligned with respect to the first: as a result, part of the beam is stopped by the slit, while the remaining fraction impinges on the detector. The X-ray refraction introduced by the object leads to a spatial shift of the beam position at the detector plane, the component of which along the direction y orthogonal to the slits is equal to zΔ_y, where z is the object-to-detector distance and Δ_y is the refraction angle along y. This beam shift will cause either an increase or a decrease of the photons counted by the detector, depending on the direction of refraction [see Fig. 1(a)]. In order to obtain a full image of the sample, a scanning of the latter along y needs to be performed. This scanning procedure can be avoided, in the case of a large beam covering the whole object (e.g. from a conventional X-ray tube), by replacing the slits with masks that replicate the EI principle over the entire field of view (Olivo & Speller, 2007).
If the object refraction angle and transmission are approximately constant within the height of the first aperture, the signal recorded by the detector along y is equal to (Diemoz, Hagen et al., 2013)

S(y; y_e) = N T(y) C(y_e + z Δ_y(y)) ≃ N T(y) [C(y_e) + C'(y_e) z Δ_y(y)],    (1)

where N is the total number of photons passing through the first aperture and y represents the sampling position in the object. T(y) = exp[−2k ∫ β(y, z) dz] is the transmission and Δ_y(y) = k^{−1} ∂φ(y)/∂y is the refraction angle, where φ(y) = −k ∫ δ(y, z) dz is the phase shift, k is the X-ray wavenumber and n = 1 − δ + iβ is the complex refractive index. The illumination curve C(y_e) represents the fraction of the unperturbed beam entering the detector aperture, as a function of the position y_e of the latter, and is obtained by scanning one of the slits vertically. An example of an illumination curve [measured at the European Synchrotron Radiation Facility (ESRF), see below for details on the experimental setup] is reported in Fig. 1(b). The right-hand side of equation (1) is obtained through a first-order Taylor expansion, in the approximation that the beam shift due to refraction is small compared with the width of the illumination curve (Munro et al., 2013).
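As a numerical illustration of equation (1), the following sketch evaluates the EI signal for an assumed Gaussian-shaped illumination curve; the curve model and every parameter value are assumptions chosen purely for demonstration, not the settings of the actual setup.

```python
# Numerical sketch of the EI signal model in equation (1).
# The Gaussian illumination curve and all parameter values are assumptions for illustration.
import numpy as np

sigma = 10e-6                                      # assumed width of the illumination curve [m]
C  = lambda ye: np.exp(-ye**2 / (2.0 * sigma**2))  # illumination curve C(y_e)
Cp = lambda ye: -ye / sigma**2 * C(ye)             # its derivative C'(y_e)

N       = 1.0e5          # photons passing through the first aperture
z       = 0.5            # object-to-detector distance [m]
y_e     = sigma          # working point on the slope of the illumination curve
T       = 0.9            # assumed sample transmission
delta_y = 1.0e-6         # assumed refraction angle [rad]; z*delta_y must stay << sigma

# Equation (1): first-order Taylor expansion of C(y_e + z*delta_y).
S = N * T * (C(y_e) + Cp(y_e) * z * delta_y)
print(f"Detected counts: {S:.0f} (without refraction: {N * T * C(y_e):.0f})")
```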
In the direction parallel to the slits, however, the recorded signal is the same as that obtainable in free-space propagation (FSP). If the object attenuation and phase are varying sufficiently slowly and the propagation distance is not too long (near-field regime) (Gureyev et al., 2008), this can be expressed by the transport-of-intensity equation (Teague, 1983). By combining the expressions for the signal in both directions, the normalized signal S_n = S/[N C(y_e)] can be written as

S_n = {T − (z/k) ∇_x·[T ∇_x φ]} ⊛ LSF_x + [C′(y_e)/C(y_e)] (z/k) T ∇_y φ,    (2)

where, for simplicity of notation, we have dropped the dependence of S_n, T and φ upon the object coordinates x and y; ∇_x and ∇_y indicate derivation with respect to x and y, and ⊛ indicates convolution. LSF_x is the line spread function of the imaging system along the x direction, which takes into account the blurring due to both the projected source size and the detector point spread function (Gureyev et al., 2008). In EI, instead, it can be shown that the effect of the source blurring on the signal is already taken into account by the shape of the illumination curve (Diemoz, Hagen et al., 2013), while the detector point spread function does not affect the signal. It can be seen from equation (2) that the signal depends on the two (unknown) functions T and φ, which are in turn dependent on the distributions of β and δ. The number of unknown quantities, however, reduces to one if the ratio δ/β(x, y, z) can be considered constant across the object. Although this simplifying assumption is strictly valid only in the case of a sample made of a single material, extensive use of it has been made in the literature (Paganin et al., 2002;Pavlov et al., 2004;Briedis et al., 2005). This approximation was shown, in fact, to provide good results in several practical cases (Paganin et al., 2002;Pavlov et al., 2004;Briedis et al., 2005;Sanchez et al., 2012), and to be well suited in particular for soft biological tissues, which feature very similar chemical compositions (Olendrowitz et al., 2012;Wernersson et al., 2013). For simplicity, we will first consider the special case of a homogeneous sample with constant values for δ and β, as used by Paganin et al. (2002). Under this assumption, T = exp(−2kβt) and φ = −kδt, where the object thickness function t is now the unknown quantity to be determined. The following results will then be generalized to the more relaxed assumption of constant δ/β(x, y, z). We follow here an approach analogous to those employed by Paganin et al. (2002), Pavlov et al. (2004) and Briedis et al. (2005) for the FSP and ABI XPCi techniques. If we introduce the definition J_EI ≡ z C′(y_e) C⁻¹(y_e), equation (2) can be rewritten as

S_n = {exp(−μt) + zδ ∇_x·[exp(−μt) ∇_x t]} ⊛ LSF_x − J_EI δ exp(−μt) ∇_y t,    (3)

where μ = 2kβ is the linear attenuation coefficient. By noting that exp(−μt) ∇_{x,y} t = −μ⁻¹ ∇_{x,y}[exp(−μt)] and by developing the second and third terms accordingly, equation (3) can be rewritten in a more compact form:

S_n = {exp(−μt) − (zδ/μ) ∇²_x[exp(−μt)]} ⊛ LSF_x + (J_EI δ/μ) ∇_y[exp(−μt)].    (4)

We now take the two-dimensional Fourier transform of both sides of equation (4) and make use of the Fourier derivative theorem, which gives:

F{S_n} = F{exp(−μt)} [MTF_x(k_x)(1 + zδ k_x²/μ) + i J_EI δ k_y/μ],    (5)

where F indicates the two-dimensional Fourier transform, k_x = 2πν_x and k_y = 2πν_y, where ν_x and ν_y are the Fourier space coordinates, and MTF_x(k_x) ≡ F{LSF_x} is the system modulation transfer function along the x direction. A single input image S_n allows solving the above equation for the unknown quantity t,

t = −(1/μ) ln(F⁻¹{F{S_n} / [MTF_x(k_x)(1 + zδ k_x²/μ) + i J_EI δ k_y/μ]}),    (6)

where F⁻¹ indicates the inverse Fourier transform. Equation (6) can be implemented efficiently by means of the fast Fourier transform.
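For concreteness, the following Python sketch shows how a filter of the form of equation (6) can be evaluated with FFTs. It is our own illustration rather than the authors' implementation: the Gaussian model for MTF_x, the clipping guard before the logarithm and all parameter names are assumptions.

```python
import numpy as np

def retrieve_thickness(S_n, pixel_size, wavelength, delta, beta, z, J_EI, sigma_lsf):
    """Single-image EI phase retrieval (a sketch of equation (6)).

    S_n        : normalized image S / [N * C(y_e)], 2D array ordered as (y, x)
    pixel_size : detector pixel size [m]
    wavelength : X-ray wavelength [m]
    delta,beta : decrements of the refractive index n = 1 - delta + i*beta
    z          : object-to-detector distance [m]
    J_EI       : z * C'(y_e) / C(y_e)
    sigma_lsf  : width of an assumed Gaussian line spread function along x [m]
    """
    k = 2.0 * np.pi / wavelength          # X-ray wavenumber
    mu = 2.0 * k * beta                   # linear attenuation coefficient

    ny, nx = S_n.shape
    # angular spatial frequencies k_x, k_y = 2*pi*nu
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    KX, KY = np.meshgrid(kx, ky)

    # Gaussian model for MTF_x = F{LSF_x}; use the measured MTF if available
    MTF_x = np.exp(-0.5 * (sigma_lsf * KX) ** 2)

    # Denominator of equation (6): equals 1 at zero frequency and never vanishes
    denom = MTF_x * (1.0 + z * delta * KX ** 2 / mu) + 1j * J_EI * delta * KY / mu

    filtered = np.fft.ifft2(np.fft.fft2(S_n) / denom).real
    filtered = np.clip(filtered, 1e-12, None)   # guard against log of non-positive values
    return -np.log(filtered) / mu                # thickness map t(x, y)
```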
A similar expression for the projected electron density ρ_{e,p} can be obtained under the more relaxed assumption of a constant δ/β ratio. In fact, by noting that the line integral of δ is equal to ∫dz δ(x, y, z) = 2π k⁻² r_0 ρ_{e,p}(x, y), where r_0 is the classical electron radius (Born & Wolf, 1980), and following an approach analogous to that used in equations (3)-(6), it is found that

ρ_{e,p} = −[k/(4π r_0)] (δ/β) ln(F⁻¹{F{S_n} / [MTF_x(k_x)(1 + (z/(2k))(δ/β) k_x²) + i (J_EI/(2k))(δ/β) k_y]}),    (7)

in which only the ratio δ/β, rather than δ and β separately, enters the retrieval.
Experimental results
We now present two experimental demonstrations of the method, obtained with different setups, to highlight the method's flexibility and wide range of applicability. The first experiment was carried out at the ID17 beamline of the ESRF (Grenoble, France). The source size is about 132 μm (horizontal) × 24 μm (vertical) (full width at half-maximum), and is located approximately 140 m from the experimental hutch. An energy of 27 keV was selected by using a double-crystal Si(111) monochromator in Laue geometry. The tungsten slits are oriented horizontally at a mutual distance of 8.90 m; their apertures are 20 μm and 250 μm, respectively. An illumination level of 50% [i.e. C(y_e) = 0.5] was used for the acquisitions, corresponding to the lower edge of the second slit being aligned with the centre of the first slit [see Fig. 1(a)]. A custom-made phantom consisting of wires of known materials is used to demonstrate the method's quantitative accuracy for a single-material object. The sample is placed on a motorized translation stage 3.85 m upstream of the second slit, and scanned vertically with steps of 20 μm during the acquisition. The images are acquired with a FReLoN CCD camera (Coan et al., 2006), with an effective pixel size of 46 μm × 46 μm and 1 s exposure time.
The 'raw' images containing a mixture of attenuation and refraction contrast are shown in Figs. 2(a) and 2(d). The first wire is made of polyethylene terephthalate (PET) and has a diameter of 500 μm; the second is made of polyether ether ketone (PEEK) and has a diameter of 200 μm. It can be noted that the amplitude of the FSP signal is significantly smaller than that of the EI signal, due primarily to the relatively large pixel size, which blurs the FSP signal [cf. equation (2)]. For each of the two images, equation (6) was used to retrieve the object thickness map. The following nominal values were considered in the calculation: δ = 4.09 × 10⁻⁷ and β = 7.83 × 10⁻¹¹ for PET, δ = 3.92 × 10⁻⁷ and β = 6.91 × 10⁻¹¹ for PEEK (Dejus & Sanchez del Rio, 1996). The retrieved thickness maps for the PET and PEEK wires are shown in Figs. 2(b) and 2(e), respectively, and the corresponding vertical profiles in Figs. 2(c) and 2(f). The expected thickness profiles are also shown for comparison: they assume perfectly cylindrical wires with a diameter equal to the nominal one provided by the supplier.
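The expected profiles follow directly from the nominal wire geometry. The short sketch below, which we add here purely as an illustration (it is not the authors' code), computes the projected thickness of an ideal cylinder on the detector pixel grid.

```python
import numpy as np

def cylinder_thickness(y, diameter, centre=0.0):
    """Projected thickness of an ideal cylindrical wire along the vertical coordinate y."""
    r = diameter / 2.0
    dy = np.asarray(y, dtype=float) - centre
    t = np.zeros_like(dy)
    inside = np.abs(dy) < r
    t[inside] = 2.0 * np.sqrt(r**2 - dy[inside]**2)
    return t

# e.g. a 500 um PET wire sampled on the 46 um pixel grid
y = np.arange(-300e-6, 300e-6, 46e-6)
expected_profile = cylinder_thickness(y, diameter=500e-6)
```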
There is reasonable agreement between retrieved and nominal thickness for both wires, and good image quality is obtained for the retrieved images in Figs. 2(b) and 2(e). In particular, the vertical streak artefacts visible when the phase map is obtained from integration of the refraction image are suppressed. This can be mainly attributed to the additional filtering along x present in equation (6): although in this experimental layout the FSP signal along x is limited, the filter effectively enforces consistency between columns, thus greatly reducing the vertical streak artefacts.
The second experiment demonstrates the applicability of the method under very different experimental conditions, and its benefits for the imaging of more complex biological samples. It was performed at beamline I13 (coherence branch) of the Diamond Light Source (Didcot, UK) using an X-ray energy of 9.7 keV. This energy is selected through a horizontally deflecting Si(111) pseudochannel-cut crystal monochromator. The source full width at half-maximum is equal to about 400 μm (horizontal) × 13 μm (vertical); the experimental hutch is located about 220 m from the source. The first slit, made of gold electroplated on a silicon substrate, is oriented horizontally and has an aperture equal to 3 μm. In this experiment, the method described by Vittoria et al. (2014) was used, where the second slit is replaced by a high-resolution detector. This is a PCO Edge camera, consisting of a scintillator, magnifying visible light optics and an sCMOS sensor: it was operated with an 8× magnification, which provides an effective pixel size of 0.8 μm. A 'virtual' edge is created through multiplication of the acquired frame by a Heaviside function, chopping the illuminated area in half along the vertical direction. The distance between sample slit and sample was equal to 5 cm, while the sample-to-detector distance was 30 cm. The sample is a flower petal with superimposed pollen grains. The vertical scan step was 1.6 μm, and the exposure time 7 s. In this case, the exact sample materials are unknown, and the retrieval of the projected electron density ρ_{e,p} was thus performed by using δ/β as a tunable parameter. A δ/β ratio corresponding to that of water was first assumed (δ = 2.46 × 10⁻⁶ and β = 5.94 × 10⁻⁹) (Dejus & Sanchez del Rio, 1996), then adjusted to obtain the best observable image quality (achieved with a δ/β ratio equal to about 0.8 times that of water). The mixed image and extracted map for ρ_{e,p} are presented in Figs. 3(a) and 3(b). It can be seen that, owing to the very small pixel size employed, the amplitudes of the FSP and EI signals in the mixed image (respectively along the horizontal and vertical directions) are comparable in this case. Indeed, under conditions of very high coherence and very small pixel size the advantages of EI over FSP tend to be reduced. The pollen grains are clearly visible in the left region of the images. Cells lining up along the veins of the petal can also be seen. They show up as dark spots in the ρ_{e,p} image, because their density is lower than that of the surrounding tissue [see enlarged region of Fig. 3(b)]. Although in this case the sample materials do not strictly satisfy the assumption of a constant δ/β ratio, meaning that the estimated ρ_{e,p} values should be interpreted with caution, the obtained map is free from image artefacts and useful for interpreting the complex structure of the sample. In particular, it provides complementary information to the mixed one. While the latter is superior in terms of visualization of the smaller structures, corresponding to higher frequencies, low object spatial frequencies are better highlighted in the former. This can also prove useful for subsequent processing, such as segmentation.
Finally, a test of the method's robustness with respect to noise was carried out. Poisson noise corresponding to statistics of only 10 photons per pixel (standard deviation ≈30%) was added numerically to the image of the petal before the retrieval. Despite the very high noise in the input image, the retrieved map of ρ_{e,p} still maintains its ability to correctly visualize most of the sample structures, as seen in Fig. 4(a). The difference between the retrieved maps obtained with low (Fig. 3b) and high (Fig. 4a) levels of noise is presented in Fig. 4(b) (note that a different colour scale has been used since the values are small).
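A sketch of how such a noise test can be reproduced is given below; it assumes the illustrative retrieve_thickness helper defined earlier and an arbitrary photon budget, neither of which comes from the original work.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_noise(image, photons_per_pixel=10):
    """Rescale to the chosen photon budget, draw Poisson counts, rescale back.

    With 10 photons/pixel the relative noise is about 1/sqrt(10), i.e. ~30%.
    """
    counts = rng.poisson(np.clip(image, 0, None) * photons_per_pixel)
    return counts / photons_per_pixel

# noisy = add_poisson_noise(S_n)
# t_noisy = retrieve_thickness(noisy, ...)  # compare against the low-noise retrieval
```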
The high stability with respect to noise, apparent from Figs. 4(a) and 4(b), can be explained by the fact that equations (6) and (7) behave as low-pass filters, thus largely suppressing high-frequency noise. At the same time, however, low-frequency artefacts are also limited since the filter never diverges (in particular at the zeroth frequency), leading to all spatial frequencies being well behaved. It is worth noting that this property is a direct result of exploiting both attenuation and refraction information from the input image (the attenuation signal effectively acts as a regularization term, by imposing point-wise consistency between absorption and retrieved ρ_{e,p} values).
Conclusions
The method proposed in this article has been shown to provide retrieved images of high quality, to be robust against noise and free from the streak artefacts often encountered with differential phase methods. Only one input image is required, which is advantageous in terms of reduced exposure time and radiation dose to the sample. If the ratio δ/β is approximately constant and its value is known, fairly accurate quantitative information can be extracted, i.e. the projected electron density and, if δ and β are constant, the object thickness. Our test on a biological object shows also that the assumption of a constant δ/β does not have to be rigidly satisfied for high-quality images to be obtained. Another significant advantage over integration of differential phase images is that the method can be used on objects larger than the field of view, as prior knowledge of the phase values at the image boundaries is not required.
The developed method can be applied to both planar and CT imaging over a wide range of experimental conditions. Future work will be dedicated to extending its use to EI laboratory setups employing polychromatic beams from conventional X-ray tubes.
|
2016-05-16T03:01:45.312Z
|
2015-06-25T00:00:00.000
|
{
"year": 2015,
"sha1": "496f51c6d310117eefe471931f4c3ef02de88c00",
"oa_license": "CCBY",
"oa_url": "http://journals.iucr.org/s/issues/2015/04/00/pp5069/pp5069.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "01885715e742b637673a7895843d22bf882fa493",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|
15772029
|
pes2o/s2orc
|
v3-fos-license
|
Non-patient related variables affecting levels of vascular endothelial growth factor in urine biospecimens
Vascular endothelial growth factor (VEGF) is an angiogenic protein proposed to be an important biomarker for the prediction of tumour growth and disease progression. Recent studies suggest that VEGF measurements in biospecimens, including urine, may have predictive value across a range of cancers. However, the reproducibility and reliability of urinary VEGF measurements have not been determined. We collected urine samples from patients receiving radiation treatment for glioblastoma multiforme (GBM) and examined the effects of five variables on measured VEGF levels using an ELISA assay. To quantify the factors affecting the precision of the assay, two variables were examined: the variation between ELISA kits with different lot numbers and the variation between different technicians. Three variables were tested for their effects on measured VEGF concentration: the time the specimen spent at room temperature prior to assay, the addition of protease inhibitors prior to specimen storage and the alteration of urinary pH. This study found that VEGF levels were consistent across three different ELISA kit lot numbers. However, significant variation was observed between results obtained by different technicians. VEGF concentrations were dependent on time at room temperature before measurement, with higher values observed 3–7 hrs after removal from the freezer. No significant difference was observed in VEGF levels with the addition of protease inhibitors, and alteration of urinary pH did not significantly affect VEGF measurements. In conclusion, this determination of the conditions necessary to reliably measure urinary VEGF levels will be useful for future studies related to protein biomarkers and disease progression.
Introduction
Evidence for the role of angiogenesis in cancer biology was first suggested by Judah Folkman, who found that solid tumours remained dormant and limited in size in the absence of neovascularization [1]. This observation has driven further research directed at targeting angiogenesis as a means of halting tumour growth, which has resulted in the current therapeutic use of several angiogenesis inhibitors as anticancer agents [2][3][4][5][6]. Recently, several studies have investigated the quantification of angiogenic proteins in urine for use in cancer diagnosis and prognosis [7][8][9][10]; among these proteins, VEGF has received particular attention [11][12][13][14]. If urinary levels of VEGF can be accurately quantified, evaluation of this protein may potentially provide a convenient and non-invasive predictor of tumour behaviour and the overall angiogenic state of the host. However, as with any marker evaluated in biological specimens, the stability of VEGF in the urine between the time of sample collection and analysis is of concern. Often, initial studies of a marker report promising results but subsequent confirmatory studies of the same candidate marker conflict, potentially due to the use of unstandardized methods that lack reproducibility [15]. Few studies have investigated the role of variables that could potentially affect VEGF measurements in urine [16]. In this study, we sought to examine several variables that may affect the biomolecular profile of urine specimens. Whereas Hayward et al. (also in this issue) examined potential variables affecting measured VEGF levels prior to long-term freezer storage, we focused on variables after collection as well as potential sources of error related to the measurement of urinary VEGF levels by enzyme-linked immunosorbent assays (ELISA). We also chose to focus on potential causes of diminished reproducibility, including an evaluation of inter-assay precision, determined by the variation in VEGF sample results obtained from three different ELISA kit lot numbers, and variations in the results obtained with identical ELISA lots when assays were performed by two lab technicians. We then examined variables after collection that had been suggested to affect levels of VEGF in urine biospecimens, such as the time the samples were left at room temperature prior to assay, the addition of protease inhibitors prior to storage and the modification of the pH of the sample.
Specimen collection
Human urine samples were collected from male and female adult patients receiving definitive radiation therapy for glioblastoma multiforme. In each case, samples were collected in accordance with approved protocols requiring informed consent. Patients were instructed to provide fresh, midstream urine specimens of at least 5 ml at three time points: (i ) before receiving any radiation therapy, (ii ) on the last day of radiation therapy and (iii ) one month following completion of their therapy.
Urine processing and VEGF analyses
After collection, urine specimens were divided into 4 ml aliquots and stored at -20°C. For each experiment, specimens were randomly selected from the cohort, thawed, divided into smaller aliquots for duplicate or triplicate analysis and stored at -20°C until analysis. VEGF levels were determined using a commercially available chemiluminescent ELISA kit (QuantiGlo ® ; R&D systems, Minneapolis, MN; http://www.rndsystems.com/pdf/qve00b.pdf) according to the manufacturer's instructions.
Variation in VEGF levels between ELISA kit lot numbers
Eleven randomly selected samples were run in triplicate using three different ELISA kits with varying lot numbers (236856, 238697, 239328). The reproducibility of results obtained from the three kit lots was evaluated.
Intra-technician reproducibility
Ten urine samples were randomly selected from the cohort. Two technicians, one with significant experience running the assay and the other with less experience, independently ran the 10 samples, in triplicate on the same plate. Measured VEGF levels were compared between the two technicians to determine intra-technician reproducibility.
Time of thaw
Nine samples were randomly selected from the cohort to evaluate the effects of time of thaw. For each sample, aliquots were thawed at room temperature for five different periods of time before VEGF measurement: 1, 3, 5, 7 and 24 hrs.
Evaluation of protease inhibition in urine samples
Thirteen randomly selected patient urine samples were divided into 200 µl aliquots and stored at −20°C overnight with or without the addition of a protease inhibitor. For samples with protease inhibitors, an appropriate mass of one mini, ethylenediaminetetraacetic acid (EDTA)-free tablet (Roche Applied Science, Mannheim, Germany) was added. Samples, stored at −20°C overnight, were run in triplicate following 3 and 24 hrs of sitting at room temperature on the lab bench. To confirm the protease inhibitor was not interfering with molecular activities of the ELISA assay, prepared VEGF standards (0, 6.4, 32, 160, 800, 4000 and 10,000 pg/ml) were run in duplicate in the presence or absence of a protease inhibitor. Differences in relative light units were determined by luminometry.
Effect of altering urine pH
The pH of four randomly selected patient urine samples was measured by a precision pH meter and microprobe (Accumet, Fischer Scientific) standardized for temperature. Alterations in pH were accomplished with the addition of 1 N NaOH or 0.5 N HCl. 200 µl aliquots of samples were slowly titrated to pH 4, 5, 6 and 7 with the addition of the appropriate acid or base, stored overnight at −20°C, and analysed the next morning for the effect of pH on measured urinary VEGF levels.
Statistical analyses
For each sample, a mean VEGF level and standard error were calculated. Statistical significance was determined using paired t-tests. Results were considered to be significant at P < 0.05. In addition, each experiment was performed in duplicate and the results were compared for further insight into the reproducibility of the assay.
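As an illustration of these analyses, the sketch below computes per-sample coefficients of variation and a paired t-test on triplicate readings measured under two conditions (for example, two kit lots or two technicians). All numbers are invented for the example, and the availability of SciPy is assumed.

```python
import numpy as np
from scipy import stats

# Triplicate VEGF readings (pg/ml) for the same samples under two conditions (illustrative values)
condition_a = np.array([[310, 295, 305], [120, 118, 131], [512, 498, 505]], dtype=float)
condition_b = np.array([[298, 301, 315], [117, 125, 122], [489, 510, 502]], dtype=float)

mean_a, mean_b = condition_a.mean(axis=1), condition_b.mean(axis=1)
cv_a = condition_a.std(axis=1, ddof=1) / mean_a * 100     # coefficient of variation (%)

# Paired t-test on per-sample means, significance threshold P < 0.05
t_stat, p_value = stats.ttest_rel(mean_a, mean_b)
print(f"CV per sample (%): {cv_a.round(1)}, paired t-test P = {p_value:.3f}")
```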
Effect of ELISA kit lot numbers on measured urinary VEGF levels
No significant variation was observed in urinary VEGF levels measured across three different lot numbers. The coefficients of variation (CVs) of 11 identical samples tested in triplicate ranged approximately 4-18%. Only three samples showed a CV of greater than 10% between lot numbers ( Fig. 1 and Table 1). The samples with high variability had levels in the mid-range of observed values, well within the range of the standard curve generated from the kit supplied standards.
Experience of technician improved the reproducibility of urinary VEGF measurements
To evaluate the reproducibility of the VEGF ELISA kit when the assay is performed by an experienced and an inexperienced laboratory technician, each technician ran triplicate samples in identical assay kits. Significant variations in VEGF concentrations were observed when VEGF levels were measured in ten identical samples by the two technicians (Fig. 2). For 9 of the 10 samples, the standard deviations of the experienced technician were lower than the standard deviations of the inexperienced technician (P = 0.041, paired t-test, one-tailed). In two replicate experiments, the more experienced technician again had a significantly higher reproducibility than the less experienced one.
Effect of increasing time of thaw on measured urinary VEGF levels
We next evaluated the effect of the amount of time that elapses between removing the samples from the freezer and performing the assay.
Effect of protease inhibitors on VEGF degradation between 3 and 24 hours
To determine whether the reduction of VEGF levels between 3 and 24 hrs was due to the presence of urinary proteases, VEGF levels in samples with and without protease inhibitors were compared following 3 and 24 hrs of sitting on the bench top (Fig. 4A and B). The effect of altering urinary pH on measured VEGF levels is shown in Fig. 5.
Discussion
We investigated several potential variables that were suggested to affect the reproducibility and accuracy of the measurement of VEGF in urine samples obtained from patients receiving radiation treatment for glioblastoma multiforme. We suggested that differences in specimen characteristics, specimen processing and specimen storage may alter the results obtained. In addition to these specimen characteristics, we wished to evaluate the importance of technician experience and the use of ELISA kits from different lots. As urine samples may be collected over a period of years, the inter-lot variability between human VEGF ELISA kits was of primary importance. Kits of different lot numbers may have different concentrations of antibody in coated wells, thereby altering the antigen-antibody binding efficiency between assays. However, our findings showed that the variability between lot numbers was low enough to justify using human VEGF-ELISA kits of differing lots for long-term prospective studies. CVs of the 11 identical samples tested in triplicate were in accordance with the CVs obtained by the manufacturer (range 4-10%). Thus, an inter-lot reproducibility of 85-90% can be expected. Variations in VEGF ELISA results may also be dependent on the technician who is performing the experiment [17]. The results of this study showed that a more experienced technician may be able to perform the assay with higher reproducibility than a less experienced one. One possible solution to overcome this is the use of automated liquid handling, as pipetting variability is an obstacle to achieving intra-assay reproducibility in low-volume reactions.
Of additional importance is the time that elapses after urine biospecimens are removed from the freezer and left on the lab bench before assaying the biomolecules. As supersaturated urine specimens cool to ambient temperature, precipitation of calcium and phosphate, uric acid and proteins may occur. Additionally, urine may contain bacterial growth, with accompanying proteolysis that can increase over time [18]. Therefore, it is best to standardize the time the sample spends at room temperature to minimize protein degradation or loss of protein via precipitation. Our results show that a thaw time of 3-7 hrs is optimal for obtaining maximal and consistent VEGF levels. For the purpose of our experiments, the sample thaw period was standardized to 3 hrs.
The addition of a protease inhibitor did not appear to overcome the instability of VEGF between 3 and 24 hrs, suggesting that VEGF degradation is not affected by the presence or absence of proteases within this time frame. While other authors have found that the addition of protease inhibitors enhances the recovery of urinary-associated proteins, this effect appears only when the inhibitor is added soon after collection [19].
Based on our findings, VEGF does not appear to be sensitive to neutral or acidic pH, as its stability was maintained at a pH of 4, 5, 6 and 7. Similarly, Klasen et al. [20] did not find any relationship between pH and the urinary protein levels of albumin, transferrin and α1-microglobulin. In addition, the pH level of the urine did not affect the ability of the ELISA assay to detect VEGF levels, an important result as pH is known to alter the avidity of antibody-epitope interactions [21].
As more studies collect and archive urine samples to measure protein levels, it is important to assess the impact of urine preservation and storage methods on the levels of these molecules. Based on results reported here, urinary VEGF levels can be measured for long-term prospective studies since variation between ELISA kit lot numbers is insignificant. Intra-assay precision appears to be greater when a more experienced technician performs the assay. As a standard practice we suggest that, for prospective studies, the same technician perform the ELISA assay for each investigation. In addition, the time that the urine sample spends at room temperature on the bench top, between removal from the freezer and inclusion in the assay, should be standardized to 3-7 hrs. The addition of protease inhibitors during the sample thaw period does not appear to be essential to maintain optimal VEGF integrity. Additionally, the sample pH does not need to be standardized if samples are between a pH of 4 and 7.
While these variables have been evaluated for the measurement of VEGF levels in urine by ELISA, it is possible that these same variables may have different effects in other biospecimens and for different biomarkers. It is important to fully evaluate potential causes of measurement variability prior to initiating large studies to evaluate a biomarker. Finally, it is of critical importance to standardize storage and assay performance variables to ensure maximal reproducibility.
|
2016-05-15T17:15:35.412Z
|
2008-08-01T00:00:00.000
|
{
"year": 2008,
"sha1": "c7ad6d52148a7ceabc64bd1de1ff923bcd550dd3",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc3865669?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "c7ad6d52148a7ceabc64bd1de1ff923bcd550dd3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
220836706
|
pes2o/s2orc
|
v3-fos-license
|
Estimating the Growing Stem Volume of the Planted Forest Using the General Linear Model and Time Series Quad-Polarimetric SAR Images
Increasing the area of planted forests is important for compensating the loss of natural forests and slowing down global warming. Forest growing stem volume (GSV) is a key indicator for monitoring and evaluating the quality of planted forests. To improve the accuracy of GSV estimation for a planted forest located in south China, four L-band ALOS PALSAR-2 quad-polarimetric synthetic aperture radar (SAR) images were acquired from June to September with short intervals. Polarimetric characteristics (un-fused and fused) derived by the Yamaguchi decomposition from time series SAR images with different intervals were considered as independent variables for the GSV estimation. Then, a general linear model (GLM) based on the exponential distribution was proposed to retrieve the stand-level GSV in the plantation. The results show that the un-fused power of double-bounce scattering and four fused variables derived from a single SAR image are highly sensitive to the GSV, and that these polarimetric characteristics derived from the time series images contribute more significantly to an improved estimation of GSV. Moreover, compared with the GSV estimated using the semi-exponential model, the employed GLM, with fewer limitations and a simpler algorithm, has a higher saturation level (close to 300 m³/ha) and a higher sensitivity to high forest GSV values. Furthermore, by reducing the external disturbance with the help of temporal averaging, the accuracy of the estimated GSV is improved using fused polarimetric characteristics, and the estimation accuracy of forest GSV improves as the number of images increases. Using the fused polarimetric characteristic (Dbl×Vol/Odd) and the GLM, the minimum RRMSE was reduced from 33.87% for a single SAR image to 24.42% for the time series SAR images. This implies that the GLM is better suited to polarimetric characteristics derived from time series SAR images and has greater potential to improve the estimation of planted forest GSV.
Introduction
With the growing area of planted forests, and given the decrease of natural forests, planted forests are of great significance for reducing carbon dioxide emissions and slowing down global warming. Understanding how to evaluate and properly manage planted forests is therefore becoming increasingly important [1][2][3][4]. The growing stock volume (GSV), defined as the total stem volume of living trees, is a basic key indicator for monitoring the planted forest resource at regional scales [4][5][6]. Traditionally, the GSV is derived by measuring heights and diameters at breast height in ground plots. However, such ground surveys can hardly be carried out in mountainous regions and are time-consuming, labor-intensive and costly [3,4]. Optical remote sensing techniques have been widely used to monitor the forest resource by establishing the relationship between in-situ GSV and characteristics derived from remote sensing images [5,6]. Optical images provide spectral bands that can discriminate forest from other land-cover types [5,7]. However, because of clouds, fog and mist, high-quality optical images are difficult to acquire in mountainous areas, which are the major distribution regions of planted forests. Microwave remote sensing technology, which is less affected by weather conditions, has the ability to measure the forest GSV owing to its wavelength-dependent penetration depth [8][9][10]. Moreover, with the integration of synthetic aperture radar and polarimetric techniques, polarimetric properties related to the structural characteristics of the forest canopy can be detected, and the polarimetric information derived from quad-polarimetric SAR images has great potential for monitoring the GSV in these areas [9][10][11][12].
The model describing the relationship between forest GSV and polarimetric information is critical for forest GSV estimation [7,9,12]. Physical models have many parameters and are too complex for estimating the forest GSV [13,14]. Empirical models, including the first-order linear model, the multi-variable linear model and the linear model based on the allometric equation, are often employed to describe the relationships between backscattering coefficients and GSV [8,15,16]. However, they omit the scattering properties related to forest structure parameters. Semi-empirical models, considering the backscattering of ground and forest, have also been applied to estimate the stand-level GSV using X- and C-band SAR images [17][18][19][20]. Further studies found that the polarimetric characteristics obey the exponential distribution [19,20], and a semi-exponential model, derived from the simplified water cloud model (WCM) based on the exponential distribution, was proposed to express the interaction between polarimetric characteristics and GSV [21][22][23][24][25][26].
Compared with physical and empirical models, the semi-exponential model contains fewer parameters and is convenient for estimating the forest GSV using different kinds of polarimetric information, including backscattering coefficients and the powers of scattering from polarimetric decomposition [23,25]. An extra advantage of the semi-exponential model is that the saturation level is treated as a parameter and solved for while estimating GSV [7,21,27]. However, non-linear algorithms, which require suitable initial parameter values to reach the global optimum, are necessary to accurately estimate the forest GSV. Moreover, because of its low saturation level [7,16], the semi-exponential model is insensitive to high GSV values. Therefore, models with fewer limitations and higher saturation levels should be considered for mapping the GSV of plantations.
Besides the model, the sensitivity of the polarimetric characteristics derived from quad-polarimetric SAR images also determines the accuracy of forest GSV mapping [7,27]. There are several kinds of polarimetric characteristics related to GSV [7,[28][29][30]. The backscattering coefficients from dual-polarization or quad-polarization SAR images have been directly employed to retrieve the forest GSV [18,23,[31][32][33][34][35], and the obtained results are in good agreement with the measured GSV in boreal forests [18,23,[31][32][33][34][35][36]. However, without an explicit physical meaning in terms of scattering mechanisms, the backscattering coefficients tend to reach saturation at low forest GSV [16]. The coherence of SAR images from different satellites (bands and polarizations) also has great potential for estimating the forest GSV because of its sensitivity to the height of the scatterers [22,23,37]. However, it is difficult to acquire appropriate polarimetric SAR images, owing to limitations of the spatial baseline, the temporal baseline and environmental factors, especially in forest regions. Polarimetric decomposition, which extracts physical parameters from SAR images without any ground measurements, is another source of polarimetric characteristics [13,14]. The powers of scattering derived from various polarimetric decomposition approaches describe the process of scattering and have an explicit physical meaning related to the scatterers [28,29]. In previous works, model-based decomposition approaches have been proposed for polarimetric decomposition, such as the Freeman three-component decomposition, the Yamaguchi four-component decomposition, the Arii three-component decomposition and the Neumann decomposition [28,29,[38][39][40]. Recently, several investigations have been undertaken to further improve these methods by including more sophisticated vegetation volume scattering models or by limiting negative scattering power effects, such as the four-component rotated decomposition, the generalized four-component unitary decomposition and the stochastic-distance-based four-component decomposition [40][41][42]. Furthermore, a hybrid model-based and eigenvalue/eigenvector-based polarimetric decomposition technique has also been proposed to derive the powers of scattering in vegetation-covered regions [43]. Due to their simplicity and computational ease, the Yamaguchi decomposition and its improved algorithms are widely applied to estimate forest GSV [13,36,[39][40][41][42].
Moreover, previous studies have demonstrated the advantages of using time series SAR images for GSV estimation over single polarimetric SAR images [17,20,[44][45][46]. The methods for estimating GSV from time series SAR images, such as the regression model, average, and weighted average, often take the estimated GSVs from single SAR images as independent variables [47][48][49]. However, the sensitivity between the polarimetric characteristics and the forest GSV is affected by external disturbance, and the GSVs estimated from single SAR images are also affected by the quality of polarimetric characteristics. In addition, the image acquisition intervals also have influences on the results. Large intervals, ranging from several months to two years, lead to great uncertainty in the forest scatterings [47,48]. Therefore, in order to reduce the influence of season on forest GSV estimation, the intervals of the acquired time series images should be considered.
In this study, to accurately retrieve the planted forest GSV, four ALOS PALSAR-2 polarimetric SAR images of the planted forest in Youxian, China, were acquired during the forest growing season with intervals ranging from 15 to 42 days. Five un-fused and fused polarimetric characteristics were derived by the Yamaguchi decomposition from single and time series SAR images. Then, to overcome the disadvantages of the semi-exponential model, we developed a general linear model (GLM) for mapping stand-level forest GSV using these un-fused and fused polarimetric characteristics. The sensitivity and accuracy of the new model in estimating the GSV using time series polarimetric SAR images were analyzed in our study area.
Study Area
The study area (10,122.6 ha) is located in Youxian County, Hunan Province, China (Figure 1a). Plantation forest is dominant in the forest farm, which is classified as an ecological welfare forest farm, and the elevation varies from 115 m to 1270 m. The major tree species include Chinese fir (Cunninghamia lanceolata), Pinus massoniana Lamb, bamboo, Liriodendron chinense and Cinnamomum camphora, and planted Chinese fir is the dominant species. The forest coverage is 86.24%, with a total GSV close to 879,705 m³ in 2014.
Ground Data
Based on the data from forest resource inventory and planning, most of the planted Chinese fir forests are located in the northern and eastern parts of the forest farm. Considering the age groups and the distribution of GSV in the planted forest, 50 plots of planted Chinese fir were measured between 2016 and 2017 using stratified random sampling (Figure 1b). Among the ground-measured plots, the percentages of young, immature and mature forests are 10%, 52%, and 38%, respectively. The percentage of mixed sample plots was 12%; the other tree species in the mixed plots were broad-leaved species (Liriodendron chinense and Cinnamomum camphora) and Masson pine, but the percentages of broad-leaved species and Masson pine were less than 8%. Plot sizes of 30 m × 30 m or 20 m × 20 m were used depending on the local terrain conditions. Moreover, the corner points and central points of the investigated sample plots were surveyed using the global positioning system (GPS).
In total, 4935 trees were measured for the inventory in the 50 plots. Trees with a DBH smaller than 5 cm were not measured. Two parameters of each tree, the height and the diameter at breast height (DBH), were measured to calculate the stem volume of each tree, and the GSV of each plot was then retrieved by [45,46]

GSV = Σ_i f · g_i · H_i,

where GSV is the stem volume of the plot, g_i is the cross-sectional area of each tree derived from the measured DBH, H_i is the height of each tree, and f is the trunk taper coefficient of planted Chinese fir, which is related to height and DBH [43,44]. For all plots, the relationship between the average height and DBH of each plot is illustrated in Figure 2b; the maximum average height and DBH are 20.5 m and 29.48 cm, respectively (Figure 2a). Moreover, the GSV is close to 63 m³/ha for young stands and the average GSV of over-mature forests is up to 322.59 m³/ha. The relationship between the GSV and measured DBH is illustrated in Figure 2b.

Table 1 shows the ALOS-2 PALSAR-2 L-band data in full polarimetry (http://global.jaxa.jp/) used for estimating the GSV in this study. Four single look complex (SLC) quad-polarimetric L-band SAR (HH + HV + VH + VV) images at level 1.1 were acquired at 04:22 am from 30 June to 22 September 2016, and the intervals of the acquired images range from 14 days to 42 days. The off-nadir angle is 38.99 degrees on the descending orbit, and the pixel resolution in the azimuth direction and the slant range direction is 2.83 m and 2.86 m, respectively (Figure 2a). In order to geocode the SAR images, the ASTER GDEM and the slope of the study area (Figure 2b), with a spatial resolution of 30 m, were employed in the subsequent data processing.

Based on information from the local weather forecast, the weather conditions at the acquisition times of the four quad-polarimetric SAR images were different (Table 2). For the images acquired on 30 June and 14 July the wind direction was south with a grade of less than 3, whereas for the images acquired on 25 August and 22 September the wind direction was north with a grade of 3 to 4. It was cloudy and clear at the acquisition times on 30 June and 22 September, respectively, and there were showers for the images acquired on 14 July and 25 August. Moreover, for the images acquired on 14 July and 25 August, it had rained for three days before the acquisition time, and the ground moisture was higher than that on 30 June and 22 September.
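Returning to the plot-level GSV computation described above, the following minimal sketch illustrates the calculation. It is our own illustration: the constant form factor stands in for the height- and DBH-dependent taper coefficient used in the paper, and all measurements are invented.

```python
import numpy as np

def plot_gsv(dbh_cm, height_m, form_factor, plot_area_ha):
    """Plot-level growing stock volume (m^3/ha) from per-tree DBH and height.

    Follows GSV = sum_i f * g_i * H_i, with g_i the basal area of tree i.
    """
    dbh_m = np.asarray(dbh_cm, dtype=float) / 100.0
    g = np.pi * (dbh_m / 2.0) ** 2                  # basal area of each tree [m^2]
    stem_volume = form_factor * g * np.asarray(height_m, dtype=float)
    return stem_volume.sum() / plot_area_ha          # m^3/ha

# Example: a 30 m x 30 m plot (0.09 ha), illustrative measurements
print(plot_gsv([18.5, 22.0, 15.2], [14.0, 16.5, 12.3], form_factor=0.45, plot_area_ha=0.09))
```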
Pre-Processing of the Polarimetric SAR Images
To retrieve the forest GSV, polarimetric calibration was first performed to reduce the impact of Faraday rotation [5,38,41,47]. Then, the errors induced by terrain slope were corrected by polarization orientation angle compensation and terrain radiometric correction with an external digital elevation model (DEM) [5,[49][50][51][52][53][54][55]. After that, a Lee filter (7 × 7) was adopted to retrieve homogeneous pixels and reduce the errors caused by speckle noise [28]. Finally, the coherency matrix was formed by spatial averaging, from which the polarimetric characteristics were extracted in the subsequent processing [28,51].
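As a simplified illustration of the multilooking step, the sketch below applies a plain boxcar average to each element of a coherency-matrix stack; note that the paper uses a 7 × 7 Lee filter, which additionally adapts to local heterogeneity, so this is only a rough stand-in.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def boxcar_coherency(T3, window=7):
    """Spatial averaging of a complex 3x3 coherency matrix stack of shape (rows, cols, 3, 3)."""
    out = np.empty(T3.shape, dtype=complex)
    for i in range(3):
        for j in range(3):
            out[..., i, j] = (uniform_filter(T3[..., i, j].real, size=window)
                              + 1j * uniform_filter(T3[..., i, j].imag, size=window))
    return out
```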
Yamaguchi Decomposition and Polarimetric Characteristics
Polarimetric characteristics are commonly used to describe the scattering features related to the forest structure parameters, such as height, DBH and age of trees. Therefore, retrieving the polarimetric characteristics from the coherency matrix is a crucial step for forest GSV mapping.
Yamaguchi proposed a four-component polarimetric decomposition method, which retrieves the powers of surface scattering (Odd), double-bounce scattering (Dbl), volume scattering (Vol) and helix scattering (Hlx) [28,52] without using any ground measurements. Normally, the powers of these scattering mechanisms are related to forest structure parameters, the incidence angle, the wavelength, the terrain, and so on. The powers of the four components derived from the coherency matrix satisfy

P_t = P_Odd + P_Dbl + P_Vol + P_Hlx,

where P_t is the span of the backscattering, P_Odd is the component of surface scattering from non-forest ground, P_Dbl is the component of double-bounce scattering from a dihedral corner reflector, such as tree trunks and the interaction between trunks and big branches, P_Vol is the component of volume scattering from the forest canopy modelled as a number of randomly oriented dipoles, and P_Hlx is the component of helix scattering related to buildings. The powers P_Odd, P_Dbl and P_Vol can be considered as un-fused polarimetric characteristics for estimating the forest GSV. Some fused polarimetric characteristics formed by multiplication or division of the polarimetric components are also used to map forest GSV, including P_Dbl/Odd, P_Vol/Odd, P_Dbl×Vol and P_Dbl×Vol/Odd. The performances of the un-fused and fused polarimetric characteristics in forest GSV mapping are analyzed in Section 4.2.
Time Series Polarimetric Characteristics
The powers of scattering from different SAR images acquired with a short interval can be quite different because of disturbances from wind, soil moisture and speckle noise. However, the forest GSV cannot change greatly over a short interval, even of several months. Therefore, it is rather difficult to evaluate the reliability of the forest GSV retrieved from only a single SAR image.
Increasing the number of SAR images acquired in the same period, i.e., using time series images, is an effective way to guarantee the reliability of the mapped forest GSV. After pre-processing, the time series polarimetric characteristics extracted by the Yamaguchi decomposition were geocoded with the external DEM and then registered to a reference image. After that, the un-fused and fused polarimetric characteristics were calculated by temporal averaging at the pixel scale. The fused polarimetric characteristics were constructed by combining the powers of surface scattering, double-bounce scattering and volume scattering. Due to the mismatches between the measured plots and the pixels, the polarimetric characteristics of each plot were extracted by scale matching using a spatial average, as sketched below. Figure 3 shows the flowchart of extracting the time series polarimetric characteristics.
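A minimal sketch of these steps is given below. It is an illustration under assumptions of our own (array layout, window size, the small epsilon guard), not the authors' processing chain.

```python
import numpy as np

def fuse_time_series(stack):
    """Temporal average of a co-registered stack of decomposition powers.

    stack: array of shape (n_images, n_rows, n_cols), e.g. P_Dbl from each acquisition date.
    """
    return stack.mean(axis=0)

def fused_characteristic(p_dbl, p_vol, p_odd, eps=1e-10):
    """Example fused variable Dbl x Vol / Odd used in the paper."""
    return p_dbl * p_vol / (p_odd + eps)

def plot_value(image, row, col, window=7):
    """Spatial average over a window centred on a plot, matching plot size with pixels."""
    half = window // 2
    return image[row - half:row + half + 1, col - half:col + half + 1].mean()
```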
The General Linear Model for GSV Estimation
Theoretically, the model for GSV estimation should describe the relationships between the polarimetric characteristics and forest GSV as simply as possible [13,23,26]. The polarimetric characteristics retrieved from the coherency matrix obey the exponential distribution. Therefore, we used the general linear model (GLM) to describe the relationships between polarimetric characteristics and forest GSV:

σ = exp(a_0 + a_1 · GSV),

where σ denotes the power of the polarimetric characteristic, GSV is the growing stock volume (m³/ha), and a_0 and a_1 are the unknown parameters. To solve the unknown parameters, the non-linear model is transformed into a linear model with a link function,

g(σ) = ln(σ) = a_0 + a_1 · GSV,

where g(σ) is the logarithmic form of σ. The two unknown parameters, a_0 and a_1, are solved by the least squares linear regression algorithm without setting initial values. The forest GSV can then be calculated as

GSV = [ln(σ) − a_0] / a_1.

In this study, the semi-exponential model proposed by Wagner et al. [36] was also employed as a contrast model to estimate the forest GSV,

σ = β_s + (β_n − β_s) · exp(−GSV/k),

where σ is one of the selected polarimetric characteristics and GSV is the measured growing stock volume (m³/ha). β_n refers to the polarimetric characteristic of the non-vegetated area, β_s refers to that of the forests with the highest GSV, and k is the saturation level of the forest GSV. β_n, β_s and k are unknown parameters, whose initial values are determined by the range of the polarimetric characteristics. A non-linear algorithm (software: Python) was employed to estimate the unknown parameters.
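For illustration, the sketch below fits both models as reconstructed above: the GLM through its log link with ordinary least squares, and the semi-exponential model with a non-linear solver requiring initial values. The data values are invented, and SciPy is assumed to be available; this is not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative data: plot-level GSV (m^3/ha) and a polarimetric power (e.g. P_Dbl)
gsv = np.array([63.0, 120.0, 185.0, 240.0, 322.0])
sigma = np.array([0.012, 0.020, 0.031, 0.044, 0.060])

# GLM with log link: ln(sigma) = a0 + a1 * GSV, solved by ordinary least squares
a1, a0 = np.polyfit(gsv, np.log(sigma), deg=1)
gsv_glm = (np.log(sigma) - a0) / a1                      # inverted model

# Semi-exponential (water-cloud type) model; needs initial values and a non-linear solver
def semi_exp(v, beta_n, beta_s, k):
    return beta_s + (beta_n - beta_s) * np.exp(-v / k)

(beta_n, beta_s, k), _ = curve_fit(semi_exp, gsv, sigma,
                                   p0=[sigma.min(), sigma.max(), 150.0])
print(a0, a1, beta_n, beta_s, k)
```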
Model Assessment
To select the variables for the GSV estimation, the Pearson correlation coefficient (γ) at the significance level of 0.01 was adopted to evaluate the relationship between the polarimetric characteristics and GSV. The approach of leave-one-out cross-validation (LOOCV) was used to compare the performance of the selected models and variables, and statistical criteria such as the Root Mean Square Error (RMSE), the coefficient of determination (R²) and the relative RMSE (RRMSE) were employed to assess the difference between the predicted forest GSV and the observed data [16,21].
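A generic sketch of the LOOCV evaluation is shown below; the fit and predict callables and the error definitions are our own illustrative choices.

```python
import numpy as np

def loocv_metrics(x, y, fit, predict):
    """Leave-one-out cross-validation returning RMSE and relative RMSE (%).

    fit(x, y) -> model parameters; predict(params, x_i) -> estimated GSV for one plot.
    """
    preds = np.empty_like(y, dtype=float)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        params = fit(x[mask], y[mask])
        preds[i] = predict(params, x[i])
    rmse = np.sqrt(np.mean((preds - y) ** 2))
    return rmse, 100.0 * rmse / y.mean()
```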
The Extracted Polarimetric Characteristics
To match the size of the measured plots, the un-fused and fused polarimetric characteristics derived from the Yamaguchi polarimetric decomposition were spatially averaged with a window of 7 × 7 pixels. The relationships between the measured forest GSV and the polarimetric characteristics are plotted in Figure 4. Five time series formed by images 1, 2, 3 and 4, acquired on 30 June, 14 July, 25 August and 22 September, respectively, with different intervals (14 days to 84 days), were selected to investigate the influences of external disturbances. Pearson's correlation coefficients between GSV and the selected characteristics were also calculated (Table 3). For the un-fused polarimetric characteristics from single SAR images, some correlation coefficients of the volume scattering are smaller than the critical value of 0.345 at the significance level of 0.01, being too weak to be used for forest GSV mapping. Although the power of surface scattering has an obvious negative correlation with the forest GSV (−0.557~−0.415), it is not rational to view it as an independent variable in the subsequent analysis, as it relates to the ground without forest. The correlation coefficients of Dbl and Dbl×Vol/Odd show significant positive correlations ranging from 0.392 to 0.702, so they are considered as independent variables. The other fused characteristics, Dbl/Odd, Vol/Odd and Dbl×Vol, with significant negative correlations (−0.709~−0.367), are also considered as independent variables.
As Table 3 shows, the correlation coefficients of the un-fused and fused polarimetric characteristics from time series images were significantly higher than those from single images, except for the combination of images 2 and 3 (acquired on 14 July and 25 August). The Odd, Dbl/Odd, Vol/Odd and Dbl×Vol have negative correlations and the remainder have positive correlations; this is the same for both the single and time series images. Overall, all these characteristics have significant correlations with GSV and could be used as independent variables to estimate the forest GSV, except the power of surface scattering (Odd) and volume scattering (Vol).
Accuracy of GSV Estimation Using Single SAR Images
Before retrieving the model parameters, the measured plots in the shadow region were discarded. Using the five independent variables (Dbl, Dbl/Odd, Vol/Odd, Dbl×Vol and Dbl×Vol/Odd) derived from each single SAR image, the parameters of GLM were solved by the least square algorithm. Moreover, to compare the results of the semi-exponential model, the optimal solutions of the semi-exponential model were also solved by the non-linear solution algorithm and the proposed initial value of unknown parameters. The LOOCV method was employed to calculate the RMSE and RRMSE between the estimated and measured GSV ( Figure 5).
For the semi-exponential model, the RMSE is between 63.73 m³/ha and 93.01 m³/ha, and the RRMSE is between 31.97% and 45.67%, which are larger than those of the GLM. Moreover, the GSV estimated using the GLM was compared with that estimated using the semi-exponential model, and there is a significant difference between them according to a significance test at the 5% level.
The fitted curves for the images acquired on 30 June and 22 September are plotted in Figure 6, which illustrates the forest GSV estimation capability of the GLM and the semi-exponential model. The GLM was more sensitive than the semi-exponential model to high GSV values, even for forest GSV larger than 300 m³/ha, whereas the semi-exponential model lacks the capability to estimate the GSV when the forest GSV is larger than 100 m³/ha and has a saturation level lower than that of the GLM. The trends of the fitted curves are clearer for the polarimetric characteristics from the image acquired on 25 August 2016 (Figure 6f-j). To further analyze the sensitivities of the two models, the coefficients of determination (R²) between the measured and the estimated GSV with the five independent variables from a single image were calculated (Table 4). The R² of the GLM is larger than that of the semi-exponential model for most independent variables. For example, the R² of Dbl ranges from 0.43 to 0.60 for the GLM, and from 0.46 to 0.58 for the semi-exponential model. Figure 7 shows the scatter diagrams between the observed and estimated GSV using the single SAR image acquired on 30 June 2016. The GSV was overestimated by the semi-exponential model with Dbl (Figure 7f), Vol/Odd (Figure 7h) and Dbl×Vol/Odd (Figure 7j), and partially underestimated by the GLM. In general, the GLM is superior to the semi-exponential model using the same un-fused and fused independent variables. Table 4. The coefficient of determination (R²) between the measured and the estimated plot GSV obtained by the GLM and the semi-exponential model using five un-fused and fused polarimetric characteristics from single SAR images.
Accuracy of GSV Estimation Using Time Series SAR Images
With the temporal average approach, polarimetric characteristics were derived from time series with different intervals and taken as independent variables to estimate the forest GSV. The estimated results of the GLM and the semi-exponential model are compared in Table 5. The RMSE ranges from 59.21 m³/ha to 70.76 m³/ha for the semi-exponential model and from 50.64 m³/ha to 70.49 m³/ha for the GLM. Compared with the results of Dbl×Vol/Odd from single SAR images, the time series images can significantly improve the estimation accuracy. However, with the semi-exponential model, the fused polarimetric characteristics derived from the images acquired on 14 July and 25 August induced large errors, and more than 25% of the plots had errors exceeding the threshold. Moreover, the minimum RRMSE (24.42%) is observed with the GLM when all the time series images are used. Furthermore, the RMSE and RRMSE decrease as the number of SAR images increases (Figure 5), except for the independent variables from the time series images 2 and 3 in Table 5. Note: * indicates that the percentage of plots with errors exceeding the threshold is larger than 25%.

Figure 8 compares the estimation accuracy of the GLM and the semi-exponential model using time series images. For un-fused and fused independent variables, the GSV estimated with the semi-exponential model easily reaches the saturation point, which is smaller than 200 m³/ha for some polarimetric characteristics. On the contrary, the GLM is more sensitive to high GSV because of its higher saturation levels. In addition, the forest GSV estimated with the GLM using the independent variables from time series images is more stable and more accurate than that estimated using the variables from single SAR images (Figure 6).

Figure 5 illustrates the RMSE and RRMSE of the time series polarimetric characteristics obtained by the semi-exponential model and the GLM. For each selected independent variable, the RMSE and RRMSE of the time series images are smaller than those of single SAR images. The average RRMSE is about 35.36% for the two models using the power of double-bounce scattering (Dbl) from a single SAR image (Figure 5f). With the power of double-bounce scattering from time series images, the RRMSE decreases to 30.26% and 32.53% for the GLM and the semi-exponential model, respectively. The improvements are more significant for Dbl/Odd (Figure 5g), Vol/Odd (Figure 5h) and Dbl×Vol/Odd (Figure 5i). Moreover, for the GLM results, the RRMSE of all variables from time series images is smaller than 30%, whereas that of variables from single images is larger than 32%.
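A minimal sketch of the temporal-average step, assuming the per-image feature values for each plot are already available as arrays (the names and numbers below are placeholders):

```python
import numpy as np

def temporal_average(features_per_image):
    """Average the same polarimetric characteristic over a list of co-registered images.

    features_per_image : list of 1-D arrays, one value per sample plot and per image.
    Returns one averaged value per plot, to be used as the independent variable.
    """
    stack = np.vstack(features_per_image)     # shape (n_images, n_plots)
    return stack.mean(axis=0)

# Hypothetical Dbl x Vol / Odd values for three acquisition dates and five plots.
june   = np.array([0.11, 0.18, 0.25, 0.31, 0.40])
july   = np.array([0.13, 0.16, 0.27, 0.29, 0.38])
august = np.array([0.10, 0.19, 0.24, 0.33, 0.41])
print(temporal_average([june, july, august]))
```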
Furthermore, the coefficients of determination (R²) between the measured and the estimated GSV using variables from time series images are listed in Table 6. For each time series combination, the R² of the GLM is slightly larger than that of the semi-exponential model. The maximum R² of 0.71 is obtained by the GLM with Dbl×Vol/Odd. Therefore, the polarimetric characteristics obtained by time series averaging can increase the estimation accuracy of the GLM. Table 6. The coefficients of determination (R²) between the measured GSV and the estimated plot GSV obtained by the GLM and the semi-exponential model using five un-fused and fused polarimetric characteristics from time series SAR images.
The scatter diagrams between the measured and the estimated GSV (Figure 9) show that the GLM results contain fewer overestimated or underestimated forest GSV values than those of the semi-exponential model. In particular, in the results of the semi-exponential model, some plots with high GSV induce larger errors, and the estimated GSV exceeds the rational range. Therefore, we mapped the GSV of the planted Chinese fir forest with the GLM using the fused polarimetric characteristics (Figure 10); the estimated forest GSV of most regions ranges from 100 m³/ha to 450 m³/ha.
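Producing a wall-to-wall GSV map from the fitted model reduces to applying the same linear combination to each pixel of the fused-feature raster and clipping implausible values; the raster and the coefficients in this sketch are hypothetical.

```python
import numpy as np

def map_gsv(fused_feature, intercept, slope, valid_range=(0.0, 600.0)):
    """Apply a fitted linear model pixel-wise and clip to a plausible GSV range."""
    gsv = intercept + slope * fused_feature
    return np.clip(gsv, *valid_range)

# Hypothetical 4 x 4 raster of the Dbl x Vol / Odd feature and illustrative coefficients.
feature_raster = np.array([[0.10, 0.22, 0.35, 0.48],
                           [0.15, 0.27, 0.33, 0.51],
                           [0.09, 0.18, 0.40, 0.44],
                           [0.12, 0.30, 0.37, 0.55]])
gsv_map = map_gsv(feature_raster, intercept=40.0, slope=750.0)
print(gsv_map)
```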
The Sensitivity of Polarimetric Characteristics
Normally, the frequency band of the SAR image and the polarimetric decomposition method are the major factors affecting the sensitivity between polarimetric characteristics and forest GSV. Lower frequencies (L-band and P-band) are more suitable than higher frequencies (X-band and C-band) because saturation emerges at higher growing stem volume levels [9][10][11]; that is, beyond a specific GSV level, a further increase in GSV causes no further increase in the intensity of the polarimetric characteristics. Moreover, several decomposition methods, such as the Cloude decomposition, Pauli decomposition, Freeman three-component decomposition and Yamaguchi four-component decomposition, have been applied to estimate forest GSV in previous studies [9,16,25]. Improved decomposition methods, such as the four-component rotated decomposition, the generalized four-component unitary decomposition and the stochastic-distance-based four-component decomposition, have been proposed to improve the retrieval of the scattering powers [6,55]. These results showed that the Yamaguchi decomposition method is more suitable for mapping forest GSV.
In our study, the Yamaguchi four-component decomposition method was employed to retrieve the polarimetric characteristics, and the results showed that the measured GSV has strong positive correlations (ranging from 0.392 to 0.702) with the power of double-bounce scattering and strong negative correlations (ranging from −0.613 to −0.415) with the power of surface scattering (Figure 5a,b and Table 3). Theoretically, the power of volume scattering, which is related to the forest canopy, should be sensitive to the GSV; however, it showed weak sensitivity to GSV among the un-fused polarimetric characteristics in our study. The reason is that radar backscattering at L-band is dominated by scattering from the trunks and large branches, whereas backscattering at X- and C-band is dominated by scattering in the crown layer of small branches and needles and therefore does not penetrate to, and scatter significantly from, the stem.
Compared with the un-fused polarimetric characteristics, the fused polarimetric characteristics improve the sensitivity to GSV (Table 3). Due to the complexity of forest structures, a polarimetric characteristic related to a single scattering property is insufficient for describing the forest GSV. In this study, the average Pearson correlation coefficients of the single-scattering powers (Odd, Dbl and Vol) are −0.469, 0.511 and 0.399 before combination, respectively. After combination, the average Pearson correlation coefficients of the fused polarimetric characteristics (Dbl/Odd, Vol/Odd, Dbl×Vol and Dbl×Vol/Odd) reach −0.476, −0.519, −0.510 and 0.507, respectively. The improvement of the fused polarimetric characteristics is caused by the more accurate description of forest structures using multiple scattering mechanisms. When fused with other scattering powers, the power of volume scattering, which has a weak correlation with GSV on its own, also helps to enhance the sensitivity, as in Vol/Odd and Dbl×Vol.
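The construction of the fused characteristics and their screening by Pearson correlation can be sketched as follows; the decomposition powers here are synthetic stand-ins for the plot-level Yamaguchi outputs.

```python
import numpy as np

def fuse_features(odd, dbl, vol):
    """Build the fused polarimetric characteristics from the decomposition powers."""
    return {
        "Dbl/Odd":     dbl / odd,
        "Vol/Odd":     vol / odd,
        "Dbl*Vol":     dbl * vol,
        "Dbl*Vol/Odd": dbl * vol / odd,
    }

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    return np.corrcoef(x, y)[0, 1]

# Hypothetical plot-level powers (linear units) and measured GSV.
rng = np.random.default_rng(2)
gsv = rng.uniform(60, 420, 40)
odd = 0.5 - 0.0005 * gsv + rng.normal(0, 0.02, 40)
dbl = 0.05 + 0.0006 * gsv + rng.normal(0, 0.02, 40)
vol = 0.2 + 0.0002 * gsv + rng.normal(0, 0.03, 40)

for name, feat in fuse_features(odd, dbl, vol).items():
    print(name, round(pearson(feat, gsv), 3))
```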
Moreover, the sensitivity between the polarimetric characteristics and forest GSV is also affected by the weather conditions at the acquisition time. In this study, the Pearson correlation coefficients of the images acquired on 14 July and 25 August are obviously lower than those derived from the images acquired on 30 June and 25 September (Table 3). Based on the weather information listed in Table 2, the discrepancy in the Pearson correlation coefficients was mainly induced by the moisture of the forest and the ground at the acquisition time: it was rainy for three days around the acquisitions on 14 July and 25 August (Table 2). Additionally, the sensitivity of the polarimetric characteristics is also slightly influenced by the wind force scale, and the Pearson correlation coefficients derived from the image acquired on 30 June are significantly higher than those from the image acquired on 25 September.
The Estimation Accuracy of GLM
In previous studies, the semi-exponential model was applied to estimate forest GSV using different polarimetric characteristics, and the RRMSE values ranged from 25% to 65% [7,15,21,22,26]. However, the parameters of the semi-exponential model must be solved by complicated algorithms with appropriate initial values, which are not always available because of low-accuracy observations. In our study, the GLM, with its simple form and reliable solution algorithm, was employed to estimate the GSV of the planted Chinese fir forest; its parameters can be easily solved by the simple least-squares algorithm without initial values. For the power of double-bounce scattering derived from the time series images (Figure 6), an improvement was observed between the results of the GLM (RRMSE: 25.84-32.77%) and those of the semi-exponential model (RRMSE: 28.99-35.34%). The improvement in the GLM results was also observed for the fused polarimetric characteristics, such as Dbl/Odd and Vol/Odd. Therefore, the GLM proved able to estimate the GSV of planted forest.
For the time series polarimetric characteristics with different intervals, a significant improvement in the estimated planted forest GSV was observed with the GLM. The RMSE and RRMSE between the measured and the estimated GSV obtained by the GLM are much smaller than those obtained by the semi-exponential model for both the un-fused and fused independent variables (Figure 6). In particular, for the fused polarimetric characteristic Dbl×Vol/Odd, the RRMSE values (Table 6) range from 24.42% to 33.46% using the GLM and from 29.60% to 36.29% using the semi-exponential model. The sensitivity and stability are obviously improved for the fused independent variables, including Dbl/Odd (Figure 6g), Vol/Odd (Figure 6h) and Dbl×Vol/Odd (Figure 6j). Compared with the results from single SAR images, independent variables from time series SAR images have more potential for mapping the planted forest GSV.
Moreover, the saturation level of GSV is a key indicator for assessing the performance of the estimation model. According to the fitted curves for the single and time series polarimetric characteristics (Figures 6 and 8), the saturation level of the GLM is higher than that of the semi-exponential model for either the fused or the un-fused polarimetric characteristics. Once the GSV is larger than 300 m³/ha, the semi-exponential model cannot accurately retrieve the GSV, and its low saturation levels may also cause overestimation of the GSV (Figures 7 and 9). Therefore, the GLM shows more potential than the semi-exponential model for estimating GSV.
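One simple way to put a number on the saturation level is to fit a saturating curve and report the GSV at which the fitted feature has completed, say, 95% of its total rise; the exponential form and the 95% threshold below are illustrative assumptions rather than the definition used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating(v, a, b, c):
    # Assumed saturating relation between GSV v and a polarimetric feature.
    return a + b * (1.0 - np.exp(-c * v))

def saturation_level(gsv, feature, fraction=0.95):
    """GSV at which the fitted curve reaches `fraction` of its total rise from a to a+b."""
    (a, b, c), _ = curve_fit(saturating, gsv, feature, p0=[0.0, 0.3, 0.005], maxfev=10000)
    return -np.log(1.0 - fraction) / c

# Hypothetical data: a feature that flattens out near 250 m^3/ha.
rng = np.random.default_rng(3)
gsv = rng.uniform(30, 450, 80)
feat = saturating(gsv, 0.04, 0.28, 0.012) + rng.normal(0, 0.01, 80)
print(round(saturation_level(gsv, feat), 1), "m^3/ha")
```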
Single and Time Series SAR Images
The acquisition seasons and intervals of the polarimetric SAR images are rather important for time-series processing, as they lead to different interactions between trees and microwaves. Polarimetric characteristics derived from single SAR images cannot reduce the external disturbance [20,39,46,47]. In this study, the four images were acquired within a growing-season window shorter than three months (Table 1), and the discrepancies in sensitivity between these images were induced by external factors (Figure 5 and Table 3), including wind, ground moisture and speckle noise. For a single SAR image, moisture and wind force scale mainly affected the sensitivity of the polarimetric characteristics (Table 3) and the estimated forest GSV. It is therefore unreliable to estimate forest GSV using a single SAR image.
For time-series processing, the usual approach is to estimate the GSV from each single SAR image and then apply an average, a weighted average or complicated time-spatial models to map the GSV from the multiple SAR images [20,44,[47][48][49]. Alternatively, polarimetric characteristics can be derived directly from the time-series SAR images by temporal averaging and then used to estimate the forest GSV; by reducing the external disturbance, this latter approach yields polarimetric characteristics with less noise. In this study, we chose the latter approach. As shown in Table 3, the independent variables derived from the time series images with different intervals have much higher Pearson correlation coefficients than those of the variables derived from single SAR images. In addition, for the un-fused and fused polarimetric characteristics, the differences in the Pearson correlation coefficients between the time series combinations are smaller than those between single SAR images, confirming that polarimetric characteristics from time-series SAR images contain less noise after temporal averaging. Moreover, with either the semi-exponential model or the GLM, the RMSE and RRMSE decreased obviously when the variables derived from the time series SAR images were used. For the fused polarimetric characteristic Dbl×Vol/Odd, the minimum RRMSE was 36.97% for the semi-exponential model and 33.87% for the GLM from a single SAR image, and these values were reduced to 29.60% and 24.42%, respectively, with the time series SAR images (Table 6).
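The two time-series strategies can be written side by side as a short sketch, with a plain one-predictor linear fit standing in for the estimation model; the data are synthetic.

```python
import numpy as np

def fit_predict(x_train, y_train, x_new):
    """Least-squares fit of y ~ a + b*x, then prediction for new x."""
    A = np.column_stack([np.ones_like(x_train), x_train])
    a, b = np.linalg.lstsq(A, y_train, rcond=None)[0]
    return a + b * x_new

def per_image_then_average(features_per_image, gsv_train):
    """Approach 1: estimate GSV from each image separately, then average the estimates."""
    estimates = [fit_predict(f, gsv_train, f) for f in features_per_image]
    return np.mean(estimates, axis=0)

def average_then_estimate(features_per_image, gsv_train):
    """Approach 2 (used here): time-average the feature first, then estimate once."""
    mean_feature = np.mean(np.vstack(features_per_image), axis=0)
    return fit_predict(mean_feature, gsv_train, mean_feature)

# Hypothetical three-date feature stacks for 25 plots with known GSV.
rng = np.random.default_rng(4)
gsv = rng.uniform(60, 400, 25)
stacks = [0.001 * gsv + rng.normal(0, 0.05, 25) for _ in range(3)]
print(per_image_then_average(stacks, gsv)[:3])
print(average_then_estimate(stacks, gsv)[:3])
```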
Conclusions
This study proposed the GLM for stand-level forest GSV estimation using time series L-band ALOS PALSAR-2 quad-polarimetric SAR images. Before the estimation, the un-fused and fused polarimetric characteristics were derived by the Yamaguchi decomposition, and their polarimetric response to the GSV of the planted Chinese fir forest was analyzed. The main results of this research are as follows: (1) As the planted Chinese fir forest has narrow and short canopies with many canopy gaps, the measured GSV has strong positive correlations with the power of double-bounce scattering and weak correlations with the power of volume scattering. Furthermore, the fused polarimetric characteristics are more sensitive to forest GSV than the un-fused polarimetric characteristics.
(2) The RRMSE of the GLM results is much smaller than that of the semi-exponential model results. For forest with high GSV, the GLM has higher reliability than the semi-exponential model.
(3) The independent variables derived from time series polarimetric SAR images are more sensitive to forest GSV, and the minimum RRMSE, about 24%, was observed for the fused independent variable Dbl×Vol/Odd derived from the time series.
In summary, the proposed general linear model has the potential to describe the relationship between forest GSV and time series polarimetric characteristics. Future research will concentrate on improving GSV estimation accuracy using multi-band and multi-temporal images.
Conflicts of Interest:
The authors declare no conflict of interest.
Allergic rhinitis
Allergic rhinitis (AR) is caused by immunoglobulin E (IgE)-mediated reactions to inhaled allergens and is one of the most common chronic conditions globally. AR often co-occurs with asthma and conjunctivitis and is a global health problem causing major burden and disability worldwide. Risk factors include inhalant and occupational allergens, as well as genetic factors. AR impairs quality of life, affects social life, school and work, and is associated with substantial economic costs. The Allergic Rhinitis and its Impact on Asthma (ARIA) initiative classified AR into intermittent or persistent and mild or moderate/severe. The diagnosis is based on the clinical history and, if needed in patients with uncontrolled rhinitis despite medications or with long-lasting symptoms, on skin tests or the presence of serum-specific IgE antibodies to allergens. The most frequently used pharmacological treatments include oral, intranasal or ocular H1-antihistamines, intranasal corticosteroids or a fixed combination of intranasal H1-antihistamines and corticosteroids. Allergen immunotherapy prescribed by a specialist using high-quality extracts in stratified patients is effective in patients with persistent symptoms. Real-world data obtained by mobile technology offer new insights into AR phenotypes and management. The outlook for AR includes a better understanding of novel multimorbid phenotypes, health technology assessment and patient-centred shared decision-making. This Primer by Bousquet and colleagues summarizes the epidemiology, mechanisms, diagnosis and treatment of allergic rhinitis. In addition, it reviews the quality-of-life issues faced by patients and provides an overview of how mobile health technologies could improve patient care.
Health Survey (ECRHS)) 3 ; however, more recent large studies have not been undertaken. Collectively, these studies demonstrated that AR often begins early in life, with a prevalence of more than 5% at 3 years of age. In the ISAAC phase III study (consisting of data from 236 centres in 98 countries), AR prevalence increased from 8.5% in individuals aged 6-7 years to 14.6% in those aged 13-14 years 8 . Overall, AR in children and adolescents was more prevalent in high-income countries, but the prevalence of severe symptoms was higher in LMICs 9 . In the ECRHS study (conducted in 35 centres in 15 countries), the age-standardized and sex-standardized prevalence of AR in people aged 20-44 ranged from 11.8% in Oviedo (Spain) to 46.0% in Melbourne (Australia) 3 . Of note, AR is more common in males before puberty, whereas it is more common in females after puberty; these differences are more pronounced in those with asthma and AR concomitantly 10 . In the US National Health and Nutrition Examination Survey III from 1988 to 1994, the prevalence of nasal symptoms during the previous 12 months was highest (~30%) in individuals aged 17-29 years and lowest (~10%) in those older than 60 years 11 .
In general, the prevalence of AR has increased worldwide since the 1960s in parallel to the increase in the prevalence of atopy (that is, the tendency to produce IgE antibodies owing to genetic and/or environmental factors) 12,13 . The ISAAC also evaluated the change in prevalence of allergic diseases between 1994-1995 and 2002-2003 (ref. 8 ). For the two age groups evaluated as part of this study (6-7-year-olds and 13-14-year-olds), the prevalence of AR increased from the 1990s to the early years of the first decade of the twenty-first century in many LMICs but decreased or had little change in western Europe. The reasons for the observed changes in prevalence are unclear 8 .
Risk factors
Allergens associated with AR include pollens (tree, grass and weed, including ragweed), moulds and indoor allergens (house dust mites and animal allergens) and have a large geographical variability within and between countries 14 . Occupational AR includes both IgE (vegetal and animal proteins as well as certain chemicals) and non-IgE (isocyanates, persulfate salts and woods) mechanisms 15 . Risk factors for AR include antibiotic use, self-reported air pollution, exposure to farm animals (only in LMICs), exposure to cats and/or dogs, maternal and paternal smoking and vigorous physical activity in adolescents 16 . Of note, many of these risk factors are shared with asthma and atopic dermatitis 16 . Overweight and obesity are not associated with AR 17 . Of note, many of these exposures and lifestyle risk factors have not been established as major risk factors for AR 18 ; for example, ambient air pollution and passive smoking do not seem to have a large effect on AR development, but pollution may be associated with increased AR severity 19 .
The proportion of rhinitis in general that is attributable to atopy is ~50% in the overall population 20 . AR, asthma and atopic dermatitis often coexist in the same individual, partially due to a shared genetic origin 21,22 . Indeed, data from genome-wide association studies (GWAS) have demonstrated that allergic diseases and traits share a large number of genetic susceptibility loci, of which IL33, IL1RL1 (also known as IL33R), IL13-RAD50, C11orf30 (also known as EMSY)-LRRC32 and TSLP seem to be important for multimorbid allergic diseases 18,23 . In addition, rhinitis was associated with TLR expression, whereas AR associated with asthma was associated with IL5 and IL33, suggesting a different genetic cause for AR alone compared with multimorbid AR 24 . Further research is warranted to explore transcriptomic signatures as biomarkers for single and multimorbid allergic diseases.
Susceptibility loci for AR have various immune functions, such as the inflammatory adhesion process for MRPL4 (19q13), in the activation, development and maturation of B cells and epithelial barrier function/ regulatory T cell function for BCAP (also known as PIK3AP1; 10q24) and immune tolerance for C11orf30-LRRC32 (11q13), whereas other loci have unknown functions, such as FERD3L (7p21) 23 . In addition, data from one large GWAS and HLA fine-mapping study found 20 new loci that were associated with AR, many of which had immune functions related to both innate and adaptive IgE-related mechanisms 25 . In that study, the estimated proportion of AR attributable to the key identified AR-associated loci was 39%, which is a relatively high estimate for a complex disease. Other GWAS analyses have found common genetic mechanisms in AR and non-allergic rhinitis 22,25,26 .
Classification
Although historically AR has been categorized as seasonal and perennial, this distinction has not been well reproduced in epidemiological studies assessing molecular allergens as most patients are polysensitized (sensitized to more than one allergen) 27 . Accordingly, the organization Allergic Rhinitis and its Impact on Asthma (ARIA) proposed replacing seasonality with intermittent and persistent rhinitis 1 .
The large number of risk factors associated with AR suggest that the geographical variations in prevalence are due to a constellation of environmental factors varying between locations and time. These varying constellations of risk factors are also a plausible explanation for the time and place distribution of allergic multimorbidity 16 . No biomarker that can be used in clinical practice to predict the type (that is, phenotype or endotype) and severity of AR and the development of its common co-morbidities is available.
Multimorbidities and sensitization
Most patients with asthma have multimorbid rhinitis (AR or non-allergic rhinitis), whereas less than one-third of patients with AR have asthma associated with rhinitis 1 . The Mechanisms of the Development of ALLergy (MeDALL) study, which included 12 European birth cohorts, demonstrated that the coexistence of rhinitis with asthma and/or atopic dermatitis is more common than expected by chance alone, both in the presence and in the absence of IgE sensitization, suggesting that multimorbidity and IgE have different genetic mechanisms 28 . In addition, data from the MeDALL study suggested that type 2 signalling pathways represent a relevant multimorbidity mechanism of allergic diseases 22 .
Many patients with AR also have conjunctivitis, but rhinitis and rhinoconjunctivitis seem to be two separate diseases [29][30][31] and should be differentiated. In addition, data from the mobile application MASK-air have identified an extreme pattern of uncontrolled multimorbidity (uncontrolled rhinitis, conjunctivitis and asthma during the same day) 32 . More recent classical epidemiological studies showed that ocular symptoms are more common in polysensitized patients 29 whether they have asthma or do not have asthma 30 , ocular symptoms are associated with the severity of nasal symptoms 33 , ocular symptoms are important to consider in severe asthma 33 and the severity of allergic diseases increases with the number of allergic multimorbidities 34 .
Monosensitization and polysensitization represent different phenotypes of IgE-associated disease 35 . Polysensitization is associated with an earlier onset of allergy and with more severe symptoms compared with monosensitization. In addition, the multimorbidity polysensitization phenotype seems to occur at various ages and in various allergenic environments and may be associated with specific mechanisms of disease 36 .
Mechanisms/pathophysiology
Allergen exposure, either topically intranasally or in an exposure chamber 37 , can be used to study nasal allergic reactions, monitor symptoms and collect nasal secretions and serum for mediator measurements in response to the allergen 38 or nasal scrapings or biopsy from patients [39][40][41] . In addition, blood cells (such as basophils and antigen-specific T cells) and nasal mucosa tissue can be studied ex vivo 42 using different stimuli and interventions.
The nasal epithelium
The nasal mucosa is the primary air conditioner of the respiratory tract and the first line of defence against airborne infectious agents. For these roles, maintaining and restoring epithelial integrity and the ability to initiate immune responses are essential. In the presence of conditions or factors that impair mucosal integrity, the epithelium releases alarmins and other damage-associated molecular patterns that initiate repair mechanisms but can also induce protective inflammation. In AR, the same mechanisms may be active in inducing disease. For example, allergens with protease activity (such as Der p 1 in house dust mites) can directly compromise the epithelial barrier, whereas others (such as Der p 2 in house dust mites) can activate pattern recognition receptors; in both cases the epithelium can initiate innate immune responses through the release of alarmins such as IL-33, thymic stromal lymphopoietin (TSLP) or IL-25 (refs 43-46 ). Alarmins can, in turn, activate group 2 innate lymphoid cells (ILC2s), which rapidly produce in situ type 2 cytokines (IL-5, IL-13 and IL-4), therefore having a key role in the initiation and maintenance of type 2 adaptive immune responses, leading to IgE class switching and mucosal inflammation. Additional environmental factors are probably involved in the pathophysiology of AR through their effects on the nasal epithelium. These include pollutants (diesel exhaust particles 47 or other air pollutants 48 ), irritants and infectious agents (Staphylococcus aureus 49 or viruses). The exact mechanisms through which those factors contribute to disease manifestation have not been elucidated.
Antigen presentation and sensitization
The allergic immune response begins with a sensitization phase when the patient is first exposed to an allergen without experiencing clinical symptoms ( fig. 1). During this phase, dendritic cells in the nasal mucosa take up the allergen, process it and transport it to the draining lymph node, in which the processed allergen is presented to naive CD4 + T cells. Following antigen presentation, naive CD4 + T cells are activated and differentiate into allergen-specific type 2 T helper cells (T H 2 cells) 50-55 that induce the activation of B cells and IgE class switching, which leads to B cell differentiation into plasma cells that produce allergen-specific IgE. IgE enters the circulation and binds through its Cε3 domain to the high-affinity IgE receptor (FcεRI) on the surface of effector cells (for example, mast cells and basophils). These processes lead to the formation of a pool of memory allergen-specific T H 2 cells and B cells. Although this constitutes the first step in the development of allergy, it is also an ongoing process as the mucosa becomes exposed to various allergens on a chronic, seasonal or episodic basis.
Symptom generation and inflammation
AR symptoms are caused by biochemical products released in the nasal tissue during an allergic reaction. When a patient who has been sensitized by previous exposure to the allergen re-encounters the causative allergen, the allergen binds to allergen-specific IgE on mast cells in the nasal mucosa, resulting in IgE and FcεRI crosslinking and subsequent mast cell activation and degranulation. This leads to the release of prestored and newly synthesized mediators, including histamine, sulfidopeptide leukotrienes (leukotriene C 4 and leukotriene D 4 ), prostaglandin D 2 and other products 52 ( fig. 1). These mediators interact with nasal sensory nerves, vasculature and glands, resulting in acute AR symptoms.
In addition to acute symptoms, experimental nasal exposure to an allergen in an individual with AR produces immediate signs of inflammation, such as plasma exudation and the development of a type 2 inflammatory infiltrate characterized by eosinophils, neutrophils and basophils and by a mononuclear infiltrate (primarily T H 2 cells) 56 . Indeed, classic type 2 cytokines -such as IL-4, IL-5 and IL-13 -can be detected in tissue and measured in nasal secretions several hours after allergen exposure 57,58 ( fig. 1). In the natural presentation of AR, the histopathology is quite similar to that after experimental allergen exposure, and activation of allergen-specific memory T H 2 cells by dendritic cells and other antigen-presenting cells such as B cells via mechanisms partially depending on IgE-facilitated allergen presentation is believed to play a critical role 49,50 . Activated allergen-specific T H 2 cells produce large amounts of IL-4, IL-5 and IL-13 that contribute to vascular permeability, the infiltration of eosinophils and other inflammatory cells into the nasal mucosa, local IgE production, increased mucus production, vascular leakage and expansion, and activation and differentiation of different subsets of T H 2 cells 40,42,43 . Of particular interest are antigen-specific T H 2 peripheral blood cells that display markers such as CRTH2, CD161 and CCR4 with very little expression of CD27. These cells, also known as T H 2α cells, are associated with AR severity and are characteristically suppressed by allergen immunotherapy 57 . Although the histology of AR shows a typical type 2 inflammation, there is little evidence of mucosal remodelling in contradistinction with asthma 46,52 .
Although patients with AR experience acute symptoms on exposure to a known allergen, the natural symptomatic state can be chronic with relatively low fluctuations and frequently includes non-allergen triggers such as irritants and changes in environmental conditions. Experimental allergen provocations (nasal challenges), whether conducted by direct instillation of an allergen into the nasal cavities or through an environmental exposure unit, help identify a number of pathophysiological phenomena that offer better understanding of the overall clinical picture of AR. Acute symptoms are reduced within minutes but can persist
at lower levels for hours after allergen exposure or can recrudesce in a phenomenon known as the 'late-phase' reaction 59 . Late-phase reactions may explain why, after sudden exposure to a large amount of an allergen, a patient with AR may remain symptomatic for prolonged periods. Repetitive allergen exposure leads to another phenomenon, whereby progressively lower amounts of the allergen are required to elicit symptoms; this has been termed 'priming' 60 . Nasal priming may explain why, towards the end of a pollen season, patients with AR tend to become symptomatic even when exposed to very low levels of pollen. A third phenomenon is the induction of nasal hyper-responsiveness, where the nasal response (sneezing, rhinorrhoea and so on) to a stimulus that is not an allergen (for example, histamine) can be augmented by a prior allergen challenge. In the natural history of AR, this phenomenon may explain the high sensitivity of patients to irritants and to environmental changes 61 . Nasal priming may be caused by the increased accumulation of mast cells or basophils, which is a consequence of repeated allergen exposure, at the site of the allergic reaction in the nose 62 , in addition to the induction of non-specific hyper-responsiveness at the 'end-organ' (nose) level. In addition, late-phase reactions and chronicity might also depend on the perpetuation of allergen-specific adaptive immune responses. Indeed, persistent symptoms following allergen challenge could be mediated mainly by the accumulation of allergic mediators in the nasal mucosa but also by the activation of allergen-specific memory T H 2 cells by dendritic cells and other antigen-presenting cells such as B cells via mechanisms partially depending on IgE-facilitated allergen presentation 63,64 . Activated T H 2 cells produce large amounts of type 2 cytokines that contribute to enhance all the local pathophysiological mechanisms described above, including the enhanced and sustained activation not only of tissue-resident mast cells but also of basophils infiltrating the nasal mucosa, which also might well contribute to the clinical symptoms after allergen-induced IgE crosslinking and subsequent release of mediators 50-52 (fig. 1). Nasal late-phase reactions, priming and non-specific hyper-responsiveness are inflammation dependent and can be suppressed by nasal corticosteroids. However, the specific molecular and cellular events underlying these phenomena are not yet fully understood.
Role of nasal nerves
The nervous system has a key role in the nasal symptoms of AR. The nasal mucosa is densely innervated by adrenergic and cholinergic nerve fibres, and the epithelium is innervated by interdigitating sensory nerve endings, mostly nociceptors (receptors that respond to noxious stimuli) 65 . In addition, nasal chemosensory cells that express bitter taste receptors may reflect a specialized sensory nerve system that reacts to noxious stimuli and bacterial products 66 . Cholinergic and adrenergic nerve fibres are activated through central reflexes initiated at the nasal mucosa by sensory nerves, including nociceptor C fibres. Nasal C fibres and other sensory nerves express receptors for some mediators of the allergic response (such as histamine and bradykinin), and also express several transient receptor potential ion channels that are activated by noxious physical or chemical stimuli, such as low pH, high or low temperatures, CO 2 and hypertonicity. These fibres may also have a local effector function as they produce and release neuropeptides via axonal, antidromic reflexes 67 . However, the mechanism and clinical role are unclear. Nasal hyper-responsiveness seems to have a strong neural component 68 . Indeed, changes in the density and neuropeptide content of sensory nerves have been found in the nose of patients with AR. A putative mediator of these changes, nerve growth factor, is released on nasal allergen provocation 69 and can be found in nasal glandular epithelial cells and eosinophils in the nasal mucosa 70 .
Diagnosis, screening and prevention
AR is often under-recognized owing to poor public awareness, limited access to allergologists and confounding diagnoses, such as the common cold 71 . The diagnosis of AR is made by considering a detailed history that is supported by examination findings (physical examination and, if needed, nasal endoscopy) and, if necessary, testing for allergen-specific IgE. Other tests (such as nasal allergen challenge, CT scans, evaluation of nasal nitric oxide and ciliary beat frequency, nasal smears, nasal cultures and analysis of nasal fluid for β-transferrin) may be required to include or exclude different forms of rhinitis 72 . The ARIA guidelines propose classifying AR as intermittent or persistent depending on the duration of symptoms, with persistent rhinitis occurring on more than 4 days per week and for more than 4 consecutive weeks, and as mild or moderate to severe, depending on whether sleep and daily activities are affected or whether symptoms are troublesome 73 (fig. 1). In addition, AR can also be classified as mild, moderate or severe 74 .
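The duration and severity criteria described above can be expressed as a small decision rule; the sketch below follows the thresholds stated in the text and is purely illustrative, not a validated clinical tool.

```python
def classify_duration(days_per_week: int, consecutive_weeks: int) -> str:
    """ARIA duration class: persistent if symptoms occur on >4 days/week for >4 weeks."""
    return "persistent" if days_per_week > 4 and consecutive_weeks > 4 else "intermittent"

def classify_severity(sleep_affected: bool, daily_activities_affected: bool,
                      troublesome_symptoms: bool) -> str:
    """ARIA severity class: moderate/severe if any impairment item is present."""
    impaired = sleep_affected or daily_activities_affected or troublesome_symptoms
    return "moderate/severe" if impaired else "mild"

# Example: symptoms 5 days/week for 6 weeks, with disturbed sleep.
print(classify_duration(5, 6), classify_severity(True, False, False))
```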
Clinical history
The clinical history should note symptoms, particularly those that cause major problems, where and when they occur, and any exacerbating and relieving factors. Other symptoms in the chest, ears, throat, gut or skin, in addition to whether there is a patient history or a family history of allergic disease and/or immune problems, together with a review of treatments previously tried, those currently being taken and their efficacy, should all be noted.
All individuals are familiar with rhinitis because of the common cold. Rhinitis is defined clinically as having two or more of the following symptoms for more than 1 hour per day: nasal running, blocking, itching or sneezing. The diagnosis is made by an accurate history but can be missed owing to misperceptions such as symptoms being ascribed to frequent colds, mouth-breathing children having enlarged adenoids and secretions being missed when they pass posteriorly. Questioning patients with asthma regarding nasal function (such as the ability to breathe through the nose and to smell) should be undertaken as most patients with asthma have rhinitis or rhinosinusitis 73 .
The clinical history may also provide a clue to the inciting allergen or allergens. Of note, long-term allergen exposure is harder to diagnose than short-term exposure as the major symptom is often nasal blockage and postnasal discharge, with fewer obvious symptoms such as nasal itching, running and sneezing. Nasal inflammation can also cause non-specific nasal hyper-reactivity to non-allergic stimuli 75 and a poor sense of smell, among other multimorbidities 76 . Some pollen-sensitive individuals with AR may present with oral symptoms of pollen food syndrome, such as itchy mouth and throat after ingestion of the food 77 .
Examination
An examination of the whole patient is necessary as rhinitis has important co-morbidities 76 . Children should have their growth assessed, as severe airway problems are associated with reduced growth, and the combined use of intranasal corticosteroids (INCS) and inhaled corticosteroids may reduce height at high doses 78 . The presence of facial features such as conjunctivitis, nasal allergic crease, allergic salute or double creases beneath the eyes (Dennie-Morgan lines) all suggest that the patient has an allergic diathesis.
Nasal examination is needed in patients with moderate to severe AR or in those with uncontrolled symptoms despite optimal treatment. This examination should include assessment of the external appearance followed by internal examination, preferably with a nasendoscope. An otoscope may suffice to examine the nose in children. The position of the nasal septum, as well as the size and colour of the inferior turbinates, should be noted, together with the appearance of the mucosa and the presence and nature of any secretions, polyps, bleeding, tumours, crusting or foreign bodies. The classic appearance of the nasal cavity in patients with AR is swollen pale bluish inferior turbinates with copious clear secretions; however, the nose may look normal, and these features are not restricted to AR. Of note, the mucosa may be slightly reddened in patients using INCS 72 . Referral to an ear, nose and throat specialist is advised for patients with nasal polyps, bleeding, unilateral disease, high crusting and septal perforations 79 . High crusting and septal perforations most commonly result from previous septal surgery but can also occur with cocaine abuse and vasculitides.
Asthma should always be assessed in patients with AR by asking patients about wheezing, shortness of breath and sleep disturbance plus, if needed, an objective measurement such as spirometry 80 and vice versa in patients with asthma owing to the frequent co-occurrence of these disorders. Patients should also undergo ear inspection as otitis media with effusion (also known as glue ear) may be a co-morbidity in children with rhinitis and in adults with severe forms of rhinosinusitis 81 . The general examination also needs to include skin examination for atopic dermatitis and, in patients with obstructive rhinitis, assessment of thyroid function by checking for slow relaxation after the ankle jerk and for eye signs such as puffiness, redness and/or bulging, as hypothyroidism can cause nasal obstruction.
Tests
Whether further diagnostic testing for AR is required in all patients is disputed. Some clinicians do not recommend further testing in those with a clear history of nasal symptoms that are provoked by allergen exposure 82 . This occurs in most northern hemisphere patients allergic to pollen and with intermittent exposure to animal or occupational allergens but is not possible with long-term exposure, as with pollens in most tropical climates or indoor allergens such as house dust mites. By contrast, other clinicians recommend further testing in all patients with symptoms suggestive of AR. Testing for allergen-specific IgE using skin prick or blood tests 72 to identify the allergen can be performed to support the diagnosis and is mandatory when allergen immunotherapy (AIT) is being considered as part of the treatment. Of note, the results from IgE testing need to be interpreted in the light of the clinical history, as both false-positive and false-negative results can occur 83 . In one meta-analysis of skin prick tests, the sensitivity ranged from 68% to 100% and the specificity ranged from 70% to 91% (ref. 84 ).
Component-resolved diagnosis (that is, using purified native or recombinant allergens to identify IgE sensitivity to individual allergens) is not yet routine in AR diagnosis but may provide important information. For example, the detection of serum IgE antibodies to specific molecules (Phl p 1, Phl p 5, Bet v 1 or Pru p 3) could be used as a biomarker to predict AR persistence and the future onset of multimorbidities, such as asthma and/or pollen food syndrome [85][86][87] . Moreover, component-resolved diagnosis may be useful in understanding cross-sensitizations and in proposing AIT where specific molecular sensitization can guide the content of the vaccine 86 .
Other diagnostic tests may be required to verify the diagnosis of AR or to make an alternative diagnosis. Additional tests include nasal allergen challenge, nasal cytology, nasal nitric oxide measurements and ciliary beat frequency analysis 72 . Nasal smear cytology is practised in some centres. The presence of a high number of eosinophils (although the necessary percentage of cells is debatable) suggests an inflammatory process but not necessarily AR (AR or non-allergic rhinitis with eosinophilia), which is likely to be corticosteroid responsive. Unilateral eosinophilia can occur so bilateral samples must be taken 88 . Nasal nitric oxide measurement is a simple and rapid test to discriminate AR, non-allergic rhinitis, and subgroups with acceptable sensitivity and specificity, but it can be done only in highly specialized centres 89 .
Testing for local AR
Some patients with rhinitis who do not have systemic IgE sensitization identified via a skin prick test and serum allergen-specific IgE show nasal reactivity on a nasal allergen provocation test (whereby an allergen is entered into the nose). Nasal allergen provocation tests can be used when the history and systemic IgE results are not concordant, to monitor the progress of AIT and in research studies 90 . Local AR might be an independent rhinitis phenotype, although it is treated in the same way as AR 91 .
Patients with rhinitis who have negative results on tests for allergy have non-allergic rhinitis, which can be caused by several factors that can broadly be divided into infectious and non-infectious factors. Tests to identify these factors are detailed in an EAACI position paper 72 .
Mobile health
Mobile health tools use algorithms created using advanced statistics (neural networks) on data including medical diagnoses and questionnaire answers 92 . Very large numbers of patients can be studied with data obtained by mobile phones, and, in the future, mobile health added to machine learning may be useful for the screening of undiagnosed patients with AR in the general population.
Prevention
Many different attempts have been made to prevent allergic diseases, although most of these attempts have been unsuccessful. However, farm animal exposure in early life is a protective factor for allergic diseases in high-income countries 93 and also possibly in some LMICs, although the mechanisms are unclear 94 . In addition, early-life exposure to cats and dogs may prevent the development of allergy but results are not consistent 95 . The use of probiotics or prebiotics prenatally and postnatally has failed to reduce AR 96,97 . Moreover, the use of pharmacological therapies before allergen exposure cannot prevent the onset of symptoms in AR 98 .
Management
Treatments for AR include education, allergen avoidance, pharmacotherapy and AIT 1,79,99 . Pharmacotherapy is effective in most patients and, when properly implemented, improves QOL; however, many patients do not follow prescriptions and are poorly adherent to pharmacotherapy.
No biologic has been approved for AR except omalizumab in Japan for Cryptomeria japonica allergy 100 , and, owing to their costs and the prevalence of AR, newly developed biologics may be restricted to highly stratified patients with severe AR. Of note, some changes to the management of AR have occurred owing to the COVID-19 pandemic (Box 1).
Education
As AR is a chronic condition that is caused by specific allergens, it is important for patients to try to identify allergens and/or environmental agents that may precipitate their AR. In addition, it is important for clinicians to inform patients of the importance of the correct use of intranasal sprays to ensure they correctly adhere to the therapy.
Allergen avoidance
When possible, avoiding or minimizing exposure to the causative allergens should be the first management step for AR. Although allergen avoidance may reduce symptoms of AR, it should never isolate individuals from social interactions. Data regarding the benefits of house dust mite avoidance in asthma are conflicting, and interventions designed to reduce house dust mites have unknown effectiveness 101 . For patients who are polysensitized, avoidance strategies are often challenging as it is difficult to eliminate all causative allergens or triggers 18 . Some classical, but often unproven allergen avoidance measures for AR include use of bed covers for house dust mite allergy, removing pets from the home, changing profession for occupational allergens or wearing masks for pollen allergy. An innovative approach is to feed cats with a cat food containing antibodies against Fel d 1 to reduce allergenicity of the cat 102 .
Pharmacotherapy
Many pharmacological treatment options are available for the management of AR (Table 1) 103 . Different H 1 -antihistamines have different chemical structures, pharmacokinetics and potential for drug-drug and drug-food interactions, and they are classified into non-sedating, less-sedating and sedating groups on the basis of brain H 1 receptor occupancy 103 . The less-sedating second-generation oral H 1 -antihistamines (such as desloratadine, loratadine, cetirizine, levocetirizine and rupatadine) and the non-sedating H 1 -antihistamines (such as fexofenadine and bilastine) are well tolerated, safe and effective 104 . First-generation oral H 1 -antihistamines should be avoided owing to adverse effects, in particular sedation, and are not recommended for AR treatment 103,105 . First-generation H 1 -antihistamines have not been optimally studied as most trials of these therapies do not meet appropriate standards in terms of study design, meaning their relative efficacy is unknown 106 .
Box 1 | Managing patients with AR during the COVID-19 pandemic
The global spread of COVID-19 has caused sudden and dramatic changes in society and health care, including management of allergic rhinitis (AR). Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) particles that are inhaled through the nose or the mouth bind to upper respiratory tract cells, and the nose is the first organ to be invaded 203 . In addition, SARS-CoV-2 binds to angiotensin-converting enzyme 2 (ACE2), and expression of ACE2 is lower in patients with allergic diseases, suggesting that they may be less prone to SARS-CoV-2 infection 204,205 . The effect of AR on COVID-19 is still a matter of debate, and no firm conclusion can be drawn 206 . Smell dysfunction and anosmia are common COVID-19 symptoms 207 .
The allergy and immunology communities have quickly responded by mobilizing practice adjustments and embracing new paradigms of care to protect patients and staff from severe SARS-CoV-2 exposure 208-211 using telehealth 212 . Some recommendations regarding the treatment of AR in patients with COVID-19 have been made. For example, although oral corticosteroids are contraindicated in patients with COVID-19, they can be used without restriction in patients with AR and COVID-19 (ref. 213 ). Practical recommendations were made to organize an allergy clinic 208 . In addition, Allergic Rhinitis and its Impact on Asthma (ARIA) and EAACI recommend stopping all forms of allergen-specific immunotherapy in patients with AR who have COVID-19 and continuing allergen-specific immunotherapy in patients with AR who do not have COVID-19 (refs 214-216 ). The possibility of expanding injection intervals in the continuation phase should be checked and may be beneficial. On the other hand, face masks may reduce the severity of AR symptoms 217 .
As for any other chronic disease, it is clear that the COVID-19 pandemic will profoundly change AR management owing to the infectivity of allergic patients, probable changes in management and major changes in health-care systems.
In general, the advantages of oral H 1 -antihistamines are once-a-day administration, rapid and effective action and low cost. However, they are less effective than INCS, particularly for nasal congestion (which is a common symptom of AR). Oral H 1 -antihistamines are often sufficient for the treatment of mild AR, and many patients prefer oral drugs to other formulations. Some oral H 1 -antihistamines may, with caution, be used in pregnancy or in women who are breastfeeding (for example, cetirizine, levocetirizine and loratadine 107 ).
Topically administered H 1 -antihistamines (alcaftadine, azelastine, bepotastine, cetirizine, epinastine, ketotifen, and olopatadine (as eye drops) and azelastine and olopatadine (as nasal sprays)) are suitable for patients with mild AR or ocular symptoms. These medicines may have a bitter taste in 15-20% of patients. The effect begins 1-3 hours after administration 104,108 . Mild uncommon adverse effects include dry mouth, drug-induced rash and swelling of the salivary glands. As previously mentioned, patients should be educated on the correct way to administer intranasal H 1 -antihistamines.
Intranasal corticosteroids. INCS such as beclomethasone, budesonide, ciclesonide, fluticasone propionate, fluticasone furoate, mometasone furoate and triamcinolone acetonide are first-line therapeutic options for patients with persistent or moderate to severe symptoms. INCS effectively control the four major symptoms of AR (Table 1), and some INCS (such as intranasally administered fluticasone furoate) can reduce ocular symptoms 109 . INCS are more effective than H 1 -antihistamines and leukotriene receptor antagonists, particularly for nasal congestion, although their efficacy requires several hours or days 104,110 (Table 1). The mechanism of action of INCS is related to the local anti-inflammatory effect on nasal mucosal cells. As with intranasal formulations of other drugs, all patients should be educated on the correct way to administer intranasal products.
INCS are not systemically absorbed; therefore, there are no systemic adverse effects. The most common INCS adverse effects are local, including nasal irritation, stinging and epistaxis 111 , and can usually be prevented by aiming the spray slightly away from the nasal septum. Long-term INCS use does not damage nasal mucosa or induce glaucoma 112 , and growth effects in children seem to be minimal 78 . Some INCS, such as budesonide, can be safely used during pregnancy at the recommended therapeutic dose after a thorough medical evaluation 113 .

INCS and intranasal H 1 -antihistamine fixed combination. Fixed-dose combinations of INCS and intranasal H 1 -antihistamine for the treatment of AR and rhinoconjunctivitis include fluticasone propionate-azelastine 114 and mometasone-olopatadine, which was approved in Australia in 2019 (ref. 115 ). These medications are more effective than the individual compounds administered separately, are well tolerated (except for some bitter taste in a few patients) and are effective within minutes (fluticasone propionate-azelastine) 116 or within 1 hour (mometasone-olopatadine) 117 (Table 1). These therapies are typically used in patients who failed to benefit from INCS treatment alone and have been suggested for use in non-adherent patients who treat their symptoms intermittently. Combining oral H 1 -antihistamines and INCS does not seem to increase the efficacy of INCS 118,119 .
Other drugs. The leukotriene receptor antagonists montelukast and zafirlukast are used in the treatment of AR; their effect is close to that of oral H 1 -antihistamines 104. In Europe, leukotriene receptor antagonists have been approved by the EMA only for patients with co-morbid asthma and AR. Other AR therapies, such as chromones and ipratropium bromide, are effective for only some symptoms. For example, chromones (disodium cromoglycate) can mostly be self-administered and tried in patients with mild local ocular symptoms 104; chromones are safe, but their effectiveness is usually quite modest. The ipratropium bromide nasal spray is well tolerated but is effective only against nasal secretion 104. Decongestants include intranasal sprays such as oxymetazoline or phenylephrine (for up to 7 days) and H 1 -antihistamines combined with decongestant sympathomimetic tablets or capsules (for up to 10 days), such as acrivastine, cetirizine hydrochloride or desloratadine plus pseudoephedrine 104. Decongestants are indicated only in patients with severe nasal obstruction and should not be used long term, as prolonged use of intranasal preparations can cause rhinitis medicamentosa. Saline irrigation may reduce patient-reported disease severity in adults and children with AR, with no reported adverse effects 120,121. Herbal products, homeopathy and acupuncture are still widely used for the treatment of AR but lack clear evidence of efficacy, and herbal medicines can cause adverse effects such as contact dermatitis, headache, itchy eyes and gastrointestinal symptoms 122.
AR pharmacotherapy and children.
Few studies have evaluated AR pharmacotherapy in preschool children 99. The therapies with demonstrated efficacy are rupatadine 123 and, in school-aged children, cetirizine 124, azelastine hydrochloride and the fluticasone propionate-azelastine fixed combination 125. Cetirizine is approved from 6 months of age in some countries. In addition, INCS can be prescribed in preschool children, H 1 -antihistamines are suitable for children older than 1 year, and cromoglycate or antihistamine eye drops are suitable for patients older than 3 years.
AR pharmacotherapy in elderly patients.
In most elderly patients, rhinitis symptoms, diagnosis and treatment are the same as for other adult age groups. However, elderly patients often have mixed AR and non-allergic rhinitis with multiple triggers and have higher levels of mucosal dryness than younger patients 126. Rhinitis symptoms in elderly patients may include profuse rhinorrhoea without itching, isolated nasal obstruction usually when lying down, and nasal crusting in winter or in patients treated with diuretics. INCS, oral H 1 -antihistamines, intranasal H 1 -antihistamines and the azelastine hydrochloride-fluticasone propionate fixed combination are first-line therapeutic options in elderly patients 127. Of note, ageing can affect the nasal mucosa by increasing cholinergic activity and atrophy 126; thus, the dose of topically administered therapies may need to be reduced in elderly patients. In addition, some therapies can cause specific adverse effects in elderly individuals. Indeed, oral decongestants or systemic glucocorticosteroids are not recommended in elderly patients owing to adverse effects 127. Oral decongestants can cause, for example, palpitations, insomnia, nervousness, irritability, trouble with urination and reduced appetite, whereas glucocorticosteroids can induce glaucoma, cataract, osteoporosis and diabetes mellitus. Moreover, first-generation H 1 -antihistamines are strongly discouraged in elderly individuals owing to sedation and anticholinergic effects 128,129. Caution should be exercised in patients with co-existing diseases, polymedication and organ (such as renal or liver) dysfunctions 127.
Real-life data and next-generation guidelines.
Real-life observational studies using mobile health have found that most patients with AR self-medicate or use over-thecounter medications, placing the pharmacist at the forefront of treatment 130 . Patients consulting primary care physicians usually have uncontrolled symptoms despite use of multiple medications. Adherence to treatment is a major issue 131 , as many patients do not seek advice from physicians, do not follow the physician's prescriptions and self-medicate to control their symptoms often using over-the-counter medications 130 . Surprisingly, the use of multiple medications is associated with poor rhinitis control 132 .
Some recommendations for AR treatment are based on the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) guidelines 104,118,119,133 . Next-generation guidelines 134 were subsequently developed using existing GRADE-based guidelines and real-world evidence including data from randomized controlled trials, real-world data provided by mobile technology 131,132 and data from additive studies, such as allergen chamber studies assessing the speed of onset of medications 116,117 (TaBle 2). Real-life data clearly indicate that patients prefer as-needed treatment to continuous treatment, and this should be reflected in future guidelines.
Care pathways for a digitally enabled, patient-centred approach. AR treatment should be individualized according to the symptom profile, severity and duration, the patient's preference of oral versus intranasal administration, and the availability and affordability of medications. Mobile technology can improve shared decision-making, and one example has been recognized as a Good Practice by the Directorate-General for Health and Food Safety of the European Commission for digital health in AR 135 and for change management 136 (fig. 2). Multistakeholder care pathways should be established in each country or region as health-care systems differ 135,137 . As many patients with AR self-medicate and have poor symptomatic control, there is a great need to improve self-management for patients with AR, which can be aided by mobile health. In this scenario, patient counselling is first done by pharmacists with the help of allergy guides and websites ( fig. 3), following which patients should be referred to physicians. Mobile technology could reduce the time between the first symptoms and referral of patients with uncontrolled symptoms to specialists, as primary care physicians can have an objective assessment after the first allergy season (pollen or indoor allergens), including adherence to treatment and control of rhinitis and asthma.
Allergen-specific immunotherapy
AIT is indicated for AR, allergic rhinoconjunctivitis and/or asthma when symptoms remain uncontrolled with avoidance measures and appropriate pharmacotherapy in adherent patients 138 .
The aim of AIT is to induce tolerance to the allergens and, therefore, to reduce the symptoms of allergic diseases. For a sustained effect, AIT should be applied for a minimum of 3 years, either continuously or preseasonally 139 . The induction of tolerance by AIT leads to changes in allergen-specific memory T cell and B cell responses and in the allergen-specific IgE and IgG antibody levels, and modifies the activation thresholds for mast cells, basophils and dendritic cells 140 . Of note, the levels of allergen-specific nasal and serum IgG4 antibodies correlate closely with the clinical response to AIT in patients with AR 141 .
Selecting allergens that have a clinical effect on the patient is paramount to the success of AIT. These causative allergens can be identified through clinical history taking, component-resolved diagnosis and, if indicated and necessary, nasal provocation testing. The use of prescription databases has indicated that the product-specific efficacy demonstrated in double-blind, placebo-controlled, randomized trials translates into real life 142. Evidence of efficacy of AIT has been demonstrated for grass, birch (covering the homologous group of Betulaceae tree pollen), ragweed and Cryptomeria japonica pollen 143,144, as well as house dust mites 145, whereas less evidence is available for other types of pollen, animal dander or moulds. Only regulated, standardized allergen extracts that have demonstrated efficacy and safety should be used for AIT 138,146,147. However, efficacy can be assumed for allergens within homologous groups, including several pollen and house dust mite extracts as defined by respective EMA guidelines for allergen products 138. There is no evidence that mixing different allergens is effective in AIT, as this may result in underdosing and a potential degradation of specific allergens.
AIT can be applied via the subcutaneous or the sublingual routes, as tablets or drops, following the same indications and contraindications 148 . Natural allergens or chemically modified allergens (known as allergoids) may be used, with the aim of reducing the risk of adverse events but maintaining efficacy or enabling an increased dosage 138,149 . International and national guidelines are available 104,[150][151][152] and are updated on a regular basis.
Owing to the need for long-term use and the cost in most countries, only selected patients should receive AIT, which should be prescribed by allergists (fig. 4). However, no validated biomarkers for predicting or monitoring the efficacy of AIT at an individual patient level are available in clinical practice 153, although mobile health may be of great interest. The symptom-medication score, which grades symptoms and medication use on a daily basis, remains the most reliable parameter of success in daily practice.
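The day-to-day bookkeeping behind such a combined score can be illustrated with a short sketch. The scheme below (a mean of four nasal symptoms, each rated 0-3, plus a rescue-medication step score) is an assumption for illustration only; actual trials define their own symptom sets and medication weights.

```python
# Illustrative daily combined symptom-medication score.
# Symptom items, rating range and medication steps are assumptions, not the scheme
# used in any specific trial cited in the text.

SYMPTOMS = ["rhinorrhoea", "sneezing", "nasal_obstruction", "nasal_itching"]
MEDICATION_STEPS = {"none": 0, "antihistamine": 1, "intranasal_corticosteroid": 2, "oral_corticosteroid": 3}

def daily_score(symptom_ratings: dict, medication_used: str) -> float:
    """Mean of the 0-3 symptom ratings plus the highest rescue-medication step used that day."""
    symptom_score = sum(symptom_ratings.get(s, 0) for s in SYMPTOMS) / len(SYMPTOMS)
    return symptom_score + MEDICATION_STEPS[medication_used]

example = {"rhinorrhoea": 2, "sneezing": 1, "nasal_obstruction": 2, "nasal_itching": 1}
print(daily_score(example, "antihistamine"))  # 1.5 + 1 = 2.5 on a 0-6 scale
```

Averaging such daily scores over a pollen season is one way the treatment effect can be summarized, but the exact aggregation differs between studies.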
Adverse effects of AIT are relatively common but are rarely severe 138 . Local reactions include redness and swelling at the injection site that occurs immediately or several hours after injection. Other adverse effects, such as sneezing, nasal congestion or hives, indicate systemic reactions. Serious reactions such as swelling of lips and tongue, laryngeal oedema, shortness of breath and chest tightness (asthma) in response to injections are very rare but require immediate medical attention and upfront preparations, such as availability of equipment and medications, and training of the personnel. Symptoms of an anaphylactic reaction typically include swelling in the throat, wheezing or tightness in the chest, nausea and dizziness and should be immediately treated with adrenaline (auto-injector) and preparation of an intravenous access. As most serious reactions develop within 30 minutes after injection, it is recommended that patients are supervised in the physician's office for at least 30 minutes before leaving. Allergen drops or tablets have a more favourable safety profile than injections.
The first sublingual therapy dose should be administered under the supervision of a physician, but subsequent doses can be administered at home. Most adverse events are local (mouth itching, lip swelling and nausea) and spontaneously subside with further administration.
Quality of life
Health-related QOL (HRQOL) is the most frequently used patient-reported outcome in AR (Table 3). The ARIA recommendations 73 have proposed grading AR severity by taking into account the effect of AR on HRQOL. Moreover, regulatory authorities such as the FDA 154 and the EMA 155 have provided guidance to the industry on how to use patient-reported outcomes to support labelling claims and routinely consider patient-reported outcomes as a tool for data collection. In addition, patient preferences and values are introduced as cornerstones in GRADE, which is the best option for grading clinical evidence and developing recommendations for diagnostic and therapeutic interventions 156. Although patient-reported outcomes are not explicitly included in the definition of personalized medicine, they represent a real opportunity for involving patients in each step of disease management 157. The availability of validated questionnaires for AR has permitted the evaluation of the effect of this disease in adults, children and adolescents. Studies using generic tools, which are applicable to all health conditions, have underlined that adults with AR have HRQOL scores significantly lower than those of the general population and even lower than those of patients with asthma 158. In addition, children and adolescents with AR had lower HRQOL scores than healthy peers 159. The effect of AR on the physical domain of QOL was comparable in teenagers with AR and in those with asthma 160.
The aspects of HRQOL that are relevant for patients with AR have been identified by disease-specific questionnaires 161,162 . A rich literature shows how the presence and severity of symptoms negatively affect daily activities, performance, sleep, physical and emotional status and social functioning at all ages 163 ; the effect of AR and multimorbid asthma [164][165][166] ; the possibility to minimize or delete AR impact 164 ; and the effect of symptomatic treatments 167 or specific immunotherapy. Of note, AR impairs QOL to a greater extent than moderate asthma 158 and significantly impairs work productivity 4 .
Interest in the patients' perspective is continuously growing in AR research. Nonetheless, the routine use of HRQOL in clinical practice, which is encouraged owing to the potential to optimize disease management 168,169 , remains limited. In the next few years, the questionnaires for assessing and monitoring AR HRQOL in individual patients need to be validated 170 and introduced into routine care.
Mobile health
Multimorbidity in allergic airway diseases is well known 76 , but a mobile application (MASK-air) created to assess how multimorbidity affects symptoms and severity has provided further findings 32 . Indeed, data from this mobile application have revealed that AR and rhinoconjunctivitis do not seem to be the same disease and have identified a pattern of uncontrolled multimorbidity in some patients (uncontrolled rhinitis, conjunctivitis and asthma on the same day) 32 . Data from such mobile applications are generating hypotheses that need confirmation in epidemiological studies. In this regard, differences between AR alone and AR associated with conjunctivitis were previously known 29 but epidemiological studies using data from mobile applications demonstrated that ocular symptoms are more common in patients with polysensitization 30 , are associated with nasal symptom severity 33 and are important to consider in severe asthma 33 . Moreover, the severity of allergic diseases increases with the number of allergic multimorbidities 34 . This is the first example of a discovery of novel allergic phenotypes using a mobile health application confirmed by classic epidemiological studies. Other mobile tools have been proposed 171 but few have been tested 172 . There is an urgent need to replicate existing data and to optimize mobile health for AR management in the digital transformation of health and care 173 . An interesting approach will be to propose alerts for pollen 174 , pollution or asthma exacerbations 175 .
Mechanisms and multimorbidity
As previously mentioned, allergic diseases are heterogeneous; some patients have AR alone, whereas others have AR and asthma (with or without other allergic manifestations), although few patients have asthma alone. In addition, there are probably common genes associated with asthma and AR and specific genes associated with AR alone. Combining big data analyses (such as from the MASK-air application 135 ), classical epidemiological studies, in silico analysis, transcriptomics using microarray data (as exemplified in MeDALL 22,24,35 ) or RNA sequencing 176 has led to the reclassification of the mechanisms of allergic diseases. For example, polysensitization and multimorbidity represent the extreme allergic phenotype, starting early in life, and are associated with IL5 and IL33 activation. Notably, several of these and other genes associated with allergic multimorbidity point towards type 2 inflammation 177 and eosinophil activation 24 . By contrast, rhinitis alone is associated with Toll-like receptor pathways, which have a key role in the innate immune system. Assessing these mechanisms in more detail may allow better understanding of the mechanisms of allergy and provide novel insights for prevention and treatment. Although it is well established that rhinitis can lead to asthma, the exact phenotype of AR prone to developing asthma is still unclear. It is possible that polysensitized individuals can more commonly develop asthma.
New treatments
As many patients do not experience full relief from AR symptoms with available treatments, knowledge gaps undermine the development of new pharmacological and biological interventions to improve management. One important area of research is the identification of novel reliable biomarkers to phenotype or endotype patients with AR to predict treatment response and management strategies. Other areas of research include the elucidation of novel molecular mechanisms involved in allergen-specific response perpetuation in the nasal mucosa and translational integration of genomics, transcriptomics, proteomics and metabolomics into health care.
Novel cost-effective pharmacological treatments
Despite AR being one of the most prevalent diseases in the world with a high economic burden, no big pharmaceutical companies are developing novel treatments. The reasons for this paradox are complex but can be summarized as follows: overall, low-cost medications are effective in most patients if they are used appropriately, and, in many countries, they are given over the counter with no cost for the payers; AR is not a life-threatening disease and payers prioritize life-saving medications, for example, for cancer, rare diseases and COVID-19; developing medications with an efficacy substantially higher than that of those currently on the market may be difficult; and, owing to the number of patients with AR, only low-cost medications may eventually be reimbursed if patients with AR are not stratified, and therefore the cost of the development of an AR treatment largely surpasses potential revenues and very few trials are ongoing 178 .
In addition, although monoclonal anti-IgE or anti-IL-4/IL-13 is effective in AR 179,180 and some of the available biologics (or those in the pipeline) for asthma may be suitable for patients with severe, multimorbid allergic conditions or stratified patients with very severe AR, there are only a few repurposing attempts for asthma medications. One example is montelukast, which, in Europe, is indicated only for asthma with AR multimorbidity as there was a request from the payers to lower the price of the asthma drug if the medication was also approved for AR alone. In addition, monoclonal antibodies to allergens may be of interest in those with AR caused by a single major allergen driving the allergic reaction.
Patient stratification is needed to identify the group of patients who are unresponsive to current medications and to estimate the indirect costs incurred by these patients. The estimated cost of AR in Europe owing to presenteeism ranges from €25 billion to €50 billion 6. A novel model of reimbursement of medications should be developed, with, for example, enterprises paying for a potential new treatment on the basis of a precise cost-effectiveness analysis showing potential benefits. Mobile health can have a role in this cost-effectiveness analysis.
Improved treatment
Increasing safety while maintaining or even increasing efficacy are the main goals of research for novel vaccine development and improvement of treatment schemes in AIT. Perspectives in AIT are well established 181 , and many new products are in development [182][183][184][185] . However, the vast majority of previous attempts failed because safety issues or lack of efficacy was observed. Future directions for conventional AIT include use of adjuvants, including vitamin D, Toll-like receptor ligand agonists, biologics 186 or probiotics 187 . Several attempts have been made to increase tolerance and efficacy using molecular allergy vaccines acting on B cells or T cells but none has produced convincing results 188 .
Treatment of AR could also be improved by the use of shared decision-making, a process whereby both the patient and the physician contribute to the medical decision-making process. In AR, data from mobile technology have revealed that patients are not adherent to treatment and self-medicate with multiple medications to control their disease. Accordingly, there is an urgent need to propose shared decision-making using mobile health tools to optimize AR treatment. In addition, drug repurposing could be useful for treatment of AR and could be aided by mobile health 135,189,190. Value-added medicine can help address unmet patient needs and could improve treatment-associated QOL.
The global allergy solution
The delivery of cost-effective modern health care is challenging for the management of chronic diseases and in particular for allergic diseases 191 , as management, which is often dependent on specialist and supporting services, is becoming unaffordable. Accordingly, innovative solutions often based on mobile health are required 192,193 .
Authorities should be supported in the transformation of health and care towards integrated care with organizational health literacy for allergic diseases 135,173,194. A simple global allergy solution should provide the framework to digitally transform the prevention and control of allergic diseases in a cost-effective manner. Mobile health could be used to optimize accessible and affordable treatment for stratified and participatory patients with allergic diseases and to support change management 195.
Prevention of SARS-CoV-2 Infection Among Police Officers in Poland—Implications for Public Health Policies
Background: This study aimed to characterize sources of knowledge on the means of prevention of SARS-CoV-2 infections as well as to assess the methods of preventing SARS-CoV-2 infection among police employees in Poland and their potential impact on the risk of SARS-CoV-2 infection. Methods: The study consisted of two phases: a questionnaire and laboratory tests for SARS-CoV-2 infection. The questionnaire included 30 questions related to risk factors, knowledge about SARS-CoV-2, and methods of infection prevention. Results: Data were obtained from 5082 police employees. The most common sources of knowledge for a daily update on SARS-CoV-2 infection prevention were the Internet (42.6%), television (40.3%), and radio (39.7%). The most commonly used methods of SARS-CoV-2 infection prevention included washing one's hands for at least 20 s (95.8%), wearing facemasks (82.9%), and physical distancing (74.9%). Anti-SARS-CoV-2 IgG seropositivity rates were lower in police units where the overall compliance with the preventive measures was higher (p < 0.01). Women were more likely to exercise SARS-CoV-2 infection prevention behaviors compared to men. Compliance with the recommended protective measures increased with age. Conclusions: Lower anti-SARS-CoV-2 IgG seropositivity rates were observed in police units with better overall compliance with the preventive measures, suggesting the key importance of group rather than individual behaviors.
Study Design and Population
This cross-sectional survey was carried out between 22 June and 8 July 2020 among police employees (officers and civilians) from the Mazowieckie Province, Poland. The study consisted of two phases-a questionnaire and laboratory tests for current (RT-PCR) and previous (ELISA) SARS-CoV-2 infection.
The questionnaire prepared for this study included 30 questions related to risk factors, knowledge about SARS-CoV-2, and methods of infection prevention. Moreover, questions related to socioeconomic characteristics and the types of occupational activities were also addressed. The questionnaire was based on previously published COVID-19-oriented research, with special emphasis on the WHO guidebook on behavioral insights studies related to COVID-19 [35]. A pilot test of the first draft of the questionnaire was carried out among 10 police employees. After the pilot test, two questions related to the type of service were added and one question related to personal characteristics was removed because of the potential for identifying individual respondents. The questionnaires were collected using a Computer-Assisted Web Interview (CAWI) method. The survey took no more than 15 minutes to complete. Mandatory field control was enabled to avoid missing data.
For laboratory testing, nasopharyngeal swabs and serum samples were collected. Detection of SARS-CoV-2 RNA by RT-PCR qualified as a positive test result (current infection). Anti-SARS-CoV-2 IgM + IgA index above 8 was considered positive, while participants with indexes between 6 and 8 were considered equivocal. Anti-SARS-CoV-2 IgG index above 6 was considered positive and participants with indexes between 4 and 6 were considered equivocal. All the testing procedures were carried out in accordance with the WHO guidelines. A detailed testing methodology was described in the previous article [36].
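As an illustration of the cut-offs described above, the following sketch classifies antibody indexes into negative, equivocal and positive results. Only the thresholds come from the text; the function and variable names are hypothetical.

```python
# Illustrative classification of antibody indexes using the cut-offs reported in the text.

def classify_igg(index: float) -> str:
    """Anti-SARS-CoV-2 IgG index: >6 positive, 4-6 equivocal, otherwise negative."""
    if index > 6:
        return "positive"
    if index >= 4:
        return "equivocal"
    return "negative"

def classify_igm_iga(index: float) -> str:
    """Anti-SARS-CoV-2 IgM + IgA index: >8 positive, 6-8 equivocal, otherwise negative."""
    if index > 8:
        return "positive"
    if index >= 6:
        return "equivocal"
    return "negative"

print(classify_igg(5.2))      # equivocal
print(classify_igm_iga(9.1))  # positive
```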
This study was carried out on an effective random sample of 5082 police employees from the Mazowieckie Province, Poland. A random sample selection was ensured by using the group selection technique (with the probability of drawing proportional to the size of the group) and stratified selection. Participation in the study was voluntary. The study protocol was approved by the Ethical Review Board at the Medical University of Warsaw, Warsaw, Poland (document no. KB/87/2020).
Variables
The scale for the assessment of compliance with the measures aimed at preventing SARS-CoV-2 infection was based on 8 questions regarding: (i) hand washing; (ii) avoidance of touching one's nose and mouth; (iii) the use of hand disinfectants; (iv) caution when opening mail; (v) wearing face masks; (vi) physical distancing (at least 2 m); (vii) surface disinfection; and (viii) phone disinfection. Other behaviors not directly related with the reduction of SARS-CoV-2 infection risk (e.g., covering one's mouth, flu vaccinations) were not taken into account. A score of 2 was awarded if a subject practiced a particular form of prevention at the time of the study; a score of 1 was awarded when the subject used to practice that form before (and discontinued it). Zero points were awarded for non-compliance with a particular preventive measure. The scale was an additive scale with the total score ranging from 0-16 points. The scale reliability test afforded a Cronbach's alpha of 0.720.
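A minimal sketch of this additive scoring is given below, assuming the eight behaviours are already coded as 2 (currently practised), 1 (practised and discontinued) or 0 (never practised); the item labels are paraphrases, not the questionnaire wording.

```python
# Additive compliance score (0-16) over the eight preventive behaviours described above.
# Item keys are paraphrased labels, not the original questionnaire items.

BEHAVIOURS = [
    "hand_washing", "avoid_touching_face", "hand_disinfectant", "caution_with_mail",
    "face_mask", "physical_distancing", "surface_disinfection", "phone_disinfection",
]

def compliance_score(answers: dict) -> int:
    """answers maps each behaviour to 2, 1 or 0; missing items count as 0."""
    return sum(answers.get(b, 0) for b in BEHAVIOURS)

example = {b: 2 for b in BEHAVIOURS[:6]}          # six behaviours currently practised
example.update({"surface_disinfection": 1,        # practised previously, then discontinued
                "phone_disinfection": 0})         # never practised
print(compliance_score(example))  # 13 -> falls into the "high compliance" band (13-16)
```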
The scale for the assessment of the use of information sources was based on 8 questions regarding the frequency of using the following sources when searching for information on the novel coronavirus: (i) TV; (ii) radio; (iii) press; (iv) conversations with family and relatives; (v) conversations with friends; (vi) websites/news; (vii) social media; and (viii) official government announcements. For each source, the subjects chose one of 6 possible answers ranging from "never" to "several times a day". A scale of 0 to 40 points was constructed by adding individual scores (from 0 to 5) assigned to each answer. The scale reliability test afforded a Cronbach's alpha of 0.848.
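Both reliability coefficients can be reproduced with the standard Cronbach's alpha formula. The sketch below assumes a respondents-by-items matrix of raw answer scores and is not tied to the SPSS procedure used in the study; the demo data are randomly generated placeholders.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of the total score).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
demo = rng.integers(0, 6, size=(100, 8))  # hypothetical 0-5 answers for 8 information sources
print(round(cronbach_alpha(demo), 3))
```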
Each subject was asked a series of questions regarding the prevalence of chronic diseases. Diseases taken into account included respiratory diseases, allergies, urinary tract diseases, cardiovascular diseases, diabetes, gastrointestinal tract diseases, endocrine diseases, and cancer.
External data regarding the number of registered cases and deaths per 10,000 inhabitants of individual districts in the Mazovian voivodeship were also included in the logistic regression models. Epidemiological data were derived from the reports published by the State Sanitary Inspection (as of 8 July 2020).
Statistical Analysis
Statistical analysis was carried out using the SPSS software package (IBM, Armonk, NY, USA) version 26. The chi-square test was used to determine significance for ordinal and categorical variables. Statistical significance was defined as p < 0.05. The scales were verified using scale reliability analysis, and the reliability of the scales was assessed using Cronbach's alpha. The strength of the relationships between variables was assessed using odds ratios (ORs) calculated from multivariate logistic regression models.
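The analyses were run in SPSS; a rough Python equivalent of deriving odds ratios with confidence intervals from a multivariate logistic regression is sketched below. Variable names and data are entirely hypothetical placeholders.

```python
# Sketch: fit a logistic regression and report odds ratios with 95% confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "igg_positive_or_equivocal": rng.integers(0, 2, 500),  # outcome (placeholder data)
    "female": rng.integers(0, 2, 500),                     # dummy predictor
    "age_60_plus": rng.integers(0, 2, 500),                # dummy predictor
})

X = sm.add_constant(df[["female", "age_60_plus"]])
model = sm.Logit(df["igg_positive_or_equivocal"], X).fit(disp=False)

odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(odds_ratios)
```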
Model I. The explained variable consisted of a positive or an ambiguous (i.e., ≥4) result of the anti-SARS-CoV-2 IgG screening test. Explanatory variables were entered as a series of dummy variables (0-1) and included: gender, age, population of the area of residence, living alone or with others, type of service (officer vs. civilian employee), type of work, daily number of contacts with other people, leaving the country to visit selected destinations since 1 January 2020, as well as the eight preventive measures to minimize the risk of novel coronavirus infection. Continuous variables entered into the model included the number of infection cases per 10,000 district inhabitants, the number of infection-related deaths per 10,000 district inhabitants, and the percentage of subjects within the particular police unit presenting with a positive or ambiguous result of anti-SARS-CoV-2 antibody tests.
Model II. The explained variable was a score of 13-16 on the scale of compliance with the preventive measures aimed at reducing the risk of SARS-CoV-2 infection (0-16 points). Explanatory variables were entered as series of dummy variables (0-1) and included: gender, age, population of the area of residence, living alone or with others, type of service (officer vs. civilian employee), type of work, daily number of contacts with other people, prevalence of at least one chronic disease from the list, self-assessed health status, and frequency of using various sources of information regarding the novel coronavirus. Continuous variables entered into the model included the number of infection cases per 10,000 district inhabitants, the number of infection-related deaths per 10,000 district inhabitants, and the overall score on the scale assessing the use of coronavirus information sources.
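The dummy (0-1) coding of categorical predictors against the reference categories listed under Table 2 could look like the following sketch; the category labels are illustrative, not the exact questionnaire categories.

```python
# Sketch of 0-1 dummy coding with explicit reference categories dropped
# (e.g., age 20-29 years and male act as references, as in Table 2).
import pandas as pd

raw = pd.DataFrame({
    "age_group": ["20-29", "30-39", "60+", "20-29"],
    "gender": ["male", "female", "male", "female"],
})

dummies = pd.get_dummies(raw, columns=["age_group", "gender"], dtype=int)
# Drop the reference categories so each remaining dummy is interpreted against them.
dummies = dummies.drop(columns=["age_group_20-29", "gender_male"])
print(dummies)
```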
Group Characteristics
Police officers accounted for 79.2% of the study population of 5082 subjects, whereas civilian employees accounted for the remaining 20.8%. Female subjects accounted for 33.5% of the study population. The mean age of subjects was 39.6 years (SD = 8.9) for the overall population and 40.7 years (9.6) and 39 years (8.5) for female and male subjects, respectively.
A total of 30.1% of subjects were engaged only in office work, while another 17.3% were engaged in field service alone. The remaining subjects (52.6%) were engaged in both types of work. Among all participants, 2.4% received flu shots during the previous influenza season.
Sources of Knowledge about SARS-CoV-2 Infection Prevention
A vast majority of the overall population of police workers (95.6%) agreed with the statement that they were well informed on the SARS-CoV-2 coronavirus. In the group of police officers, 23.0% of subjects agreed with the statement whereas 72.4% rather agreed with it. In the group of civilian employees, the respective percentages were 18.0% and 78.3% (p < 0.01).
The most common sources of knowledge for a daily update on SARS-CoV-2 infection prevention were the Internet (websites) and traditional media, such as television and radio. Almost one third of participants (31.1%) followed daily announcements published by the official government institutions. Among participants, 28.5% used social media to search for information about SARS-CoV-2 infection daily. Every fourth respondent talked with family or friends about the coronavirus daily. Newspapers were the least common source of knowledge about the coronavirus. Details are presented in Table 1.
Table 1. Source of knowledge on SARS-CoV-2 infection prevention (n = 5082).
Prevalence of SARS-CoV-2 Antibodies
No active SARS-CoV-2 infection was detected by means of RT-PCR in any of the tested subjects. A positive result of the IgG screening test (>6) was determined in 4.3% of subjects. In another 13.2% of subjects, ambiguous results (range of 4-6) were obtained. A positive result of the IgA + IgM screening test (>8) was determined in 8.9% of subjects. In another 9.8% of subjects, ambiguous results (range of 6-8) were obtained. Detailed data on the infection rates within the group of police workers are provided in the previously published paper [36].
SARS-CoV-2 Infection Prevention Methods
In the study group (n = 5082), the most commonly used methods of SARS-CoV-2 infection prevention included washing one's hands for at least 20 seconds and covering one's nose and mouth while coughing or sneezing. These behaviors were reported by 95.8% and 93.8% of responders, respectively. Wearing face masks was reported by 82.9% of subjects, while physical distancing was declared by 74.9%. Previous practicing (and subsequent discontinuation) of physical distancing and wearing face masks was declared by 18.6% and 15.7% of subjects, respectively. Detailed data are presented in Figure 1. The mean score on the scale of compliance with the preventive measures aimed at reducing the risk of SARS-CoV-2 infection (0-16) was 13.4 (SD = 2.8), and the median score was 14.0. In the overall population, 36.0% of subjects scored the full 16 points, meaning that they practiced all eight preventive measures selected for further analyses at the time of the study. Another 31.2% of subjects scored 13-15 points, meaning that they practiced nearly all the preventive measures. Every tenth subject (10.0%) practiced the preventive measures in a selective manner or did not practice them at all (score of 0-9). Compliance with the prevention principles was declared more frequently by female subjects than by male subjects: scores of 13 or higher were obtained by 71.6% of female responders and 65.1% of male responders (p < 0.001). Age was another factor with a statistically significant impact on the answers provided (p < 0.001): scores of 13 or higher were obtained by 62.9% of the youngest police workers (age range 20-29 years) compared with 82.9% in the oldest age group (60 years or older). Lower compliance with the recommendations was observed in police officers compared with civilian employees; scores of 13 or higher were obtained by 64.9% and 76.2% of subjects in these groups, respectively (p < 0.001). Significant relationships were also observed with regard to the type of work (office vs. field) (p < 0.01), self-assessed health status (p < 0.05), traditional smoking status (p < 0.05), and level of awareness regarding the novel coronavirus (p < 0.001). No differences were observed with regard to the number of inhabitants in the area of residence (p = 0.997), prevalence of at least one chronic disease (p = 0.283), or flu shots received during the last influenza season (p = 0.925). Detailed data are presented in Figure 2.
Analysis of Relationships Between Variables
Cox & Snell R-Squared of 0.044 and Nagelkerke R-Squared of 0.073 were obtained for the logistic regression model aimed at predicting a positive or ambiguous (i.e., ≥4) result of an IgG screening test. Among the analyzed variables, statistically significant differences were identified for: age of 60 years or older (OR = 2.125; 95%CI 1.300-3.475) vs. age of 20-29 years; below 20,000 inhabitants in the area of residence (OR = 1.361; 95%CI 1.063-1.742) and more than 500,000 inhabitants in the area of residence (OR = 1.387; 95%CI 1.122-1.714) vs. rural residence, and particular caution when opening mail being practiced at the time of the study (OR = 0.813; 95%CI 0.666-0.992) vs. no caution when opening mail being practiced at any time. Statistical significance was also observed for the continuous variable corresponding to the percentage of workers at the particular police unit from whom positive or ambiguous results of IgG screening tests were obtained (p < 0.001). Details are presented in Table S1.
The percentage of workers of whom positive or ambiguous results of IgG screening tests were obtained was determined for each police unit taking part in the survey. Then, the results were compared with the mean score in the scale assessing compliance with the protective measures aimed at reducing the risk of SARS-CoV-2 infection. The obtained result was suggestive of the infection rate being lower in groups of more compliant subjects, with Pearson's coefficient amounting to −0.253 (p < 0.01).
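This unit-level (ecological) comparison amounts to aggregating individual records per police unit and correlating the two aggregates. A sketch under assumed column names and with placeholder data follows; it is not the original SPSS analysis.

```python
# Sketch of the ecological correlation: per-unit mean compliance score vs.
# per-unit share of positive/ambiguous IgG results. Data and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

# One row per participant: unit id, compliance score (0-16), IgG index.
df = pd.DataFrame({
    "unit": ["A", "A", "B", "B", "C", "C"],
    "compliance": [16, 14, 9, 11, 15, 13],
    "igg_index": [3.0, 2.5, 6.5, 4.2, 1.8, 2.1],
})

per_unit = (
    df.assign(igg_pos_or_ambig=df["igg_index"] >= 4)  # positive or ambiguous cut-off from the text
      .groupby("unit")
      .agg(mean_compliance=("compliance", "mean"),
           pct_positive=("igg_pos_or_ambig", "mean"))
)
r, p = pearsonr(per_unit["mean_compliance"], per_unit["pct_positive"])
print(per_unit)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```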
The other logistic regression model explained the high scores (13-16 points) on the scale assessing compliance with the protective measures aimed at reducing the novel infection risk (scale of 0-16) against selected factors. Cox & Snell R-squared of 0.109 and Nagelkerke R-squared of 0.152 were obtained for this model. Women were more likely to follow more recommendations for preventing new infections than men (OR = 1.182; 95%CI: 1.009-1.385). The willingness to comply with various prevention measures increased with subjects' age. This was most evident in the eldest age group (60 or more years), where the odds ratio against the group of workers aged 20-29 was determined at OR = 2.504 (95%CI: 1.467-4.272). Officers were less willing to comply with recommendations compared to civilian employees (OR = 0.648; 95%CI: 0.518-0.810). Subjects who declared daily numbers of contacted individuals of 20-49 (OR = 0.811; 95%CI: 0.659-0.998) or 50-100 (OR = 0.760; 95%CI: 0.586-0.987) were less strict in their attitudes towards preventive measures than individuals with either very high (over 100) or lower numbers (less than 20) of contacts. Frequent use of radio, press, or government announcements as sources of information on the novel coronavirus favored stricter compliance with prophylactic recommendations (OR between 1.326 and 1.915). On the other hand, a negative relationship with compliance was observed for the overall scale assessing the use of coronavirus information sources (0-40 points). No statistical significance was determined for the remaining parameters. Details are presented in Table 2.
* Reference category: male; age: 20-29 years; place of residence: rural; civilian employee; field work; daily number of contacts: <10 individuals; no chronic diseases; self-assessed health status: bad or very bad; TV: never or less than once a week; radio: never or less than once a week; press: never or less than once a week; conversations with relatives: never or less than once a week; conversations with friends: never or less than once a week; websites: never or less than once a week; social media: never or less than once a week; government announcements: never or less than once a week. Bold format refers to statistically significant results.
Discussion
To the authors' best knowledge, this is one of the first studies on SARS-CoV-2 infection prevention methods and their impact on seroprevalence among uniformed services (police employees). We observed a relatively high level of knowledge about SARS-CoV-2 infection prevention methods among police employees. Most of the participants practiced preventive behaviors, such as handwashing and using facemasks; however, a significant deficiency was observed with regard to the practice of social distancing. Our findings showed that women were much more likely to exercise SARS-CoV-2 infection prevention behaviors compared to men. Compliance with the recommended protective measures increased with age.
Our study showed that police employees used both traditional media (radio and television) as well as new media (Internet) to search for daily information about SARS-CoV-2 infection prevention methods. Moreover, almost one third followed the official government announcements. A study from China showed that media coverage can be considered an effective way of mitigating the spread of the COVID-19 pandemic [39]. In Poland, information regarding the epidemiological situation in the country and news about the SARS-CoV-2 coronavirus were available on the headlines of all mass media; therefore, there was wide access to the information. Instructional materials on SARS-CoV-2 infection prevention methods were broadly available across the traditional as well as online media. Moreover, the Ministry of Health published a daily report with the number of new laboratory-confirmed COVID-19 and COVID-19-related deaths as well as a list of subregions with the highest COVID-19 burden [40]. Police employees also received dedicated informative materials related to SARS-CoV-2 infection prevention methods that should be applied on duty [41]. This document included hazard identification, basic principles of protection against COVID-19, personal protective equipment (PPE) characteristics, and methods of deactivation of SARS-CoV-2 [41]. We can hypothesize that due to the multiple sources of information on the prevention of SARS-CoV-2 infections, police officers declared a high level of knowledge about SARS-CoV-2 infection prevention methods. The study conducted among 471 health care workers in Greece showed that a high level of knowledge concerning the SARS-CoV-2 pandemic among health care workers was significantly associated with positive attitudes and practices towards preventive health measures [42]. Similarly, as in our study, the major source of knowledge about COVID-19 among Greek health care workers was TV/radio (69.8%) and the Internet/web pages/blogs (63%) [42]. This observation confirms the need for health communication regarding the coronavirus using traditional media as well as Internet-based communication methods.
Various public health measures aimed at limiting the spread of SARS-CoV-2 infection were applied across the world [33,43]. However, findings from the meta-analysis showed that physical distancing, wearing a facemask, eye protection, and hand hygiene are the most effective ways of SARS-CoV-2 infection prevention [26]. In our study, the most widely promoted methods of preventing SARS-CoV-2 infections, i.e., hand hygiene, maintaining physical distance, and wearing a facemask, were declared by the vast majority of respondents. However, a quarter of the respondents did not pay attention to keeping their physical distance at the time of the survey and almost a fifth did not wear a face mask. It is estimated that implementation of physical distancing is associated with a 29% reduction in COVID-19 incidence and a 35% reduction in COVID-19 mortality [44]. We can hypothesize that the lack of compliance with physical distancing rules may result from the occupational duties, especially among police officers designated to protecting public gatherings. Compliance with wearing a facemask may be associated with the ergonomic characteristics of different types of facemasks [45,46]. The study on facial skin temperature and discomfort when wearing a facemask showed that wearing N95 respirators compared to surgical masks produces increased facial skin temperature, greater discomfort, and lower wearing adherence [45]. Professional groups required to wear a facemask in the workplace may use a personalized fitting method (e.g., device with a 3-dimensional solution) for prevention of oronasal mask-related pressure ulcers [46]. Moreover, our findings revealed that a significant percentage of police employees did not comply with the principles of mobile phone disinfection and touching surfaces. A systematic review including 56 papers showed that mobile phones represent a significant pathway for microbial transmission in the healthcare setting as well as in community settings [47]. It is suggested that surfaces of mobile phones may play an important role in the transmission of SARS-CoV-2 infections in an epidemic outbreak [47]. Disinfecting objects and surfaces is a part of a strategy aimed at mitigating the spread of SARS-CoV-2 infection [48,49]. Given that the study looked at police employees who should be considered a high-risk group, the observed shortcomings in compliance with SARS-CoV-2 infection prevention guidelines require urgent educational actions. The study conducted among 123,768 large labor-intensive factory workers in China showed that the majority of respondents had a strong awareness of COVID-19, however some knowledge misconceptions (e.g., related to garlic and Vitamin C in infection prevention) were also observed [50]. Moreover, better-educated respondents had increased levels of knowledge and practices related to COVID-19 [50]. In our study, eating garlic, ginger, or lemon was practiced by 35.8% of respondents. This observation suggests a need to provide an evidence-based educational campaign to combat the spread of misinformation on SARS-CoV-2 prevention methods.
Women were more likely to practice preventive behaviors than men. This is in line with previous observations related to preventive behaviors during the COVID-19 pandemic. The study conducted in China after the lockdown of Hubei Province showed that women displayed a higher level of knowledge about SARS-CoV-2 infection prevention than men [51]. Moreover, women more often practiced behaviors such as wearing a facemask and maintaining social isolation than men [51]. Additionally, numerous studies showed that gender is a determinant of compliance with handwashing recommendations and impacts the frequency of handwashing [52,53]. Our findings are in line with the study carried out in a sample of 2323 Polish secondary school students [54], where female secondary school students presented a higher level of knowledge about SARS-CoV-2 infection prevention. Similarly, as in our study, females practiced hand hygiene and personal protection behaviors more often than men [54].
Our findings showed that police employees aged 60 years and more presented higher compliance with SARS-CoV-2 infection prevention methods compared to the younger groups. Older adults are at higher risk for developing more serious complications from COVID-19 [55]. We can hypothesize that older police employees were afraid of the potential health consequences of SARS-CoV-2 infection and more often implemented preventive methods than younger groups because of that. Non-compliance with the recommended protective measures among adolescents and young adults was noted by the WHO, who asked the adolescents and young adults to follow the recommendations aimed at preventing SARS-CoV-2 transmission in the community [56]. An asymptomatic or oligosymptomatic COVID-19 course is observed among younger age groups [6,7] and due to this, they may underestimate the threat of COVID-19 to their own health, while becoming a source of transmission of the virus to their parents and grandparents, who may be severely affected.
In our study, people using more reliable/institutionalized sources of knowledge about the coronavirus (i.e., radio, press, official announcements) showed a higher level of compliance with infection prevention measures. An excessive amount of information, including that of questionable reliability, is conducive to departing from the principles of infection prevention. The COVID-19 pandemic is the first pandemic occurring in the era of social media. Social media allows for the quick provision of information on SARS-CoV-2 infection prevention, but may also be a source of misinformation. Misinformation on COVID-19 may shape individual's response, increasing the risk of hazardous behaviors [57,58]. Healthcare professionals are responsible for providing evidence-based knowledge about SARS-CoV-2 infection prevention. In Poland, the family doctor has the highest level of trust of all medical professions [59], so family doctors should be tasked with disseminating information about evidence-based preventive methods to limit the transmission of SARS-CoV-2 infection.
Among the infection prevention methods analyzed in the logistic regression model, a statistically significant relationship between the risk of a positive or ambiguous result of IgG screening test was observed only for caution when opening mail. The authors are inclined towards a hypothesis that this finding is not necessarily suggestive of the efficacy of this particular preventive measure; instead, it may mean that individuals declaring this behavior during the study tended to more strictly comply with other recommendations as well. Thus, this measure can be considered an indicator of one's attitude to the prevention of new SARS-CoV-2 infections.
With the exception of one situation, no relationships were observed between compliance with prevention methods and positive or ambiguous history of infection (IgG). Results of IgG screening tests were lower in police units where the overall compliance with the preventive measures was higher. Lack of relationships between compliance with most measures aimed at preventing SARS-CoV-2 infection and the positive or ambiguous IgG screening test results (which confirmed the history of infection) at the level of individual subjects may suggest that the problem of infection prevention should be considered from the perspective of groups rather than individuals. In other words, in the case of non-complying individuals, the risk may be markedly reduced by proper behavior of other individuals in their close surroundings. The phenomenon is similar to that observed in relation to vaccinations and herd immunity. The aptness of this hypothesis is supported by the result of ecological assessment in which a relationship was demonstrated between the degree of compliance with the preventive measures at the level of police units and the incidence of positive or ambiguous IgG screening test results among the employees of these units. This may be indicative of the key importance of group rather than individual behaviors when it comes to preventing new infections. Public health policies aimed at mitigating spread of SARS-CoV-2 infection should include group behaviors.
Our findings have some practical implications. Lower compliance with SARS-CoV-2 infection prevention methods among men and younger age groups points to the need for education on the methods of preventing SARS-CoV-2 infections in these groups. In clinical practice, physicians, especially family doctors, should make sure to educate on SARS-CoV-2 prevention methods, especially among the high-risk groups, such as the uniformed services. Employers should promote group preventive behaviors as such measures have the greatest effects on limiting SARS-CoV-2 transmission in occupational settings. Methods of SARS-CoV-2 infection prevention should be included in employee training and work regulations.
Most studies concerning occupational exposure to the SARS-CoV-2 coronavirus were carried out among healthcare workers [28][29][30][31][32]. Data on preventive behaviors among other occupational groups is very limited. Due to the limited scientific evidence, comparing our results to other studies is very difficult. The results of our study emphasize the urgent need for further research on the topic in other occupational groups at the highest risk of SARS-CoV-2 transmission, including transport workers, services and retail workers, as well as uniformed services [29].
This study has several limitations. Firstly, the practiced preventive behaviors were self-declared by the participants, but were not verified by the research team, so we cannot exclude the possibility of recall bias. Secondly, we did not assess the frequency of practiced preventive behaviors. We cannot rule out that some of the participants exercising the SARS-CoV-2 preventive methods may have done it too rarely. Nevertheless, this is the first study on SARS-CoV-2 infection prevention among police employees in Poland. Thirdly, this study was carried out among uniformed services and results cannot be generalized to the whole population.
Conclusions
Lower anti-SARS-CoV-2 IgG seropositivity rates were observed in police units with better overall compliance with the preventive measures, suggesting the key importance of group rather than individual behaviors when it comes to preventing new infections; such group behaviors should be included in public health policies aimed at mitigating SARS-CoV-2 transmission. Willingness to comply with a larger number of preventive methods increased with age. The group of subjects aged 60 or older was most willing to comply with the recommendations for the prevention of SARS-CoV-2 infection. Individuals seeking information on the novel coronavirus in the more institutionalized sources, such as radio, press, or TV, showed better compliance with the recommended prevention methods.
Conflicts of Interest:
The authors declare no conflict of interest.
Relationships between Microstructural and Mechanical Performance on Example of an Air‐Hardening Steel
Because of an excellent combination of strength and ductility, mono-phase low-alloyed steels with a bimodal grain size are an appropriate alternative to conventionally cold-rolled and annealed steels as well as to steels with a dual-phase microstructure. This study investigates how the microstructure of a low-alloyed air-hardening steel with either a homogeneous, a dual-phase, or a bimodal grain structure influences its mechanical and fatigue performance. The homogeneous ferritic grain microstructure of the steel sheets is adjusted by an intercritical annealing at 790 °C, along with subsequent air hardening to obtain a dual-phase state. Then, the ferritic-martensitic material is cold-rolled and annealed at 550–700 °C to produce different bimodal grain microstructures. The evolution of microstructure and mechanical properties is characterized. An annealing temperature of 600 °C is considered to be the optimal temperature, resulting in a pronounced bimodal grain size distribution. The sheet with a bimodal microstructure exhibits a higher strength and equal ductility compared with one with a homogeneous ferritic microstructure. Additionally, high-cycle fatigue tests of the material with a bimodal microstructure show its superior fatigue behavior at loadings above 800,000 cycles compared with both the homogeneous ferritic microstructure and the dual-phase microstructure.
Introduction
The automotive industry is forced to reduce the weight of its products in order to lessen its contribution to societal challenges such as global warming and the finiteness of resources. One way to reach this goal is to use lighter materials in body-in-white products. Additionally, lighter structural parts can be produced by adapting their geometry to specific loading conditions. Therefore, besides using high-strength materials that allow for a thickness reduction in structural parts, the sheet material should have a moderate to excellent formability. However, the use of different alloying concepts to meet the mentioned criteria is restricted by the material's weldability and the resulting costs, which are of high importance to the automotive industry. [1] New thermo-mechanical processing routes have made it possible to produce steels with two or more phase components in their microstructure, i.e., dual-phase (DP) or multiphase steels. [2][3][4] In DP steels, the presence of both a soft ferrite phase and a hard martensite phase results in a high strength and a moderate formability. Here, the combinations of strength and ductility properties can vary immensely and are easily adjustable through the appropriate parameters of intercritical annealing. [2,3] However, due to localized straining at cold forming, which primarily occurs in the softer ferrite phase, these steels have a comparably low local formability. [5] In contrast to DP steels, complex phase steels consist mainly of bainite or tempered martensite. These phases are harder than soft ferrite but softer than the untempered martensite in DP steels, resulting in a high local formability. Yet, due to the reduced ductility of the phases, these steels possess a relatively poor global formability. [5] Furthermore, their high yield strength (YS) results in a significant springback during cold forming and complicates the design process of the cold forming tool. [6] Because of their microstructure, which largely consists of a ferrite matrix and retained meta-stable austenite, transformation-induced plasticity steels (TRIP steels) have high strength properties similar to those of DP steels, but with a better global formability due to the transformation of metastable retained austenite into martensite during cold forming. [5] In contrast to TRIP steels, twinning-induced plasticity steels (TWIP steels) possess a mainly stable austenitic microstructure at room temperature. Relatively low values of the stacking fault energy of austenite in these steels enable cold deformation via the twinning mechanism, which results in a very high strain hardening and thus ductility. [7] However, to retain austenite at ambient temperatures, alloying elements need to be added, which increases the material cost and reduces the weldability. [5] Structural parts with both high formability and strength can be produced by a hybrid thermo-mechanical processing. [8] These functionally graded parts have tailored properties, i.e., high ductility and formability in areas that are either to be cold formed or to absorb energy in the case of a crash, as well as a high strength in areas that should prevent intrusions in the case of a crash. This kind of processing can be complex in its practical implementation and needs to be adjusted on a case-by-case basis.
One way to improve a material's strength independent of its phase content is to refine the microstructure. Grain boundaries, for instance, act as obstacles against crack propagation from one grain to another. Hence, a high density of grain boundaries results in an improved high-cycle fatigue life of structural parts. [9,10] Such refinement can be achieved by various processing routes based on severe plastic deformation. [9] However, it also leads to a reduction in ductility and formability, making fine-grained materials less suitable for automotive applications. [9,10] Designing a bimodal grain microstructure with a heterogeneous grain size improves the ductility of fine-grained materials while maintaining their high strength and resistance to high-cycle loads. [11][12][13][14][15] These materials are commonly phase-homogeneous, i.e., they consist of a single phase, and simultaneously have a heterogeneous grain-size distribution, i.e., coarse grains surrounded by fine grains. [13] In addition to improving a fine-grained steel's ductility, a bimodal grain microstructure could potentially enhance its corrosion resistance as well. [13] In the last decade, the topic of bimodal ferritic microstructures in bulk low-carbon steels has been researched extensively. The following processing techniques lead to a bimodal ferrite grain size at room temperature: 1) Hot deformation at a temperature of 1000 °C, resulting in a partial recrystallization of the austenite grains, and thus, in a heterogeneous grain growth of the ferrite during the subsequent controlled cooling. [16,17] 2) Hot deformation slightly above the A_C3 temperature, allowing for a partial strain-induced γ→α transformation, and thus, the formation of a heterogeneous distribution of ferrite grain sizes after a subsequent controlled cooling. [16,17] 3) Hot deformation between the A_C1 and A_C3 temperatures, recrystallizing fine ferrite grains due to strain accumulation, and transforming austenite into coarse ferrite grains during a subsequent controlled cooling. [17,18] 4) Hot deformation at 1050 °C, followed by a warm deformation at 550 °C or by a cold deformation with a subsequent rapid annealing between the A_C1 and A_C3 temperatures. The rapid annealing of the deformed ferritic microstructure with a heterogeneous distribution of cementite results in the formation of very fine austenite grains in the vicinity of the cementite particles as well as in the coarsening of the existing deformed ferrite grains due to recrystallization. A final controlled cooling leads to the transformation of the austenite into fine ferrite grains alongside the coarse recrystallized ferrite grains. [19] 5) Annealing between the A_C1 and A_C3 temperatures with [12] or without a deformation, [13,14] followed by quenching to obtain a DP microstructure consisting of martensite and ferrite. Cold rolling and annealing below the A_C1 temperature with a subsequent controlled cooling result in an abnormal coarsening of the severely deformed ferrite grains, which are surrounded by fine ferrite grains transformed from the martensite. [12,13,15] The last processing technique, i.e., "annealing between the A_C1 and A_C3 temperatures → cold rolling → annealing below the A_C1 temperature", enables a simpler temperature control during the annealing and deformation steps compared with techniques comprising warm or hot rolling. Owing to its high dimensional control, cold rolling is an elementary deformation step for producing thin steel sheets, which are commonly used for structural parts of the body-in-white.
Therefore, the mentioned processing technique seems to be the most appropriate for the specific goals of the automotive industry.
The temperature during the last annealing step directly influences the extent of the ferrite recrystallization as well as the transformation behavior of the martensite into ferrite. Hence, it can be adjusted to reach the desired combination of strength and ductility in steel sheets with a bimodal microstructure. However, this aspect has only been researched superficially to date. [12,15] Furthermore, there is no information about the fatigue behavior of low-carbon steels with a bimodal ferritic microstructure, which has therefore never been compared with the fatigue performance of the same steel in other microstructural states. Thus, the aim of this work was to establish relationships between the microstructure and the mechanical properties as a function of the temperature during the final annealing step, as well as to compare the fatigue behavior of low-carbon steels with homogeneous ferritic, bimodal ferritic, and ferritic-martensitic DP microstructures.
Processing of the Investigated Material
The air-hardening steel RobuSal 800 (formerly LH800) with a thickness of 1.65 mm was examined both in a cold-rolled and in an annealed state. Due to a well-designed alloying concept (see Table 1), this steel demonstrates a high formability in its annealed state as well as a high strength after austenitization and quenching. Simultaneously, its critical cooling rate amounts to 10 K s⁻¹, i.e., the steel can easily be hardened by air cooling. More detailed information about this steel's cold formability in the annealed state as well as its heat treatability can be found elsewhere. [2,20,21] As the first processing step, the strips cut off from the sheet, which initially had a homogeneous ferritic microstructure (homogeneous microstructure or HM state), were intercritically annealed to obtain a ferritic-martensitic microstructure with each phase making up approximately 50% (DP microstructure or DP state). Both the temperature and the annealing time necessary to obtain this phase composition were chosen based on a previous study, [2] amounting to 790 °C and 15 min, respectively. Subsequently, while some strips were air-cooled, others were cold-rolled with a reduction of 70% to produce a severely deformed DP microstructure (cold-rolled or CR state). Here, the cold rolling direction corresponded to the rolling direction of the initial material. Lastly, the CR strips were annealed for 45 min at temperatures between 550 and 700 °C in steps of 50 °C and air-cooled to room temperature to form a ferritic bimodal grain microstructure (bimodal microstructure or BM state).
Table 1. Chemical composition (in weight percent) of the steel RobuSal 800.
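To keep the three material states and their processing parameters apparent at a glance, the route can be written down as a small parameter list. The following Python sketch merely restates the temperatures, times, and rolling reduction given above; the step labels and data layout are illustrative and not part of the original work.

```python
# Thermo-mechanical route described above (parameter values from the text;
# step labels and dictionary layout are illustrative only).
PROCESSING_ROUTE = [
    {"step": "intercritical annealing", "temperature_C": 790, "time_min": 15,
     "cooling": "air"},                              # -> DP state (approx. 50% martensite)
    {"step": "cold rolling", "reduction_pct": 70},   # -> CR state
    {"step": "final annealing", "temperature_C": (550, 600, 650, 700),
     "time_min": 45, "cooling": "air"},              # -> BM states, varied in 50 °C steps
]

for step in PROCESSING_ROUTE:
    print(step)
```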
Characterization of the Mechanical Properties
The tensile tests were carried out using an MTS 858 Table Top System to measure the material's 0.2% YS, its ultimate tensile strength (UTS), its uniform elongation (E_u), and its elongation at fracture (E_lt). At least three dog-bone specimens (see Figure 1a) with a gage length of 8 mm and a width of 3 mm were sampled from strips of each investigated state by wire erosion, with the samples' longer side corresponding to the previous rolling direction. The specimens were tested at a cross-head speed of 0.01 mm s⁻¹.
For the fatigue tensile tests, samples with a geometry similar to ASTM E466-07 were used (see Figure 1b). After the wire erosion, all samples' surfaces were ground gradually with SiC papers with a grit range of P500, P1200, P2500, and P4000. The fatigue tests were carried out using the MTS 858 Table Top System at a frequency of 10 Hz and a load ratio of R = 0.1. For every S-N curve, i.e., the stress amplitude versus the corresponding number of cycles to failure, at least ten specimens were tested for up to 10⁶ cycles. Specimens that survived this number of cycles were defined as run-outs.
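Since the tests are run at a constant load ratio of R = 0.1, the maximum, minimum, and mean stresses of each cycle follow directly from the commanded stress amplitude. The sketch below only illustrates this standard relationship; it is not part of the described test procedure, and the 245 MPa example amplitude is the value reported later for the bimodal state at 10⁶ cycles.

```python
def cycle_stresses(amplitude_mpa: float, load_ratio: float = 0.1):
    """Return (sigma_max, sigma_min, sigma_mean) in MPa for a constant-R load cycle.

    Standard relations: sigma_min = R * sigma_max and
    amplitude = (sigma_max - sigma_min) / 2, hence sigma_max = 2 * amplitude / (1 - R).
    """
    sigma_max = 2.0 * amplitude_mpa / (1.0 - load_ratio)
    sigma_min = load_ratio * sigma_max
    sigma_mean = 0.5 * (sigma_max + sigma_min)
    return sigma_max, sigma_min, sigma_mean

# Example: 245 MPa amplitude at R = 0.1 -> approx. 544 / 54 / 299 MPa
print(cycle_stresses(245.0))
```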
Characterization of the Microstructure
To characterize the grain structure, the samples were cold mounted, mechanically ground, polished, etched in a 3% solution of nitric acid in ethanol (Nital etching), and then examined with a Zeiss Ultra Plus scanning electron microscope (SEM). The SEM was operated at an acceleration voltage of 20 kV using a secondary electron (SE) detector.
Additionally, some specimens were exposed to LePera etchant to reveal the ratio of martensite to ferrite in the microstructure. [23] The etched microstructure was examined under a Keyence VHX5000 digital optical microscope. Four lines were drawn in each analyzed image: two diagonal lines with a horizontal and a vertical line at their intersection. These lines were used to measure the total intercept length of the grains of each phase, and thus, to determine the percentage fraction of each phase. To do so, a minimum of 200 grains of each phase were considered.
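The phase fractions quoted in the Results (e.g., the ≈44% martensite in the DP state) follow from exactly this bookkeeping: the intercept lengths of each phase along the four test lines are summed and normalized by the total measured length. A minimal Python sketch of that calculation is shown below; the intercept values are hypothetical.

```python
def phase_fractions(intercepts_by_phase: dict) -> dict:
    """Line-intercept estimate: the fraction of each phase equals the summed intercept
    length of that phase divided by the total measured length over all test lines."""
    totals = {phase: sum(lengths) for phase, lengths in intercepts_by_phase.items()}
    grand_total = sum(totals.values())
    return {phase: length / grand_total for phase, length in totals.items()}

# Hypothetical intercept lengths (in micrometres) along the four test lines
measured = {
    "martensite": [112.0, 98.0, 105.0, 120.0],
    "ferrite":    [140.0, 130.0, 128.0, 155.0],
}
print(phase_fractions(measured))  # e.g., martensite ~0.44, ferrite ~0.56
```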
Additionally, the microhardness was measured with a testing force of 0.098 N (HV0.01) in order to reveal differences in strength properties between phases as well as between fine and coarse grains.
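For reference, a microhardness value such as HV0.01 follows from the test force and the mean indentation diagonal via the standard Vickers relation HV = 0.1891 · F / d² (F in N, d in mm). The sketch below restates this textbook formula; it is not taken from the paper, and the example diagonal is hypothetical.

```python
def vickers_hardness(force_n: float, mean_diagonal_mm: float) -> float:
    """Standard Vickers relation: HV = 0.1891 * F / d**2 with F in newtons and d in mm."""
    return 0.1891 * force_n / mean_diagonal_mm ** 2

# With the 0.098 N test force (HV0.01) used here, a ~9.6 um diagonal gives HV ~ 200
print(round(vickers_hardness(0.098, 0.0096)))
```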
A detailed characterization of the microstructure was performed with the transmission electron microscope (TEM) CM 200 from Philips. The TEM sample was prepared by grinding the material to a thickness of 150 μm and by a subsequent twin-jet electropolishing, utilizing a solution of 5% perchloric acid in ethanol under an applied potential of 25 V at −40 °C until electron transparency was achieved. The analysis was carried out at a nominal acceleration voltage of 120 kV.
Results
The material's mechanical properties in different microstructural states are summarized in Table 2. Typical stress-strain curves are presented in Figure 2.
The material with a homogeneous ferritic microstructure forms the basis for the evaluation of both the properties and the microstructure of the material in the DP or bimodal state. A relatively low difference between the UTS and YS values, i.e., 27%, indicates a low strain-hardening capacity of the material with a homogeneous ferritic microstructure. Simultaneously, its high ductility, i.e., high E_u and E_lt values at low strength, indicates a good formability, which is necessary for cold forming operations (HM in Table 2). Lastly, a slightly pronounced stress plateau indicates that the plastic deformation in this region of the stress-strain curve proceeds by the growth of Lüders bands (HM in Figure 2). [24] In contrast to the HM material, the material with a DP microstructure has a higher UTS value and lower E_u and E_lt values, whereas its YS value is almost equal to that of the HM material (DP in Table 2). The difference between the UTS and YS values of the DP material amounts to 49%, indicating a pronounced strain hardening during plastic deformation.
The YS and UTS values of the material with a severely cold deformed DP microstructure are very similar, i.e., strain hardening is almost absent, and thus, the ductility is very low (CR in Table 2).
The mechanical properties of the bimodal microstructure, which results from annealing the material in the CR state, strongly depend on the annealing temperature: annealing at 550 °C reduces the strength (UTS) by 20% and slightly increases the E_u and E_lt values in comparison with the CR state (Figure 2). The representative micrographs of the materials with a homogeneous ferritic microstructure (HM), a DP microstructure, and a severely cold deformed microstructure (CR) are presented in Figure 3.
In the HM state, the material consists of equiaxed ferrite grains with carbides distributed both within them and on the grain boundaries (Figure 3a). In the DP material, separate martensite islands can be observed within the ferrite. Some ferrite grains still contain separate carbides, which can rarely be observed on the grain boundaries (Figure 3b). Generally, the microstructure of the DP state comprises approximately 44% martensite and 56% ferrite, which differs only slightly from the intended 50:50 ratio between the two phases (Figure 3d). The CR microstructure consists of wavy, elongated ferrite grains that bend around slightly deformed martensite islands (Figure 3c).
The microstructures of the material in the BM state, depending on the annealing temperature, are presented in Figure 4.
Annealing at the lowest temperature investigated (550 °C) does not significantly influence the morphology of the ferrite grains that were elongated during the cold rolling process: the banded microstructure remains very pronounced. Simultaneously, the martensite islands are transformed into fine ferrite grains (black rectangles), which are still acicular and contain numerous fine carbides (Figure 4a). Increasing the annealing temperature results in the formation of a recrystallized bimodal microstructure with fine equiaxed grains (white circles) distributed between coarse grains (black circles) that are partially oriented along the rolling direction. While the martensite transformed into fine grains, the ferrite grains coarsened during the annealing process. A banded microstructure can still be observed here (Figure 4b).
After annealing at a temperature of 650 °C, the bimodal microstructure is no longer banded and consists entirely of equiaxed, non-oriented ferrite grains with a high amount of carbides, both within the ferrite grains and on the grain boundaries (Figure 4c). The size difference between the individual ferrite grains is less pronounced than in the bimodal microstructure formed at 600 °C. Instead, the microstructure resembles that of the material in the homogeneous state. Annealing at the highest temperature (700 °C) with subsequent air cooling generally leads to a further coarsening of the ferrite grains, while islands of a different phase can be observed at some triple junctions of the grain boundaries (white triangles in Figure 4d). Additional LePera etching of this sample revealed the presence of approximately 9% martensite (Figure 4e).
Thus, the bimodal microstructure formed during the annealing of the CR material at 600 °C demonstrates both the most pronounced inhomogeneity of the grain-size distribution (Figure 4e) and a good combination of mechanical properties, i.e., a simultaneously high level of strength and ductility (Table 2). Therefore, this state was chosen for the characterization of the fatigue properties.
The fatigue properties of the material with a homogeneous ferritic grain microstructure, in the DP state, and in the bimodal state after annealing at 600 °C were characterized in the high-cycle fatigue regime. The corresponding S-N curves and SEM images of the samples' fracture surfaces in the different states are presented in Figure 5.
The material exhibits the worst fatigue properties in the homogeneous state (solid line). To withstand 10⁶ loading cycles, the maximum stress amplitude must not exceed 197 MPa. In the DP state (dashed line), however, the material has a significantly better fatigue strength than in its initial state: the maximum stress amplitude allowing the material to withstand 10⁶ loading cycles amounts to 239 MPa. It is noteworthy that the fatigue-life curve of the material in the DP state has a steeper slope than that of the material with a ferritic microstructure. At low cycle numbers, i.e., up to approximately 8 × 10⁵ cycles, the samples with a bimodal grain microstructure (dash-dot line) show an intermediate fatigue behavior in comparison with the other material states, i.e., at a given cycle number, the corresponding stress is higher than that of the initial state and lower than that of the DP state. However, the maximum cycle number of 10⁶ can be sustained at a stress amplitude of 245 MPa, which is higher than the corresponding values for both other states, i.e., the homogeneous ferritic state and the DP state. Furthermore, the slope of the fatigue-life curve equals that of the material in the homogeneous state. Independent of the material state, a crack developed at the samples' edges and propagated inward (indicated by dashed lines in Figure 5b-d) until a forced fracture occurred, which manifests itself in the characteristic dimpled surface of a ductile fracture (not shown). The fracture surfaces of the samples in the homogeneous state (Figure 5e) and the bimodal state (Figure 5g) are similar, although the latter mainly consists of smaller dimples. The surface of the material in the DP state also exhibits cracks and cleavages (white circles in Figure 5f), which are not present in the other states.
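The slopes of the S-N curves compared above can be quantified by fitting, for instance, a Basquin-type power law σ_a = A · N^b to each data set in log-log coordinates. The paper does not state which, if any, fitting model was used, so the sketch below is only one plausible way to extract such a slope; the data points are hypothetical and merely chosen to be roughly consistent with the bimodal-state curve.

```python
import numpy as np

def basquin_fit(cycles, stress_amplitudes):
    """Fit sigma_a = A * N**b in log-log space; returns (A, b)."""
    b, log_a = np.polyfit(np.log10(cycles), np.log10(stress_amplitudes), 1)
    return 10.0 ** log_a, b

# Hypothetical data points roughly consistent with the bimodal-state curve
cycles = np.array([1e5, 2e5, 5e5, 1e6])
amplitudes = np.array([310.0, 290.0, 262.0, 245.0])

a, b = basquin_fit(cycles, amplitudes)
print(f"A = {a:.0f} MPa, slope b = {b:.3f}")
print(f"Predicted amplitude at 10^6 cycles: {a * 1e6 ** b:.0f} MPa")
```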
Discussion
During the intercritical annealing at 790 °C, a partial α→γ transformation occurs. Ferrite grain boundaries, interfaces with carbides, and multipoint junctions exhibit the lowest energy barrier for the nucleation of austenite grains. Simultaneously, carbides on the grain boundaries dissolve, allowing carbon to diffuse into the nucleated austenite grains and chemically stabilize them. [25] Therefore, only martensite islands that transformed from austenite during the air-hardening process are present on the ferrite grain boundaries, whereas carbides are rarely observed there (Figure 3b). The transformed martensite serves as a strengthening hard phase, which leads to a significant improvement in the material's strength and a moderate reduction in its ductility (Figure 2 and Table 2).
In accordance with previous studies by Azizi-Alizamini et al. [15] and Okitsu et al., [26] it was found that the deformation during the cold rolling of DP steels occurs mainly in the ferrite, resulting in fine lamellar grains that are elongated in the rolling direction and bent around the hard, significantly less deformed martensite islands. Such a severely deformed microstructure exhibits excellent strength properties, as its UTS value is 21% higher than that of the material in an as-quenched state. [2] However, the extremely low E_u and E_lt values do not allow for a practical application of such a material. After the second annealing step, the material's bimodal microstructure, and therefore its mechanical properties, are both strongly influenced by the annealing temperature. The lowest investigated annealing temperature of 550 °C generally results only in the precipitation of carbides and the tempering of the martensite with its subsequent transformation into less distorted ferrite, which retains the acicular form of the parent martensite. As Speich et al. [27] and Tokizane et al. [28] have shown, the recrystallization of as-quenched martensite in low-carbon steels occurs at temperatures near the corresponding A_C1 temperature, i.e., the initial temperature of the α→γ transformation. Simultaneously, a cold plastic deformation of the martensite facilitates the recrystallization process and shifts it toward lower temperatures and shorter durations. [29] The martensite grains are not recrystallized at 550 °C, i.e., no formation of new equiaxed ferrite grains can be observed, which confirms that only a minor plastic deformation of the martensite occurred during the cold rolling process. The wavy form of the severely deformed ferrite grains remains almost unchanged compared with the CR state. These observations indicate that no recrystallization occurs at this annealing temperature. For this reason, the mechanical properties change only slightly from those of the CR state (Figure 2 and Table 2). Increasing the annealing temperature to 600 °C results in the recrystallization of the martensite islands and the formation of very fine ferrite grains. This fine grain size, i.e., the absence of any coarsening due to recrystallization, can be attributed to a pinning effect of the carbides that precipitated during the tempering of the martensite. [12,15] Additionally, coarse ferrite grains can be observed, some of which are still elongated in the rolling direction. These coarse grains formed only in areas that previously consisted of severely deformed ferrite grains, indicating a faster recrystallization there. Their size difference with respect to the fine grains can also be explained by the absence of a pinning effect outside the former martensite areas. [15] Furthermore, although cold-deformed ferrite and martensite have nearly the same dislocation density, the dislocation distribution in martensite is homogeneous, whereas a cellular dislocation substructure forms in cold-deformed ferrite. Because the dislocation boundaries between the separate cells act as starting points for recrystallization, the recrystallization process starts earlier in the ferrite than in the martensite. [30] A detailed observation of the bimodal microstructure in the TEM is presented in Figure 6.
Both the fine (Figure 6a) and the coarse grains (Figure 6b) of the bimodal microstructure are free from dislocations, thus indicating a complete recovery. In contrast to the coarse grains, the fine grains contain numerous equiaxed carbides. The migration of their grain boundaries appears to be restricted in comparison with the carbide-free grains, which is a result of the above-mentioned pinning effect. Furthermore, due to the more heterogeneous distribution of the alloying elements, i.e., the higher percentage of fine carbides in the fine grains, the individual fine grains exhibit a higher microhardness (200 ± 12 HV0.01) than the coarse grains (161 ± 1 HV0.01). According to Azizi-Alizamini et al., [15] the fine grains seem to act similarly to the martensite in the DP microstructure. During the tensile test, however, the behavior of the material with the bimodal microstructure is more similar to that of the material in the homogeneous state (see Figure 2), owing to a pronounced Lüders strain and a relatively low strain hardening.
Furthermore, as the results of the fatigue tests show, the slopes of the fatigue curves of the materials in the homogeneous ferritic state and with the bimodal microstructure resemble each other. It can be assumed that the crack initiation and propagation mechanisms are similar in both microstructures because of their mono-phase nature. In contrast to this, the curve of the DP microstructure is steeper, and for high cycle numbers, i.e., upwards of 10⁶ cycles, the maximum stress amplitude is lower than that for the bimodal microstructure. This fatigue behavior can be explained by a pronounced crack initiation, even at lower stress levels, predominantly at the boundaries between the hard martensite, with a microhardness of 240 ± 8 HV0.01, and the ductile ferrite, with a microhardness of 173 ± 17 HV0.01 (see Figure 5f). [31] In contrast to the DP material, the crack initiation in the mono-phase materials, both with homogeneous and bimodal microstructures, seems to be delayed owing to the more ductile nature of ferrite. Simultaneously, the fine grains, i.e., the high density of grain boundaries, in the material with the bimodal microstructure pose an additional obstacle to crack propagation and thus improve the material's fatigue strength compared with the initial state, in which only coarse grains are present. [32] Annealing at 650 °C results in a higher energy input and thus in a higher driving force for recrystallization as well as in the coalescence and coarsening of the precipitated carbides, thereby reducing their pinning effect on grain growth. Therefore, the coarsening of the ferrite grains that formed from prior martensite islands leads to a less pronounced bimodality of the microstructure, a reduced strength, and a slight improvement of the ductility compared with the material annealed at 600 °C.
Although the A_C1 temperature of the investigated steel is 750 °C, [2] the microstructure of the material annealed at 700 °C contains martensite (Figure 4d) and, correspondingly, has a stress-strain curve that is typical of DP microstructures, i.e., no Lüders strain and a relatively high strain hardening. The CR microstructure stores a higher amount of energy and contains more defects than the homogeneous ferritic microstructure for which the A_C1 temperature of 750 °C was determined. According to numerous previous studies, [33,34] such a distorted microstructure facilitates the α→γ transformation and shifts the A_C1 temperature to lower values, resulting in the formation of austenite even below the expected 750 °C.
Based on these results, the appropriate parameters of the thermo-mechanical treatment, i.e., intercritical annealing at 790 °C, cold rolling with a rolling reduction of 70%, and subsequent annealing at 600 °C, create a bimodal grain-size distribution in the mono-phase ferritic microstructure. The air-hardening steel with the bimodal microstructure exhibits superior strength properties and nearly the same ductility (E_u values) in comparison with a conventionally cold-rolled and annealed microstructure. With regard to high-cycle fatigue, which is of high importance for automotive components, the sheet with a bimodal microstructure has the best fatigue strength compared with both the traditional DP microstructure and the conventionally cold-rolled and annealed microstructure.
Varying the martensite fraction through the intercritical annealing temperature, and thus changing the fraction of fine grains after the subsequent cold rolling and annealing, presents a relatively simple way to adjust the ratio between the two grain fractions and therefore the strength and ductility of a sheet with the bimodal microstructure. [15] Hence, the aim of ongoing research is to investigate the influence of the martensite fraction in the DP state on the resulting properties of the material with the bimodal microstructure.
Conclusion
In summary, low-alloyed steel with a bimodal microstructure presents a new class of materials with high strength and ductility. The results of this study confirm both the feasibility and the high controllability of the investigated thermo-mechanical process, which enables the formation of a bimodal grain-size distribution in an air-hardening steel. The most important processing steps are intercritical annealing between the A_C1 and A_C3 temperatures, e.g., at 790 °C; cold rolling with a high rolling reduction, e.g., 70%; and subsequent annealing of the material below the A_C1 temperature. The temperature of the last annealing step strongly influences the recrystallization of the severely deformed ferrite grains and of the martensite islands, and thus the microstructure and the resulting mechanical properties. The microstructure with the most pronounced bimodal grain-size distribution, obtained by annealing at 600 °C, exhibits a superior strength (UTS value of 567 MPa vs 485 MPa) and nearly the same uniform elongation (E_u value of 23% vs 22%) compared with the material with the homogeneous ferritic microstructure. While the fatigue curve of the material with the bimodal microstructure, i.e., its behavior under fatigue loading, is similar to that of the material in the HM state, the corresponding maximum stress amplitudes at a given number of cycles are higher by approximately 48 MPa. In addition, the maximum stress amplitude of the bimodal mono-phase microstructure in the high-cycle regime (at 10⁶ cycles) is even higher (245 MPa vs 239 MPa) than that of the material with the DP microstructure.
The influence of the martensite fraction before cold rolling and annealing, i.e., the influence of the temperature during the intercritical annealing, is another factor to consider in order to achieve the desired combination of strength and ductility in an air-hardening steel with the bimodal microstructure and will be the topic of further research.