Dataset columns:
- id: string (3 to 9 characters)
- source: string (1 distinct value)
- version: string (1 distinct value)
- text: string (1.54k to 298k characters)
- added: date (1993-11-25 05:05:38 to 2024-09-20 15:30:25)
- created: date (1-01-01 00:00:00 to 2024-07-31 00:00:00)
- metadata: dict
id: 269111224 | source: pes2o/s2orc | version: v3-fos-license
Cardiorespiratory fitness mediates the relationship between depressive symptomatology and cognition in older but not younger adults
Aging is commonly associated with emotional, physical, and cognitive changes, with the latter particularly affecting executive functioning. Further, such changes may interact. For instance, depressive symptomatology is a known risk factor for developing cognitive deficits, especially at older ages. In contrast, an active lifestyle, reflected in high cardiorespiratory fitness (CRF) levels, has proven to protect against adverse effects on cognition across the adult lifespan. Hence, this study aimed to investigate the relationships between depressive symptomatology, CRF, and cognition during critical developmental stages, namely in young adults (YA), when cognitive abilities are at their peak, and in older adults (OA), when they may start to decline. Eighty-one OA with ages between 60 and 89 years (M = 70.46; SD = 7.18) and 77 YA with ages between 18 and 34 years (M = 22.54; SD = 3.72) went through (i) a sociodemographic interview, (ii) an emotional assessment, (iii) a battery of cognitive tests, and (iv) a physical evaluation assessing CRF levels, visceral fat and body-mass index. Results showed that OA exhibited lower general cognitive performance, inhibitory control, cognitive flexibility, memory, and CRF. Depressive symptoms and anxiety did not differ between groups, and CRF mediated the relationship between depressive symptoms and cognition in the OA group. The present study provides valuable insights into the interplay between emotional, physical, and cognitive well-being. Additionally, it calls attention to how lifestyle factors can play a protective role against the adverse effects that depressive symptoms have on cognition, particularly at older ages.
Introduction
Aging, as part of biological development, is associated with emotional, physical, functional and cognitive changes. Despite the interindividual variability observed (Wu et al., 2021), late adulthood can be a period of vulnerability to cognitive deficits, particularly regarding executive functions (EF) (Kirova et al., 2015). Evidence shows that older adults (OA) perform worse in different EF domains than younger adults (YA) (Kirova et al., 2015; Bedard et al., 2002). Further, previous studies have demonstrated that age-related cognitive decline has an augmented likelihood of conversion into dementia when concomitant depressive symptoms are present (Diniz et al., 2013; Modrego and Ferrández, 2004). Interestingly, depression is, alongside dementia, the most common mental disorder in the OA population, with a higher prevalence than in any other age group (World Health Organization, 2023). Even YA show impairments in complex tasks requiring EF when suffering from depression (Castaneda et al., 2008). Hence, the presence of depressive symptomatology is an important risk factor to account for when investigating cognitive decline across the lifespan.
Nevertheless, other factors, such as physical activity, seem to mitigate the rate of cognitive decline and the effects of depressive symptomatology on cognition at different ages. In particular, physical activity and cardiorespiratory fitness (CRF) have been shown to act as protective factors that can alter the trajectory of age-related cognitive decline (Falck et al., 2019; Northey et al., 2018). CRF is a component and objective measure of physical activity that reflects the ability of the cardiovascular and respiratory systems to supply oxygen during prolonged physical activity (Lee et al., 2010). Specifically, higher CRF has been associated with overall better cognitive function (Freudenberger et al., 2016), including memory (Hayes et al., 2016; Vesperman et al., 2022) and EF (Freudenberger et al., 2016; Hayes et al., 2016; Mekari et al., 2019; Pentikäinen et al., 2019). This is particularly important in advanced stages of life, since CRF tends to decline with aging (Jackson et al., 2009). Furthermore, lower CRF levels in middle age have also been associated with a risk of developing dementia in later life (DeFina et al., 2013).
Considering this evidence, a cardiorespiratory hypothesis has been proposed, stating that improved cognitive functioning may be partly explained by the positive physiological processes that result from physical activity (Agbangla et al., 2019). In accordance, previous research has shown that physical activity and CRF are associated with structural and functional brain changes, such as increased gray matter volume (Raichlen et al., 2020), promotion of neuroplasticity in the medial temporal lobes (Raichlen et al., 2020; Erickson et al., 2011), and improved hippocampus and prefrontal cortex connectivity in YA (Kronman et al., 2020). Specifically, Kronman et al. (2020) showed that people with higher CRF had more effective connections between the hippocampus and prefrontal areas, regions directly related to EF (Mekari et al., 2019; Monteiro-Junior et al., 2016), but also to emotional circuits commonly involved in depression (Firth et al., 2020).
Despite the substantial influence of genetic predisposition in the etiology of mental health disorders such as depression (Hyde et al., 2016), emerging empirical data have highlighted the role of environmental and lifestyle determinants. For example, several meta-analyses have shown that low physical activity is associated with a higher prevalence of depression (Schuch et al., 2018; Teychenne et al., 2010). Accordingly, studies show that individuals with depression engage in 50% less moderate-to-vigorous physical activity (Vallance et al., 2011) and present lower CRF than the general population (Boettger et al., 2009). Similarly, a systematic review conducted by Schuch et al. (2016) revealed that, compared to individuals with high CRF, those with low CRF experienced a 76% higher incidence of depression, whereas those with medium CRF had a 23% elevated likelihood of suffering from depression. Nonetheless, a meta-analysis found a modest correlation (r = −0.16) between the severity of depressive symptoms and CRF in healthy and depressed individuals (Papasavvas et al., 2016).
Based on the spectrum of emotional, cognitive and somatic consequences associated with depressive symptoms, several hypotheses have emerged to elucidate the potential mechanisms underlying the role of CRF. First, the emergence of depressive symptoms may lead to reduced motivation to engage in physical activity, resulting in greater physical inactivity, which, in turn, leads to decreased CRF levels (Hollenberg et al., 2003; Nikolakaros et al., 2020). Second, depressive symptoms are associated with multiple disturbances, such as sleep disorders (DSM-5) (American Psychiatric Association, 2013), that can disrupt the role of sleep in the restoration and consolidation of physical and cognitive health, contributing to greater physical inactivity and obesity (Watenpaugh, 2009), as well as worse cognitive performance (Rasch and Born, 2013). Finally, the psychosocial factors associated with depressive symptoms, such as social isolation, anxiety and negative affect (DSM-5) (American Psychiatric Association, 2013), can disrupt healthy lifestyle behaviors and, thus, have detrimental effects on CRF and cognition.
Considering these postulates and the underlying analogous neural correlates of depression, cognition, and CRF, our main aim was to further explore these relationships at two different stages of the lifespan. Additionally, we intended to evaluate whether depressive symptoms and CRF were predictors of general cognition, short-term memory, and EF at those stages, i.e., when these abilities are at their peak (YA) and when they may start to decline (OA). Further, we tested whether an active lifestyle proxy, such as CRF, could mediate the relationship between depressive symptoms and cognition. Specifically, and considering the evidence on the relationship between depression, CRF and cognitive status, we hypothesized that OA would present higher depressive symptomatology, lower CRF, and lower cognitive abilities (worse performance on tests measuring overall cognition, short-term memory, and EF) in comparison with YA, and that the adverse effects that an increase in depressive symptomatology has on cognition would be driven by the decrease of CRF in both age groups.
Participants
OA over 60 years of age were recruited through contacts with associations and daycare centers. In addition, YA between 18 and 35 years of age were recruited at the university campus and received course credits for their participation in the study. Study protocols were in accordance with the principles outlined in the Declaration of Helsinki and received approval from the local ethics committee (CE.CVS 095/2018). Exclusion criteria were a history of stroke, transient ischemic attack, head injury, epilepsy, Parkinson's disease, Alzheimer's disease, or other neurological or psychiatric diseases. Further, volunteers were excluded if they were taking anxiolytic or antidepressant medication or scored below the cut-off for probable dementia established for the Portuguese version of the Mini-Mental State Examination (MMSE) (Folstein et al., 1975) by Santana and colleagues (2016). In addition, included participants scored for total independence or mild dependency in the Instrumental Activities of Daily Living Scale (IADL) (Lawton et al., 1969; Reis et al., 2012). A priori sample size calculations in G*Power (https://www.psychologie.hhu.de, accessed on 12 November 2019) estimated the minimum sample size per age group for the regression analyses, considering a medium effect size (f² = 0.15), an alpha (α) of 0.05, and a statistical power of 0.80.
Eighty-one participants (71.6% female) with ages between 60 and 89 years (M = 70.46; SD = 7.18) were included in the OA group, while 77 participants (64.9% female) with ages between 18 and 34 years (M = 22.54; SD = 3.72) were included in the YA group.
Mini-mental state examination - MMSE (Folstein et al., 1975)
General cognition was evaluated with the MMSE. A maximum of 30 points can be achieved, and cut-off scores are adapted according to the participants' years of formal education, as established in the Portuguese version of the MMSE (illiterate: ≤15; 1 to 11 years of formal education: ≤22; >11 years of formal education: ≤27; Santana et al., 2016).
Cambridge neuropsychological test automated battery - CANTAB (CANTAB®, 2019)
CANTAB is a validated cognitive research software that comprises 18 standardized tests to assess different domains of cognitive function. It was used to evaluate memory and EF using the Spatial Span and the Multitasking tests, respectively.
The Spatial Span (SSP) test is based on the Corsi block-tapping test and measures short-term memory. In this test, a set of gray boxes lights up in a different color in a specific sequence. The participant's task is to remember the sequence and then touch the boxes on the screen in the same order. The sequence length increases throughout the test, and the participant has a maximum of 3 attempts at each sequence length. The longest sequence of boxes successfully recalled by the participant was recorded for further analyses.
The Multitasking (MTT) test measures cognitive flexibility and inhibitory control through the participant's ability to use multiple sources of potentially conflicting information to guide behavior. In each trial, an arrow appears on the middle, left, or right side of the screen, and the participant is asked to click on the right or left button according to either the direction the arrow points to or the side of the screen on which it appears. During training, the participant learns to respond according to the arrow's direction and to its side of the screen, separately. During the assessment stage, each trial is preceded by a cue indicating whether the participant should respond according to direction or side (the rule changes randomly). In some trials, the direction of the arrow and the side on which the arrow appears are incongruent. The outputs used for further analyses were the Incongruency Cost, an indicator of inhibitory abilities, and the Multitasking Cost, a measure of cognitive flexibility. The former is calculated by subtracting the mean response latency (in ms) of congruent trials from that of incongruent trials; a higher Incongruency Cost indicates that the participant takes longer to process conflicting information. The latter is calculated by subtracting the mean response latency during single-task blocks from that during multitasking blocks; a positive score indicates difficulties in managing multiple sources of information (i.e., less flexibility).
Beck depression inventory-II - BDI-II (Portuguese validation, Campos and Gonçalves, 2011)
This self-report questionnaire comprises 21 sets of 4 statements. For each set, the participant must choose the statement that best describes how they have felt over the last two weeks. The score ranges from 0 to 63 points, with higher scores indicating more depressive symptoms. The internal consistency of the Portuguese version is high (Cronbach's alpha = 0.90), and it shows high convergent validity with other depressive symptomatology scales (Campos and Gonçalves, 2011).
State-trait anxiety inventory - STAI-Y (Cruz and Mota, 1997; Spielberger et al., 1983)
STAI-Y measures state and trait anxiety separately, with 20 questions for each. For each question, the participant rates on a 4-point Likert scale how much the sentence describes them at the moment (state scale) or generally in their life (trait scale). Scores range from 0 to 80 points for each scale. In the current study, we only used the trait subscale, where a higher score indicates more severe anxiety as a personality trait. The Portuguese version of STAI-Y has demonstrated high internal consistency (Cronbach's alpha = 0.85), even when the trait subscale is considered alone (Cronbach's alpha = 0.88).
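Returning to the MTT outcome measures described above, the sketch below illustrates how the two costs could be computed from per-trial response latencies. It is a minimal illustration with simulated data, not CANTAB code, and all variable names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial response latencies (ms) for one participant.
congruent = rng.normal(550, 60, size=40)     # congruent trials
incongruent = rng.normal(620, 70, size=40)   # incongruent trials
single_task = rng.normal(540, 50, size=40)   # single-task blocks
multitask = rng.normal(600, 65, size=40)     # multitasking blocks

# Incongruency Cost: mean latency of incongruent trials minus congruent trials.
incongruency_cost = incongruent.mean() - congruent.mean()
# Multitasking Cost: mean latency in multitasking blocks minus single-task blocks.
multitasking_cost = multitask.mean() - single_task.mean()

print(f"Incongruency Cost: {incongruency_cost:.1f} ms")  # higher = slower conflict processing
print(f"Multitasking Cost: {multitasking_cost:.1f} ms")  # positive = reduced flexibility
```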
Physical measures
International physical activity questionnaire - IPAQ (Craig et al., 2003)
This self-report questionnaire evaluates physical activity across five domains, including activity related to work, physical activity as a means of transport, domestic and gardening activities, leisure time activity, and sedentary time. The questions under each domain provide the score for walking, moderate-intensity activity, vigorous-intensity activity, and overall activity level. Responses to the questionnaire were used to categorize participants according to the five levels of physical activity defined by Jurca et al. (2005) for the calculation of CRF without performing exercise testing.
Anthropometric assessment.
The participants' height was measured with a stadiometer. Weight and visceral fat level were measured with a bio-impedance scale (TANITA MC-780MA Segmental). Resting heart rate (HR) values were collected with a wrist pulsometer (POLAR M200). Finally, height and weight were used to calculate the body-mass index (BMI) for each participant using the standard formula of weight (kg) divided by the square of height (in meters).
Cardiorespiratory fitness -CRF.
CRF was estimated from the following equation (Jurca et al., 2005): CRF = (sex × 2.77) − (age × 0.10) − (BMI × 0.17) − (resting heart rate × 0.03) + self-reported physical activity derived from the IPAQ + 18.07. Sex was coded as 0 for females and 1 for males. The equation by Jurca et al. (2005) yields estimates of CRF similar to graded exercise tests (GXT), considered the gold standard, with the advantage of being a simple, low-cost and low-risk measure, which is especially beneficial for clinical and research settings with OA.
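A minimal sketch of this non-exercise CRF estimate is given below, assuming the equation exactly as transcribed above; the function name, the example values, and the numeric coding of the IPAQ activity level are illustrative assumptions, not part of the original study.

```python
def estimate_crf(sex: int, age: float, height_m: float, weight_kg: float,
                 resting_hr: float, ipaq_activity_level: float) -> float:
    """Non-exercise CRF estimate following the equation transcribed above.

    sex: 0 = female, 1 = male (as coded in the text).
    ipaq_activity_level: self-reported physical activity score derived from the
        IPAQ, mapped onto the five activity levels of Jurca et al. (2005).
    """
    bmi = weight_kg / height_m ** 2  # standard body-mass index
    return (sex * 2.77
            - age * 0.10
            - bmi * 0.17
            - resting_hr * 0.03
            + ipaq_activity_level
            + 18.07)

# Hypothetical example: a 70-year-old woman, 1.60 m, 68 kg, resting HR 72 bpm,
# with a moderate self-reported activity level.
print(round(estimate_crf(sex=0, age=70, height_m=1.60, weight_kg=68,
                         resting_hr=72, ipaq_activity_level=2.0), 2))
```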
Table 1 summarizes OA and YA's mean and standard deviation for emotional (BDI-II and STAI-Y), cognitive (MMSE score and CANTAB subtests results), and physical (CRF, visceral fat and BMI) variables.
Procedure
Data collection was carried out at two different moments. First, participants received a detailed explanation of the procedures, provided written informed consent, and completed a standard interview to collect demographic information and assess their physical and cognitive health status, ensuring compliance with the exclusion and inclusion criteria. Then, in a second session, participants underwent (1) a cognitive evaluation using CANTAB, (2) an emotional assessment using the BDI-II and the STAI-Y Trait, and (3) a physical evaluation and IPAQ assessment. The physical evaluation encompassed the measurement of resting heart rate with a pulsometer (POLAR M200) and the use of a bio-impedance scale (TANITA MC-780MA Segmental) and a stadiometer to later determine the CRF level.
Statistical analysis
Statistical analyses were conducted using the IBM Statistical Package for Social Sciences (SPSS; Version 27). Scores for each variable (i.e., age, sex, BDI-II, STAI-Y Trait, MMSE, MTT Incongruency Cost, MTT Multitasking Cost, SSP Forward Span Length, CRF, visceral fat, BMI) that deviated at least 2.2 times the interquartile range from its mean were considered univariate outliers (Hoaglin and Iglewicz, 1987) and winsorized before conducting the statistical tests (Tukey, 1962). The alpha level was set at p ≤ .05.
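As a rough illustration of the outlier-handling step just described (values lying more than 2.2 interquartile ranges from the variable's mean are clipped to that boundary), a minimal sketch is shown below; the data are simulated and this is not the exact SPSS procedure used by the authors.

```python
import numpy as np

def winsorize_2p2_iqr(x: np.ndarray) -> np.ndarray:
    """Clip values lying more than 2.2 * IQR away from the mean, as described in the text."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lower, upper = x.mean() - 2.2 * iqr, x.mean() + 2.2 * iqr
    return np.clip(x, lower, upper)

# Hypothetical BDI-II scores with one extreme value.
scores = np.array([12.0, 14.0, 13.5, 15.0, 11.0, 13.0, 12.5, 14.5, 13.0, 28.0])
print(winsorize_2p2_iqr(scores))  # only the extreme value is pulled toward the boundary
```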
Firstly, we conducted a series of independent-samples t-tests to evaluate possible differences between groups on CRF, MTT Incongruency Cost and MTT Multitasking Cost. For the variables whose distributions deviated from normality (MMSE, BDI-II and STAI-T scores, SSP Forward Span Length, visceral fat and BMI levels), Mann-Whitney tests for independent samples were conducted to test for possible differences between groups. Secondly, for each age group separately, linear regression analyses were used to evaluate the predictive value of depressive symptomatology (BDI-II) on the cognitive and physical variables, as well as the predictive value of the physical variables (used as independent predictors, given the high correlation among them and the potential effects of multicollinearity) on the cognitive ones. Finally, we conducted mediation analyses using model 4 of the PROCESS macro for SPSS v4.3 (Hayes, 2017) with the emotional, physical, and cognitive variables.
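For readers less familiar with PROCESS model 4, the sketch below reproduces the logic of a simple mediation analysis (X = depressive symptoms, M = CRF, Y = cognition) with a percentile-bootstrap confidence interval for the indirect effect. It uses simulated data and plain least squares rather than the PROCESS macro itself, so all variable names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 80
bdi = rng.normal(8, 4, n)                     # X: depressive symptoms (simulated)
crf = 10 - 0.15 * bdi + rng.normal(0, 1, n)   # M: CRF, negatively related to X
mmse = 26 + 0.4 * crf + rng.normal(0, 1, n)   # Y: cognition, driven by M

def ols_coef(y, *predictors):
    """Coefficient of the first predictor in an OLS fit with an intercept."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def indirect_effect(x, m, y):
    a = ols_coef(m, x)       # path a: X -> M
    b = ols_coef(y, m, x)    # path b: M -> Y, controlling for X
    return a * b

# Percentile bootstrap for the indirect effect a*b (mirrors the usual PROCESS output).
boot = [indirect_effect(bdi[idx], crf[idx], mmse[idx])
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(bdi, crf, mmse):.3f}, 95% CI = {ci.round(3)}")
```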
Other regression analyses, with the STAI-T (anxiety measure), visceral fat and BMI as predictors of the cognitive and physical variables, are shown in Supplementary Table 1.
Mediation models
Since no regression models were significant for the YA group (see Table 2), mediation models were only carried out for the OA group. We used the cognitive measurements significantly predicted by depressive symptoms and CRF as output variables, separately. Consequently, mediation analyses were performed to assess the mediating role of CRF in the relationship between depressive symptomatology and (1) general cognition, (2) cognitive flexibility, and (3) memory.
Discussion
The main objective of this paper was to explore the interplay between depressive symptomatology, cognition, and CRF at two different stages of the lifespan: YA and OA. Our results showed that OA presented lower performance on all cognitive tests, lower CRF, and higher visceral fat and BMI scores, in line with our hypothesis. However, no differences were found between the two age groups for depressive symptoms and anxiety. Furthermore, as expected, CRF mediated the relationship between depressive symptoms and cognition (general cognition and memory). However, contrary to our hypothesis, this mediation was only significant in OA, suggesting that the adverse effects that depressive symptomatology has on cognition seem to be driven by decreased CRF at older ages.
Regarding age-group differences, we found that OA, compared to YA, presented lower general cognitive performance, inhibitory control, cognitive flexibility, memory and CRF. These differences are in accordance with previous literature showing that aging can be associated with cognitive decline in multiple domains (Kirova et al., 2015; Bedard et al., 2002) and with lower CRF (Jackson et al., 2009). However, we found no group differences in depressive symptomatology. Even though OA are widely described as displaying higher levels of depression (World Health Organization, 2023), our sample comprised only individuals with subclinical symptoms (a current depression diagnosis or medication were exclusion criteria), which could explain the lack of differences. Moreover, although depressive symptomatology predicted active lifestyle indices (e.g., CRF) for both age groups, it predicted cognitive performance only in the OA group. These results suggest that while the impact of depressive symptoms on active lifestyles, as reflected by CRF, can be observed from a young age, their impact on cognition manifests only in later stages of life. This is in line with longitudinal studies showing that, in older populations, the level and duration of depressive symptoms are associated with lower CRF levels measured two or even four years later (Hollenberg et al., 2003).
As previous literature has shown, CRF can be a protective factor against age-related cognitive decline (Falck et al., 2019; Northey et al., 2018) and further protect from conversion to dementia (DeFina et al., 2013). Accordingly, our results showed that CRF was a predictor of cognitive functioning in all the domains (overall cognition, short-term memory, and EF) in OA but not in YA. This pattern aligns with Hayes et al. (2016), who also found that CRF was associated with cognition and EF only in OA. The lack of an association between CRF and cognition in YA may be attributed to the fact that cognitive abilities usually reach their peak performance during this life period. Consequently, the impact of CRF on cognition may be masked by other factors within this age group. Overall, the present results are in accordance with, and extend, the age-dependence hypothesis (Hötting and Röder, 2013), which sustains that CRF impacts cognitive and brain function during childhood, with its influence fading during young adulthood. Our results showed that, in older adulthood, when cognitive decline may develop, CRF once again exerts a beneficial impact on cognition (general cognition, short-term memory, and EF). Furthermore, studies show that for OA, key factors influencing their quality of life revolve around health, social connections, independence in daily activities, and engagement in an active lifestyle (Agustí et al., 2023). Additionally, when considering the sense of satisfaction with health status for OA, the absence of physical illnesses and psychological problems becomes essential, as well as maintaining adequate levels of physical activity (Rizo, 2017). Thus, an active lifestyle emerges as a pivotal determinant in enhancing overall well-being and life satisfaction among OA, alongside improving cognitive abilities.
Regarding the mediation models, results revealed the mediating role of CRF in the interplay between depressive symptomatology and general cognition as well as memory, but only among OA. In essence, decreased CRF was shown to drive the negative effect that depressive symptoms have on general cognition and memory performance. Previous literature has evidenced the adverse effects that depressive symptomatology has on cognition, especially in OA (Diniz et al., 2013; Modrego and Ferrández, 2004), while also showing the protective role that greater CRF can have on cognition (Freudenberger et al., 2016) and memory (Hayes et al., 2016; Vesperman et al., 2022). Thus, in the presence of depressive symptomatology, individuals tend to exhibit reduced levels of physical activity (Vallance et al., 2011), and this decline may be associated with a subsequent decrease in CRF (Boettger et al., 2009). Results from this study corroborate this hypothesis, showing that this chain of events may have significant negative implications for cognitive and memory performance.
Several mechanisms have been suggested to explain the complex relationship between depressive symptomatology and CRF and their impact on cognitive abilities. For instance, depressive symptoms encompass a lack of motivation, social isolation, anxiety, negative affect, and sleep disorders (American Psychiatric Association, 2013) that have been shown to contribute to greater physical inactivity (Nikolakaros et al., 2020; Watenpaugh, 2009). Our results show that, in fact, depressive symptoms have a detrimental impact on physical well-being, predicting worse CRF in both OA and YA. However, although previous studies have shown that depressive symptoms also have a negative impact on cognitive performance (Rasch and Born, 2013), our statistical models suggest that the negative impact of depressive symptoms on cognitive performance is fully mediated by CRF. That is, a decrease in CRF explains the negative effect that higher depressive symptoms have on cognition. These results further extend previous literature by showing that an individual's lifestyle and physical well-being better explain the influence of depressive symptoms on cognition.
Other hypotheses suggest that these results may be explained by the overlap in the neural bases of the effects of a healthy and active lifestyle and those of depressive symptomatology, which affect brain regions essential for cognitive performance (Raichlen et al., 2020; Kronman et al., 2020; Firth et al., 2020). This is consistent with the cardiorespiratory hypothesis (Agbangla et al., 2019), which establishes that improved cognitive functioning is related to the positive neurobiological outcomes of physical activity. Nevertheless, no mediation model was found regarding EF, although there was an interaction between depressive symptomatology and CRF predicting EF performance. In this way, a possible moderating role of CRF in the relationship between depressive symptoms and EF was observed, particularly concerning cognitive flexibility. In other words, the negative effect of depressive symptomatology on cognitive flexibility can be exacerbated in individuals with low CRF. Thus, it seems that a different mechanism, involving an interaction between depressive symptomatology and CRF, may be responsible for the impact of depressive symptoms on higher cognitive abilities such as EF. Future studies are needed to clarify the moderating roles that depressive symptomatology and an active lifestyle play in shaping EF in later life.
The present work showed that the effect of depressive symptomatology on cognition does not appear at younger ages. It is important to note that our sample comprised only subclinical depressive symptoms, meaning that all participants were healthy. Additionally, during young adulthood, cognition, as well as brain function and structure, are at their peak. Thus, it is possible that subclinical symptoms are not enough to impact cognition, in contrast with clinical depressive symptoms, which previous literature has shown to affect cognition at younger ages (Castaneda et al., 2008). In contrast, according to our results, in old age, when cognition may start to suffer alterations, even subclinical symptoms seem enough to trigger and exacerbate cognitive differences. However, the cross-sectional design of our study poses an undeniable limitation. Additionally, our focus solely on subclinical depressive symptoms overlooks the potential influence of more severe depressive symptoms in YA, or of other emotional well-being constructs in both age groups, which should be considered as well. Moreover, CRF captures just one aspect of an active lifestyle, providing only a limited perspective on this multifaceted concept, which is also a constraint of our study. Therefore, further studies are needed to evaluate the effect of clinical depression on cognition, considering longitudinal approaches and accounting for active lifestyle-related variables, to establish more robust conclusions.
Nevertheless, the present paper emphasizes the pivotal role of CRF as a potential active lifestyle index mediating the impact of depressive symptomatology on cognitive function and memory abilities, particularly in OA. Thus, early detection of and intervention for depressive symptoms, together with promoting CRF through physical activity, may help delay or diminish age-related cognitive decline. However, given the differences between age groups, interventions addressing cognitive health and emotional well-being should be tailored to each age group's specific needs and characteristics and designed to target specific cognitive domains for the most successful outcome. Thus, understanding the intricate relationships between emotional, cognitive, and physical well-being at different stages of adulthood calls for an in-depth approach to health promotion. These practical implications can guide the development of targeted interventions and health promotion strategies to support the population's emotional, cognitive, and physical well-being.
Conclusions
In conclusion, our results demonstrated that OA presented lower performance on tests measuring overall cognition, short-term memory, and EF, as well as lower CRF, than YA. Additionally, subclinical depressive symptomatology predicted CRF in both OA and YA, while predicting cognitive abilities only in OA. Likewise, CRF predicted cognitive performance only in the OA group. Finally, the adverse effects that an increase in depressive symptomatology has on cognition and memory may be driven by the decrease of CRF in late life. Hence, the present study stands out in its comprehensive exploration of the relationships between emotional, cognitive, and physical well-being across two different and crucial stages of adulthood: when cognition is at its peak and when it may start to decline. Further, it highlights the pivotal role of CRF as an index of an active lifestyle that protects against the impact of depressive symptomatology on cognitive function and memory abilities in OA, while also being the first study to indicate its mediating role between depressive symptoms and cognition. Thus, our results suggest avenues for improving the quality of life of OA by emphasizing the importance of maintaining an active lifestyle and managing depressive symptoms. This may help maintain a healthy cognitive trajectory into older age, fostering longer years of independence and autonomy. Interventions aimed at enhancing CRF and addressing even subclinical depressive symptomatology could potentially mitigate age-related cognitive decline and contribute to better cognitive outcomes in late life. Future studies could build upon our study by incorporating additional variables related to emotional well-being and active lifestyle, while employing machine learning methodologies to gain a deeper understanding of those variables' impact on cognitive processing across different ages.
Supplementary data to this article can be found online at https://doi.org/10.1016/j.exger.2024.112429.
Declaration of competing interest
The authors declare no conflict of interest.
Fig. 1. Scatter plots showing the relationship between cognitive performance scores and BDI-II (left column) and CRF scores (central columns) for OA. The relationship between BDI-II scores and the physical variables (visceral fat and CRF) is shown in the right column for OA (top) and YA (inside the black square).
Fig. 3. Schematic descriptions of the total and mediated effects of depressive symptomatology (BDI-II) on memory. (Top) Schematic description of the total effect of depressive symptomatology (BDI-II) on memory (SSP Forward Span Length), path c. (Bottom) Schematic description of the mediated effect of depressive symptomatology (BDI-II) on memory through CRF, paths β1 and β2, and c'.
Table 2
Regression models with the BDI-II score and CRF level as predictors and cognitive and physical variables as outcomes for the OA and YA groups, separately.
added: 2024-04-14T06:17:52.565Z | created: 2024-04-10T00:00:00.000 | metadata:
{
"year": 2024,
"sha1": "0f515e065143b588197bec86f3ad8a7373b2f32b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.exger.2024.112429",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "85e6c5a751fabbfa001a50c94290b34cb7aa0c37",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 17416148 | source: pes2o/s2orc | version: v3-fos-license
Online Optimization: Competing with Dynamic Comparators
Recent literature on online learning has focused on developing adaptive algorithms that take advantage of a regularity of the sequence of observations, yet retain worst-case performance guarantees. A complementary direction is to develop prediction methods that perform well against complex benchmarks. In this paper, we address these two directions together. We present a fully adaptive method that competes with dynamic benchmarks in which the regret guarantee scales with the regularity of the sequence of cost functions and comparators. Notably, the regret bound adapts to the smaller complexity measure in the problem environment. Finally, we apply our results to drifting zero-sum, two-player games where both players achieve no-regret guarantees against best sequences of actions in hindsight.

Ali Jadbabaie and Shahin Shahrampour are with the Department of Electrical and Systems Engineering at the University of Pennsylvania, Philadelphia, PA 19104 USA (e-mail: shahin@seas.upenn.edu; jadbabai@seas.upenn.edu). Alexander Rakhlin is with the Department of Statistics at the University of Pennsylvania, Philadelphia, PA 19104 USA (e-mail: rakhlin@wharton.upenn.edu). Karthik Sridharan is with the Department of Computer Science at Cornell University, Ithaca, NY 14850 USA (e-mail: sridharan@cs.cornell.edu).
I. INTRODUCTION
The focus of this paper is an online optimization problem in which a learner plays against an adversary or nature. At each round t ∈ {1, . . . , T }, the learner chooses an action x t from some convex feasible set X ⊆ R d .
Then, nature reveals a convex function $f_t \in \mathcal{F}$ to the learner. As a result, the learner incurs the corresponding loss $f_t(x_t)$. A learner aims to minimize his regret, a comparison to a single best action in hindsight:

$$\mathrm{Reg}^s_T \triangleq \sum_{t=1}^{T} f_t(x_t) - \min_{x \in X} \sum_{t=1}^{T} f_t(x). \qquad (1)$$

Let us refer to this as static regret in the sense that the comparator is time-invariant. In the literature, there are numerous algorithms that guarantee a static regret rate of $O(\sqrt{T})$ (see e.g. [1]-[3]). Moreover, when the loss functions are strongly convex, a rate of $O(\log T)$ can be achieved [4]. Furthermore, minimax optimality of algorithms with respect to the worst-case adversary has been established (see e.g. [5]).
There are two major directions in which the above-mentioned results can be strengthened: (1) by exhibiting algorithms that compete with non-static comparator sequences (that is, making the benchmark harder), and (2) by proving regret guarantees that take advantage of niceness of nature's sequence (that is, exploiting some nonadversarial quality of nature's moves). Both of these distinct directions are important avenues of investigation. In the present paper, we attempt to address these two aspects by developing a single, adaptive algorithm with a regret bound that shows the interplay between the difficulty of the comparison sequence and niceness of the sequence of nature's moves.
With respect to the first aspect, a more stringent benchmark is a time-varying comparator, a notion that can be termed dynamic regret [3], [6]-[8]:

$$\mathrm{Reg}^d_T \triangleq \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x^*_t),$$

where $x^*_t \triangleq \arg\min_{x \in X} f_t(x)$. More generally, dynamic regret against a comparator sequence $\{u_t\}_{t=1}^T$ is

$$\mathrm{Reg}^d_T(u_1, \ldots, u_T) \triangleq \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t).$$

It is well-known that in the worst case, obtaining a sublinear bound on dynamic regret is not possible. However, it is possible to achieve worst-case bounds in terms of

$$C_T(u_1, \ldots, u_T) \triangleq \sum_{t=2}^{T} \|u_t - u_{t-1}\|,$$

i.e., the regularity of the comparator sequence, interpolating between the static and dynamic regret notions. Furthermore, the authors in [9] introduce an algorithm which proposes a variant of $C_T$ involving a dynamical model.
In terms of the second direction, there are several ways of incorporating potential regularity of nature's sequence.
The authors in [10], [11] bring forward the idea of predictable sequences, a generic way to incorporate some external knowledge about the gradients of the loss functions. Let $\{M_t\}_{t=1}^T$ be a predictable sequence computable by the learner at the beginning of round $t$. This sequence can then be used by an algorithm in order to achieve regret in terms of

$$D_T \triangleq \sum_{t=1}^{T} \|\nabla f_t(x_t) - M_t\|_*^2.$$

The framework of predictable sequences captures variation and path-length type regret bounds (see e.g. [12], [13]).
Yet another way in which niceness of the adversarial sequence can be captured is through a notion of temporal variability studied in [14]:

$$V_T \triangleq \sum_{t=2}^{T} \sup_{x \in X} |f_t(x) - f_{t-1}(x)|.$$

What is interesting, and intuitive, is that dynamic regret against the optimal sequence $\{x^*_t\}_{t=1}^T$ becomes a feasible objective when $V_T$ is small. When only noisy versions of gradients are revealed to the algorithm, Besbes et al.
in [14] show that using a restarted Online Gradient Descent (OGD) [3] algorithm, one can get a bound of the form $T^{2/3}(V_T + 1)^{1/3}$ on the expected regret. However, the regret bounds attained in [14] are only valid when an upper bound on $V_T$ is known to the learner before the game begins. For the full-information online convex optimization setting, when one receives exact gradients instead of noisy gradients, a bound of order $V_T$ is trivially obtained by simply playing, at each round, the minimizer of the previous round's loss.
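To fix intuition for the definitions above, the following minimal sketch computes $C_T$, $D_T$ and $V_T$ numerically for a toy one-dimensional problem with drifting quadratic losses on $X = [0, 1]$. The definitions follow the text; the drift model, the learner's (constant) plays, and the choice of $M_t$ as the previous gradient are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
# Slowly drifting minimizers theta_t of f_t(x) = (x - theta_t)^2 on X = [0, 1].
theta = np.clip(np.cumsum(rng.normal(0, 0.02, T)) + 0.5, 0, 1)

grid = np.linspace(0, 1, 501)
losses = (grid[None, :] - theta[:, None]) ** 2          # f_t evaluated on a grid over X

# Learner's (arbitrary) plays and predictable sequence M_t = previous gradient.
x_play = np.full(T, 0.5)
grads = 2 * (x_play - theta)                            # grad f_t(x_t)
M = np.concatenate([[0.0], grads[:-1]])

C_T = np.abs(np.diff(theta)).sum()                      # sum_t ||u_t - u_{t-1}|| with u_t = x_t^*
D_T = ((grads - M) ** 2).sum()                          # sum_t ||grad f_t(x_t) - M_t||^2
V_T = np.abs(np.diff(losses, axis=0)).max(axis=1).sum() # sum_t sup_x |f_t(x) - f_{t-1}(x)|

print(f"C_T = {C_T:.3f}, D_T = {D_T:.3f}, V_T = {V_T:.3f}")
```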
The three quantities we just introduced, $C_T$, $D_T$ and $V_T$, measure distinct aspects of the online optimization problem, and their interplay is an interesting object of study. Our first contribution is to develop a fully adaptive method (without prior knowledge of these quantities) whose dynamic regret is given in terms of these three complexity measures. This is done for the full-information online convex optimization setting, and augments the existing regret bounds in the literature, which focus on only one of the three notions ($C_T$, $D_T$ or $V_T$) and not on all three together. To establish a sub-linear bound on the dynamic regret, we utilize a variant of the Optimistic Mirror Descent (OMD) algorithm [10].

When noiseless gradients are available and we can calculate variations at each round, we not only establish a regret bound in terms of $V_T$ and $T$ (without a priori knowledge of a bound on $V_T$), but also show how the bound can in fact be improved when the deviation $D_T$ is $o(T)$. We also show how the bound can automatically adapt to $C_T$, the path length of the sequence of comparators. Importantly, this avoids suboptimal bounds derived only in terms of one of the quantities ($C_T$ or $V_T$) in an environment where the other one is small.
The second contribution of this paper is the technical analysis of the algorithm. The bound on the dynamic regret is derived by applying the doubling trick to a non-monotone quantity, which results in a non-monotone step size sequence (which, to the best of the authors' knowledge, has not been investigated before).

We provide uncoupled strategies for two players playing a sequence of drifting zero-sum games. We show that when the two players play the provided strategies, their payoffs converge to the average minimax value of the sequence of games (provided the games drift slowly). In this case, both players simultaneously enjoy no-regret guarantees against best sequences of actions in hindsight that vary slowly. This is a generalization of the results by Daskalakis et al. [15] and Rakhlin et al. [11], both of which are for fixed games played repeatedly.
A. Notation
Throughout the paper, we assume that the losses are uniformly bounded: for any action $x \in X \subset \mathbb{R}^d$ and any time $t$, $|f_t(x)| \le G$. We denote by $\|\cdot\|_*$ the dual norm of $\|\cdot\|$, by $[T]$ the set of natural numbers $\{1, \ldots, T\}$, and by $f_{1:t}$ the shorthand for $f_1, \ldots, f_t$, respectively. Whenever $C_T$ is written without arguments, it will refer to the regularity $C_T(x^*_1, \ldots, x^*_T)$ of the sequence of minimizers of the loss functions. We point out that our initial statements hold for the regularity of any sequence of comparators. However, for upper bounds involving $\sqrt{C_T}$, one needs to choose a computable quantity to tune the step size, and hence our main results are stated for $C_T(x^*_1, \ldots, x^*_T)$. The quantity $D_T$ is defined with respect to an arbitrary predictable sequence $\{M_t\}_{t=1}^T$, but this dependence is omitted for brevity.
B. Comparing with existing regret bounds in the dynamic setting
We state and discuss relevant results from the literature on online learning in dynamic environments. For any comparator sequence {u t } T t=1 and the specific minima sequence {x * t } T t=1 the following results are established in the literature:
[Table omitted: each entry lists the reference, the regret notion considered, and the regret rate; $\tilde{O}(\cdot)$ hides the $\log T$ factor.] Lemma 1 below also yields a rate of $O\big(\sqrt{D_T + 1}\,(1 + C_T(u_1, \ldots, u_T))\big)$ for any comparator sequence $\{u_t\}_{t=1}^T$. A detailed explanation of the bounds will be given after Theorem 3. We remark that the authors in [14] consider a setting in which a variation budget (an upper bound on $V_T$) is known to the learner, but he/she only has noisy gradients available. Then, the restarted OGD guarantees the mentioned rate for convex functions; the rate is modified to $\sqrt{(V_T + 1)T}$ for strongly convex functions.
For the case of noiseless gradients, we first aim to show that our algorithm is adaptive in the sense that the learner need not know an upper bound on $V_T$ in advance when he/she can calculate the variations observed so far.

Furthermore, we shall establish that our method recovers the known bounds for stationary settings (as well as cases where $V_T$ does not change gradually along the time horizon).
C. Comparison of Regularity and Variability
We now show that $V_T$ and $C_T$ are not comparable in general. To this end, we consider the classical problem of prediction with expert advice. In this setting, the learner deals with the linear loss $f_t(x) = \langle f_t, x \rangle$ on the $d$-dimensional probability simplex. First, consider a sequence of loss vectors that changes only slightly from round to round while its minimizer moves across the simplex. Setting $u_t$, the comparator of round $t$, to be the minimizer of $f_t$, i.e. $u_t = x^*_t$, and evaluating (3) and (5), respectively, one sees that $V_T$ is considerably smaller than $C_T$ in this scenario. On the other hand, consider prediction with expert advice with two experts. Let $f_t = (-1/2, 0)$ on even rounds and $f_t = (0, 1/2)$ on odd rounds. Expert 1 remains the best throughout the game, and thus $C_T = O(1)$, while the variation is $V_T = \Theta(T)$. Therefore, one can see that taking into account only one measure might lead to suboptimal regret bounds. We show that both measures play a key role in our regret bound. Finally, we note that for particular choices of the predictable sequence, the notion of $D_T$ can be related to $V_T$ in certain cases, yet we keep the predictable sequence arbitrary and thus treat it as playing a role separate from $V_T$ and $C_T$.
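A quick numerical check of the two-expert example above is sketched below. It simply evaluates the path length of the (constant) best-expert comparator and the temporal variability of the alternating linear losses, taking the supremum over the simplex at its vertices; the function name and printed values are illustrative.

```python
import numpy as np

def c_t_and_v_t(T: int):
    # Alternating loss vectors: (-1/2, 0) on even rounds, (0, 1/2) on odd rounds.
    f = np.array([(-0.5, 0.0) if t % 2 == 0 else (0.0, 0.5) for t in range(T)])
    # The minimizer of a linear loss over the simplex is a vertex; here expert 1
    # (index 0) has the smaller loss on every round, so the comparator never moves.
    u = np.repeat([[1.0, 0.0]], T, axis=0)
    C_T = np.abs(np.diff(u, axis=0)).sum()           # path length of the comparators
    V_T = np.abs(f[1:] - f[:-1]).max(axis=1).sum()   # sup over the simplex of |f_t(x) - f_{t-1}(x)|
    return C_T, V_T

for T in (10, 100, 1000):
    print(T, c_t_and_v_t(T))  # C_T stays 0 while V_T grows linearly in T
```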
A. Optimistic Mirror Descent and Relation to Regularity
We now outline the OMD algorithm previously proposed in [10]. Let $R$ be a 1-strongly convex function with respect to a norm $\|\cdot\|$, and let $D_R(\cdot, \cdot)$ represent the Bregman divergence with respect to $R$. Also, let $\mathcal{H}_t$ be the set containing all information available to the learner at the beginning of time $t$. Then, the learner can compute the vector $M_t : \mathcal{H}_t \to \mathbb{R}^d$, which we call the predictable process. Supposing that the learner has access to the side information $M_t \in \mathbb{R}^d$ from the outset of round $t$, the OMD algorithm is characterized via the following interleaved sequence:

$$x_t = \arg\min_{x \in X}\ \eta_t \langle x, M_t \rangle + D_R(x, y_{t-1}), \qquad y_t = \arg\min_{y \in X}\ \eta_t \langle y, \nabla_t \rangle + D_R(y, y_{t-1}),$$

where $\nabla_t \triangleq \nabla f_t(x_t)$, and $\eta_t$ is the step size that can be chosen adaptively to attain low regret. One can observe that for $M_t = 0$, the OMD algorithm amounts to the well-known Mirror Descent algorithm [16], [17]. On the other hand, the special case $M_t = \nabla_{t-1}$ recovers the scheme proposed in [13]. It is shown in [10] that the static regret satisfies $\mathrm{Reg}^s_T \le 4R_{\max}\sqrt{D_T + 1}$, using a step size tuned adaptively to the deviations observed so far. The following lemma extends the result to an arbitrary sequence of comparators: with $R$ being 1-strongly convex with respect to a norm $\|\cdot\|$ and $\|\cdot\|_*$ denoting the dual norm, for any $L > 0$, employing the prescribed time-varying step size and running the Optimistic Mirror Descent algorithm against any comparator sequence $\{u_t\}_{t=1}^T$ yields a regret of order $\sqrt{D_T + 1}\,\big(1 + C_T(u_1, \ldots, u_T)\big)$ (Lemma 1). Lemma 1 underscores the fact that one can get a tighter bound for regret once the learner advances a sequence of conjectures $\{M_t\}_{t=1}^T$ well-aligned with the gradients. Moreover, if the learner has prior knowledge of $C_T$ (or an upper bound on it), then the regret bound would be $O\big(\sqrt{(D_T + 1)\,C_T}\big)$ by tuning $L$.
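A minimal runnable sketch of this interleaved update is given below, specialized to the Euclidean regularizer $R(x) = \tfrac{1}{2}\|x\|^2$ on a box (so each Bregman step reduces to a projected step). The drifting quadratic losses, the constant step size, and the choice $M_t = \nabla_{t-1}$ are illustrative assumptions rather than the paper's tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, eta = 2, 300, 0.1
proj = lambda z: np.clip(z, 0.0, 1.0)        # Euclidean projection onto the box X = [0, 1]^d

theta = np.clip(0.5 + np.cumsum(rng.normal(0, 0.01, (T, d)), axis=0), 0, 1)  # drifting minimizers
grad = lambda x, t: 2 * (x - theta[t])       # gradient of f_t(x) = ||x - theta_t||^2

y = np.full(d, 0.5)                          # secondary sequence y_{t-1}
M = np.zeros(d)                              # predictable sequence M_t (here: previous gradient)
dyn_regret = 0.0
for t in range(T):
    x = proj(y - eta * M)                    # x_t = argmin eta*<x, M_t> + (1/2)||x - y_{t-1}||^2
    g = grad(x, t)                           # nabla_t = grad f_t(x_t)
    y = proj(y - eta * g)                    # y_t = argmin eta*<y, nabla_t> + (1/2)||y - y_{t-1}||^2
    M = g                                    # next round's hint M_{t+1} = nabla_t
    dyn_regret += np.sum((x - theta[t]) ** 2)  # f_t(x_t) - f_t(x_t^*), since f_t(x_t^*) = 0

print(f"dynamic regret after T={T} rounds: {dyn_regret:.3f}")
```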
Note that when the function $R$ is Lipschitz on $X$, the Lipschitz condition on the Bregman divergence is automatically satisfied. For the particular case of the KL divergence, this can be achieved by mixing in a uniform distribution to stay away from the boundaries (see, e.g., Section 4.2 of that paper in this regard). In this case, the constant $\gamma$ is $O(\log T)$.
B. The Adaptive Optimistic Mirror Descent Algorithm
The main objective of the paper is to develop the Adaptive Optimistic Mirror Descent (AOMD) algorithm. The AOMD algorithm incorporates all the notions of variation, $D_T$, $C_T$ and $V_T$, to derive a comprehensive regret bound. The proposed method builds on the OMD algorithm with an adaptive step size, combined with a doubling trick applied to a threshold that grows non-monotonically (see, e.g., [1], [10] for applications of the doubling trick to monotone quantities).
The scheme is adaptive in the sense that no prior knowledge of D T , C T or V T is necessary.
Observe that prior knowledge of a variation budget (an upper bound on $V_T$) does not tell us how the changes between cost functions are distributed throughout the game. For instance, the variation can increase gradually along the time horizon, while it can also take place in the form of discrete switches. The learner does not have any information about the variation pattern. Therefore, she must adopt a flexible strategy that achieves low regret in the benign case of finitely many switches or shocks, while simultaneously being able to compete with the worst case of gradual change. Before describing the algorithm, let us first use Lemma 1 to bound the general dynamic regret in terms of these quantities. We now describe the AOMD algorithm, shown in Algorithm 1, and prove that it automatically adapts to $V_T$, $D_T$ and $C_T$.
The algorithm can be cast as a repeated OMD using different step sizes. The learner sets the parameter $L = 3R_{\max}$ in Lemma 1 and runs the OMD algorithm. Along the process, the learner collects the deviation, variation and regularity observed so far, and checks the doubling condition of Algorithm 1 after each round. Once the condition is satisfied, the learner doubles $L$, discards the accumulated deviation, variation and regularity, and runs a new OMD algorithm.

Note, importantly, that the doubling condition results in a non-monotone sequence of step sizes during the learning process.
Algorithm 1 Adaptive Optimistic Mirror Descent Algorithm
Parameters: $R_{\max}$, some arbitrary $x_0 \in X$
for t = 1, ..., T:
    % set step size and perform optimistic mirror descent update
    % check the doubling condition; if it holds, double L and reset the accumulated deviation, variation and regularity
end for
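To make the restart structure concrete, here is a minimal sketch, in Python, of the doubling-style bookkeeping described above; the specific threshold used (accumulated complexity exceeding $L^2$) is a simplified placeholder assumption, not the paper's exact doubling condition.

```python
def aomd_epochs(complexity_per_round, L0=3.0):
    """Assign each round to a doubling epoch under a simplified restart rule."""
    L, acc, epoch, assignment = L0, 0.0, 0, []
    for c_t in complexity_per_round:
        acc += c_t                  # deviation + variation + regularity observed this round
        assignment.append(epoch)
        if acc > L ** 2:            # placeholder doubling condition
            L *= 2                  # double the tuning parameter L
            acc = 0.0               # discard the accumulated quantities
            epoch += 1              # a fresh OMD run starts on the next round
    return assignment

# Example: a burst of change mid-stream triggers a restart and a larger L afterwards.
rounds = [0.5] * 20 + [10.0] * 3 + [0.5] * 20
print(aomd_epochs(rounds))
```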
Notice that, once we have completed running the algorithm, $N$ is the number of doubling epochs, $\Delta_i$ is the number of instances in epoch $i$, and $k_i$ and $k_{i+1} - 1$ are the start and end points of epoch $i$. Also, there is a technical reason for the initialization choice of $L$, which shall become clear in the proof of Lemma 2. Theorem 3 shows the bound enjoyed by the proposed AOMD algorithm.
Theorem 3 states that the AOMD algorithm enjoys a dynamic regret bound whose leading term is $\tilde{O}(\sqrt{D_T + 1})$ plus a term that adapts to the smaller of the complexity measures, where $\tilde{O}(\cdot)$ hides a $\log T$ factor.
Based on Theorem 3, one can tabulate the resulting bounds on $\mathrm{Reg}^d_T$ for various regimes (disregarding the first term $\tilde{O}(\sqrt{D_T + 1})$ in the bound above). [Table of regimes and rates omitted.] The following remarks are in order: when the environment consists of $B$ batches on which the losses are fixed and the gradients are Lipschitz continuous for all $i \in [B]$ and all $x, y \in X$, set $M_t = \nabla f_i(x_{t-1})$. In this case, the OMD corresponding to each batch can be recognized as the Mirror Prox method [18], which results in $\tilde{O}(1)$ regret during each period. Also, since $C_T = O(1)$, the bound in Theorem 3 is of $O(\log T)$.
A. Competing with Strategies
So far, we have mainly considered the dynamic regret $\mathrm{Reg}^d_T$ defined in Equation 2. However, in many scenarios one might want to consider regret against a more specific set of strategies, defined as

$$\mathrm{Reg}^{\Pi}_T \triangleq \sum_{t=1}^{T} f_t(x_t) - \min_{\pi \in \Pi} \sum_{t=1}^{T} f_t\big(\pi_t(f_1, \ldots, f_{t-1})\big),$$

where each $\pi \in \Pi$ is a sequence of mappings $\pi = (\pi_1, \ldots, \pi_T)$ and $\pi_t : \mathcal{F}^{t-1} \to X$. Notice that if $\Pi$ is the set of all mappings, then $\mathrm{Reg}^{\Pi}_T$ corresponds to the dynamic regret $\mathrm{Reg}^d_T$, and if $\Pi$ corresponds to the set of constant, history-independent mappings, that is, each $\pi \in \Pi$ is indexed by some $x \in X$ and $\pi^x_1(\cdot) = \ldots = \pi^x_T(\cdot) = x$, then $\mathrm{Reg}^{\Pi}_T$ corresponds to the static regret $\mathrm{Reg}^s_T$. We further define an analogous regularity measure $\bar{C}_{\Delta}(\cdot)$ for the strategies in $\Pi$ evaluated on the observed losses; then, for any $T$ and any $f_1, \ldots, f_T$, a simple modification of the AOMD algorithm in which the $C^{(N)}$'s are replaced by $\bar{C}_{\Delta_N}(f_{k_N : k_{N+1} - 1})$ leads to the following corollary of Theorem 3.
Corollary 4.
Assume that $D_R(x, z) - D_R(y, z) \le \gamma \|x - y\|$ for all $x, y, z \in X$. Then the AOMD algorithm with the modification mentioned above achieves an analogous bound on $\mathrm{Reg}^{\Pi}_T$. The corollary naturally interpolates between the static and the dynamic regret. In other words, letting $\bar{C}_T(f_{1:T}) = 0$ (which holds for constant mappings), we recover the result of [11] (up to logarithmic factors), whereas $\bar{C}_T(f_{1:T}) = C_T$ simply recovers the regret bound in Theorem 3 corresponding to dynamic regret. The extra log factor is the cost of the adaptivity of the algorithm, as we assume no prior knowledge about the environment.
B. Switching Zero-sum Games with Uncoupled Dynamics
Consider two players playing T zero sum games defined by matrices A t ∈ [−1, 1] m×n for each t ∈ [T ]. We would like to provide strategies for the two players such that, if both players honestly follow the prescribed strategies, the average payoffs of the players approach the average minimax value for the sequence of games at some fast rate.
Furthermore, we would also like to guarantee that if one of the players (say the second) deviates from the prescribed strategy, then the first player still has small regret against sequences of actions that do not change drastically. To this end, one can use a simple modification of the AOMD algorithm for both players that uses the KL divergence as $D_R$ and mixes in a bit of the uniform distribution on each round, producing an algorithm similar to the one in [11] for unchanging games with uncoupled dynamics. The following proposition provides bounds for when both players follow the strategy, and a bound on the regret of Player I when Player II deviates from the strategy.
On round $t$, Player I performs its optimistic mirror descent update (with the KL divergence and the uniform mixing), and simultaneously Player II performs the analogous update. Note that in the description of the algorithm, as well as in the following proposition and its proof, any letter with the prime symbol refers to Player II, and is used to differentiate it from its counterpart for Player I.
Proposition 5. When Player I uses the prescribed strategy, irrespective of the actions of Player II, the regret of Player I w.r.t. any sequence of actions $u_1, \ldots, u_T$ is bounded in terms of the variation of the game matrices and the regularity of the comparator sequence.
Further, if both players follow the prescribed strategies, then, as long as the condition on $L$ given in (10) holds,
we get a bound of the form $\sum_{t} \|A_{t-1} - A_t\|_\infty + 32L\,\log(T^2 n)\,C_T + \log(T^2 m)\,C'_T + 2\log(T^4 nm)$. A simple consequence of the above proposition is that if, for instance, the game matrix $A_t$ changes at most $K$ times over the $T$ rounds, and we knew this fact a priori, then by letting $L = 1/\sqrt{\log(T^2 n)}$, the regret of Player I w.r.t. any sequence of actions that switches at most $K$ times, even when Player II deviates from the prescribed strategy, is $O\big(\sqrt{(K + 2)\log(T^2 n)\,T}\big)$. At the same time, if both players follow the strategy, then the average payoffs of the players converge to the average minimax equilibrium at a rate of $O\big(L(K + 2)\log(T^4 nm)\big)$ under the condition on $L$ given in (10). This shows that if the game matrix only changes/switches a constant number of times, then the players get a $\sqrt{\log(T)\,T}$ regret bound against arbitrary sequences of comparator actions that switch at most $K$ times, while simultaneously getting a convergence rate of $O(\log(T))$ to the average equilibrium when both players are honest. Also, when we let $K = 0$ and set $L$ to some constant, the proposition recovers the rate in the static setting [11], where the matrix sequence is time-invariant.
V. CONCLUSION
In this paper, we proposed an online learning algorithm for dynamic environments. We considered time-varying comparators to measure the dynamic regret of the algorithm. Our proposed method is fully adaptive in the sense that the learner needs no prior knowledge of the environment. We derive a comprehensive upper bound on the dynamic regret capturing the interplay of regularity in the function sequence versus the comparator sequence. Interestingly, the regret bound adapts to the smaller quantity among the two, and selects the best of both worlds. As an instance of dynamic regret, we considered drifting zero-sum, two-player games, and characterized the convergence rate to the average minimax equilibrium in terms of variability in the sequence of payoff matrices.
ACKNOWLEDGEMENTS
We gratefully acknowledge the support of ONR BRC Program on Decentralized, Online Optimization, NSF under grants CAREER DMS-0954737 and CCF-1116928, as well as Dean's Research Fund.
APPENDIX : PROOFS
Proof of Lemma 1. For any $u_t \in X$, the instantaneous regret $f_t(x_t) - f_t(u_t)$ can be bounded as in (11). First, observe that for any primal-dual norm pair we have $\langle a, b \rangle \le \|a\|\,\|b\|_*$. Moreover, any update of the form $a^* = \arg\min_{a \in X} \langle a, x \rangle + D_R(a, c)$ satisfies, for any $d \in X$, $\langle a^* - d, x \rangle \le D_R(d, c) - D_R(d, a^*) - D_R(a^*, c)$. Combining the preceding relations and returning to (11), we obtain the intermediate bound, where in the last step we appealed to strong convexity: $D_R(x, y) \ge \tfrac{1}{2}\|x - y\|^2$ for any $x, y \in X$. Using the simple inequality $ab \le \tfrac{\rho a^2}{2} + \tfrac{b^2}{2\rho}$ for any $\rho > 0$ to split the product term, applying the bound, and summing over $t \in [T]$ yields the claimed expression, where we used the Lipschitz continuity of $D_R$ in the penultimate step. Now, setting the step size as prescribed, appealing to the convexity of $\{f_t\}_{t=1}^T$, and replacing $C_T$ (3) and $D_T$ (4) in the above, completes the proof.
Proof of Lemma 2. Our choice of $L > 2R_{\max}$ guarantees that any sequence of fixed comparators $u_t = u$ for $t \in [T]$ belongs to $\mathcal{U}_T$, and hence $(u^*_1, \ldots, u^*_T)$ exists. Noting that $(u^*_1, \ldots, u^*_T)$ is an element of $\mathcal{U}_T$, we have $\gamma \sum_{t=1}^{T} \|u^*_t - u^*_{t-1}\| + 4R_{\max}^2 \le L^2$. We now apply Lemma 1 to $\{u^*_t\}_{t=1}^T$ to bound the dynamic regret for an arbitrary comparator sequence $\{u_t\}_{t=1}^T$, where the last step follows from the definition of $R_{\max}^2$: by the strong convexity of $D_R(x, y)$, we get that $\|x - y\| \le \sqrt{2}R_{\max}$ for any $x, y \in X$.

This entails that, once we divide the horizon into $B$ batches and use a single, fixed point as a comparator along each batch, the regularity of the comparator sequence is at most $\sqrt{2}R_{\max} B$, since there are at most $B$ changes in the comparator sequence along the horizon. Now choose $B$ appropriately and, for ease of notation, assume that $T$ is divisible by $B$. Noting that $f_t(x^*_t) \le f_t(u_t)$, we use an argument similar to that of [14] within each batch, noting that $x^*_{t_i}$ is fixed for each batch $i$. Substituting our choice of $B$, the resulting piecewise-constant comparator belongs to $\mathcal{U}_T$, and (17) follows by the optimality of $(u^*_1, \ldots, u^*_T)$. We now claim that (19) holds for any $t \in [(i-1)(T/B) + 1,\ i(T/B)]$. Assuming otherwise, there must exist a $t_i \in [(i-1)(T/B) + 1,\ i(T/B)]$ for which the claim fails; the resulting relation for $t = t_i$ violates the optimality of $x^*_{t_i}$, which is a contradiction. Therefore, Equation (19) holds for any $t \in [(i-1)(T/B) + 1,\ i(T/B)]$. Combining (16), (18) and (19) we obtain (20), and using this in Equation (14) we conclude the claimed upper bound, thereby completing the proof.
Proof of Theorem 3.
For the sake of clarity in presentation, we stick to the following notation for the proof, for any doubling epoch $i = 1, \ldots, N$, where we recall that $k_{i+1} - 1$ is the last instance of epoch $i$. Therefore, any symbol with a lower bar refers to its corresponding quantity with only the value of the last instance of that interval removed.
Let the AOMD algorithm run with the step size given by Lemma 1 in the following form and let L i be tuned with a doubling condition explained in the algorithm. Once the condition stated in the algorithm fails, the following pair of identities must hold Observe that the algorithm doubles L i only after the condition fails, so at violation points we suffer at most 2G by boundedness (6). Then, under purview of Lemma 2, it holds that where the last step follows directly from (21) and the fact that D (i) ≤ D (i) . Bounding D (i) L i in above, using the second inequality in (21), we get Plugging the bound above into (22) and noting that by Jensen's inequality, we obtain where we used the first inequality in (21) to bound the last term. Given the condition in the indicator function 1 {·}, we can simplify above to derive, Given the fact that we return to (23) to derive where we bounded the sums using the following fact about the summands To bound the number of batches N , we recall that L i = 3R max 2 i−1 , and use the second inequality in (21) to bound L N −1 as follows In view of the preceding relation and (24), we have where κ 4 + log 2 2γR max T + 4R 2 max − 2 log 2 (3R max ), thereby completing the proof.
Proof of Proposition 5.
Assume that Player I uses the prescribed strategy. This corresponds to using the optimistic mirror descent update with $R(x) = \sum_{i=1}^{n} x_i \log(x_i)$ as the function that is strongly convex w.r.t. $\|\cdot\|_1$.
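As a concrete, non-essential illustration of this kind of update, the following is a minimal sketch (not the authors' exact algorithm) of an optimistic mirror-descent step with the negative-entropy regularizer, i.e., a multiplicative-weights-style update on the simplex. The mixing of a $1/T^2$ fraction of the uniform distribution mentioned below is included; the step size `eta` and the predictable loss estimate `grad_pred` are placeholders.

```python
import numpy as np

def optimistic_md_step(x_prev, grad_prev, grad_pred, eta, T):
    """One optimistic mirror-descent step on the simplex with R(x) = sum_i x_i log x_i.

    x_prev    : current distribution over n actions (secondary sequence)
    grad_prev : most recent observed loss vector
    grad_pred : predictable estimate of the next loss (e.g., the previous one)
    eta       : step size for this round (placeholder schedule)
    T         : horizon, used to mix in 1/T^2 of the uniform distribution
    """
    n = len(x_prev)

    # Entropic (multiplicative-weights) update for the secondary sequence.
    x_new = x_prev * np.exp(-eta * grad_prev)
    x_new /= x_new.sum()

    # Mix in a 1/T^2 fraction of the uniform distribution so every coordinate
    # stays bounded away from zero, as in the prescribed strategy.
    x_mixed = (1.0 - 1.0 / T**2) * x_new + (1.0 / T**2) * np.ones(n) / n

    # Optimistic step: play against the predicted loss from the mixed iterate.
    x_play = x_mixed * np.exp(-eta * grad_pred)
    x_play /= x_play.sum()
    return x_play, x_mixed
```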
Following the line of proof in Lemma 1, in particular, using Equation 12 for the specific case with D R as KL divergence, we get that for any t and any u t ∈ ∆ n , . Now let us bound for some i the term, log xt[i] Using this we can conclude that : Summing over t ∈ [T ] we obtain that : Note that 1 ηt ≤ O √ T and so assuming T is large enough, 1 Now note that we can rewrite the first sum in the above bound and get : Since by definition ofx ′ t−1 , we are mixing in 1/T 2 of the uniform distribution we have that for any i,x ′ t−1 [i] > 1 T 2 n and, since η t 's are non-increasing, we continue bounding above as using the above in Equation 25 we get Notice that our choice of step size given by, guarantees that product term, we see that where we used the bound √ c ≤ c + 1 for any c ≥ 0 in the penultimate line. Similar bounds as Equations (31) At−1 − At ∞ + 32L log(T 2 n)CT + log(T 2 m)C ′ T + 2 log(T 4 nm) + CT + C ′ T + 4 xt − xt 2 1 + 2
Comparison of the Recycling Behavior of a Polypropylene Sample Aged in Air and in Marine Water
During the processing and during their lifetime, polymers are subjected to several environmental stresses—thermomechanical, photo-oxidative, etc.—that can strongly modify their chemical and molecular structure and, consequently, their morphology. Reduction of the molecular weight and formation of double bonds and oxygenated groups are the main changes observed as a consequence of the degradation. As a result of these changes, the macroscopic properties are dramatically modified. These changes can have a relevant effect if the post-consumer plastic manufacts are recycled. In this work, a sample of polypropylene subjected to two different degradation histories—photo-oxidation in air and in marine water—is reprocessed two times in a mini twin-screw extruder in the same processing conditions. The effect of the thermomechanical degradation during the reprocessing is different. Indeed, the less severe degraded sample shows a higher degradation level during reprocessing because the shear stress is larger. This means that the thermomechanical degradation kinetics is larger in the less degraded samples. Nevertheless, the final properties of the recycled polymers are different because the properties of the photo-oxidized samples before reprocessing were very different.
Introduction
The recycling of post-consumer polymers depends mainly on their chemical and molecular structure [1][2][3][4][5] and on their morphology and composition in case of blends or composite [6][7][8][9]. The post-consumer polymers have been subjected to thermomechanical stress during the processing in the melt state and to different environmental stresses (temperature, ultraviolet rays, etc.) during their lifetime. On the other hand, the degradation undergone during the first processing can give rise to a more rapid degradation if the polymer is subjected to external stress such as the photo-oxidation or different processing conditions [10][11][12]. In this case, indeed, the presence of oxygenated groups or double bonds accelerates the degradation kinetics. The degradation gives rise to the reduction of molecular weight, the formation of oxygenated groups and double bonds and other structural and molecular changes. Moreover, changes of the morphology in semicrystalline polymers can also occur [6,7]. As the degradation depends on the initial chemical and molecular structure, due to these changes, the answer to the reprocessing processes in molten state should be different with respect to that of the same undegraded or a differently degraded polymer. As the degradation gives rise to polymers with lower molecular weight and with oxygenated groups, a study about the effect of the degradation on the recycling should investigate the influence of these two parameters on the processability and on the final properties of the reprocessed polymers. If temperature and shear stress are the driving force of the thermomechanical degradation during processing, the recycling in the same processing conditions of the same polymer but with different molecular weight should give different degradation kinetics as the viscosity of the different samples gives rise to different shear stress acting on the melt and then to a different driving force able to break the macromolecules. Moreover, the presence of oxygenated groups or vinyl bonds changing the energy necessary to break these bonds with respect to the -C-C-and -C-H-bonds present in the virgin polyolefins, such as polyethylene of polypropylene, can, in its turn, modify the degradation kinetics.
Papers investigating the effect of the molecular weight on the thermal degradation [13,14] and on the ultrasonic degradation [15,16] indicate a moderate effect of the molecular weight and, in particular, the cleavage of the macromolecules increases with increasing the molecular weight. In the papers [17][18][19][20][21], it was demonstrated that the polymer degraded more rapidly than for samples subjected only to processing or aging separately. Moreover, it was reported that reprocessing after aging caused a degradation greater than for those samples that were processed and then aged. An interesting way to evaluate the effect of the previous "life", and then of the molecular structure on the recycling of post-consumer plastics, is the recycling of ocean-bound plastics. Indeed, the degradation of the polymers is certainly different than that undergone in air [22][23][24]. In particular, at the same degradation time, the level of degradation is lower than that observed in the same conditions, but in air; however, the previous papers discuss only the effect of the degradation in the sea water, without any comparison with the recycling of the same polymers degraded in air.
To our best knowledge, the effect of the degradation undergone during the lifetime in different environments but in the same conditions of external driving forces (temperature, UV irradiation, etc.) on the recycling of polymers, and then the effect of the molecular weight and of the presence of oxygenated groups on the thermomechanical degradation, has not been investigated.
In this work, the recycling behavior of a polypropylene sample degraded for the same fixed time under two different conditions has been investigated. The degradation was carried out by photo-oxidizing the polypropylene in two different environments, air and marine water, as reported in our previous paper [25]. As reported in that work, the photo-oxidation in marine water is less severe than in air under the same irradiation conditions because of the lower availability of oxygen in marine water. The two degraded samples show different molecular weights and different numbers of oxygenated groups, and this different chemical and molecular structure gives rise to different behaviors when they are reprocessed. In particular, the degradation during processing is more severe for the less degraded sample. However, the two degradation kinetics are not so different, although the molecular weights, and therefore the viscosities, are very different. This behavior has been interpreted considering that the less degraded sample shows a higher viscosity and therefore a higher value of the shear stress acting on the melt; on the other hand, the presence of more labile bonds in the more severely degraded sample can compensate, at least in part, for the lower shear stress acting on this last sample.
This investigation can give many useful pieces of information when the recycling is carried out on post-consumer plastic manufacts with different level of degradation, including, for example, bottles and films degraded in different environments such as in air or in the sea.
Material, Degradation and Reprocessing
The polypropylene (PP) used in this work is a random polypropylene copolymer, Moplen RP34OH, manufactured by LyondellBasell (LyondellBasell, Ferrara, Italy), having a melt flow index (MFI) of 1.8 g/10 min (230 °C/2.16 kg) and a density of 0.90 g/cm³. The photo-oxidation was carried out in a QUV at 40 °C for 192 h in air and in marine water, following the same procedure reported in [25]. The lamps were UVB 313 nm with a UV irradiation peak at 313 nm. As for the samples irradiated in marine water, the sheets were kept in aluminum trays immersed in water and covered by a film of PVC. The same arrangement was used for the samples irradiated in air. The trays were kept on the bottom of the QUV under the lamps. The distance of the samples from the lamps was about 5 cm from the bottom lamp to about 22 cm from the top lamp. A picture of the experimental setup is reported in Figure 1. The reprocessing of the samples photo-oxidized both in air and in marine water was performed in a laboratory conical mini twin-screw extruder (Minilab, Thermo Haake, Karlsruhe, Germany) at a temperature of 240 °C and at a rotational speed of 60 rpm. The degraded samples were extruded up to two times in the same conditions.
Characterizations
The FTIR-ATR spectra were recorded by using a Spectrum One spectrometer (Perkin-Elmer, Norwalk, CT, USA), equipped with integrated Spectrum One software. The spectra were obtained through 8 scans in the range 500-4000 cm −1 . The spectra resolution was 4 cm −1 . The specimens for the FTIR-ATR spectra were carefully wiped and dried before the measurement.
The rheological characterization was performed by using a rotational rheometer ARES G2 (TA Instruments, New Castle, DE, USA) with parallel plate, at the temperature of 190 °C in the frequency range of 0.1-100 rad/s. The strain was 5% for all the tests. The diameter of the specimens was 25 mm.
The mechanical characterization was carried out in tensile mode using an Instron (Instron, High Wycombe, PA, USA) mod. 3365 universal machine at a crosshead speed of 1 mm/min until a deformation of 3%, and then at a crosshead speed of 100 mm/min until final rupture. The dimensions of the specimens for the tensile tests were 90 × 10 mm. Seven replicates for each measurement were performed, in order to obtain statistically relevant results. The reproducibility of the results was good (max ± 8%).

The samples used for all the tests, about 0.7 mm thick, were obtained by compression molding in a laboratory Carver press (Carver, Wabash, IN, USA) at 190 °C and at a mold pressure of 300 psi for about 2 min.

Characterization of the Degraded Samples

Figure 2 reports the ATR spectra of the samples investigated in this work, PP degraded 192 h in air (PP-A) and PP degraded 192 h in marine water (PP-SW), compared with the spectra of the virgin polymer. The aged polymers show the formation of oxygenated groups mainly in two different regions centered at about 1720 and 3340 cm−1. The first band is attributed to the formation of ketone groups, while the second band is attributed to the formation of hydroxyl groups. Moreover, a slight rise of the spectra in the range of 1600-1700 cm−1, attributable to the formation of vinyl bonds and carboxyl groups [26], was also observed. The more oxygenated sample was the sample photo-oxidized in air. This behavior was interpreted [25] as a result of the lower content of oxygen available in the marine water.

The reduction of the molecular weight can be evaluated by considering that the Newtonian viscosity is [27]:

η0 = k·Mw^3.4 (1)

where η0 is the Newtonian viscosity, k a constant and Mw the weight average molecular weight. The dimensionless molecular weight, M̄w, of the sample at a given irradiation time is:

M̄w = Mw(t)/Mw(0) = (η0(t)/η0(0))^(1/3.4) (2)

where η0(t) is the Newtonian viscosity at a given irradiation time, t, and η0(0) is the Newtonian viscosity of the virgin sample. The molecular weight is reduced to about 34% of the initial value for the sample irradiated in air and to about 76% of the initial value for the sample irradiated in marine water. The lower degradation kinetics for the sample irradiated in marine water is due, as already reported [24], to the lower content of oxygen available in marine water.
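As a small worked illustration of Equations (1) and (2), the sketch below computes the dimensionless molecular weight from Newtonian viscosities; the numerical viscosity values used here are placeholders chosen to reproduce the reported 34% and 76% ratios, not measured data.

```python
def dimensionless_mw(eta0_aged, eta0_virgin, exponent=3.4):
    """Dimensionless molecular weight Mw(t)/Mw(0) obtained from eta0 = k * Mw**3.4."""
    return (eta0_aged / eta0_virgin) ** (1.0 / exponent)

# Hypothetical Newtonian viscosities (Pa*s); real values must come from the flow curves.
eta0_virgin = 10_000.0
eta0_air = eta0_virgin * 0.34 ** 3.4   # would correspond to ~34% of the initial Mw (PP-A)
eta0_sea = eta0_virgin * 0.76 ** 3.4   # would correspond to ~76% of the initial Mw (PP-SW)

print(dimensionless_mw(eta0_air, eta0_virgin))  # ~0.34
print(dimensionless_mw(eta0_sea, eta0_virgin))  # ~0.76
```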
In Table 1, the ultimate mechanical (tensile) properties, tensile strength (TS) and elongation at break (EB), of the same investigated samples are reported. As expected, the decrease in both tensile strength and elongation at break is dramatic only for the more degraded sample and quite modest for the less degraded sample. In particular, the elongation at break, which is very sensitive to changes of the molecular structure and morphology, is strongly reduced only in the sample photo-oxidized in air.
Characterization of the Reprocessed Samples
In Figure 4a,b the flow curves of the two recycled samples are reported. Of course, the viscosity decreases with the number of extrusions for both samples, showing the action of the thermomechanical stress that is able to break the macromolecules during both passages in the mini twin-screw extruder. The decrease in the Newtonian viscosity is larger for the PP-SW sample, which is clear evidence of a more severe thermomechanical degradation for this last sample. Of course, this is due to the higher viscosity of this sample, which generates a larger shear stress acting on the melt with respect to the PP-A. Moreover, the non-Newtonian effect becomes less pronounced with increasing reprocessing steps. The decrease in the viscosity is certainly relevant due to the high values of the shear stress in the twin-screw extruder. Although these high shear stress values are not typical of the stress encountered in single-screw extruders, these processing conditions are useful to magnify the thermomechanical degradation and the influence of the different molecular and chemical structure on the recycling.
The values of the tensile strength and elongation at break of the samples reprocessed one and two times in the twin-screw extruder are reported in Table 2. For tensile strength and elongation at break, too, a decrease is observed with increasing number of extrusions. A remarkable reduction of elongation at break and tensile strength is mainly observed for the sample PP-SW. However, the decrease in the tensile strength after two extrusions is about 43% for PP-A and about 47% for PP-SW, and the decrease in the elongation at break is about 85% for PP-A and about 88% for PP-SW. It is worth mentioning, then, that the kinetics of degradation seems similar for both samples, as will be discussed in the following, but the absolute values of the mechanical properties are very different because the properties of the aged samples were very different. Figure 5 shows the stress-strain curves.

The ATR spectra of all samples are reported in Figure 6a,b. The spectral bands at 3300-3400 and 1600-1700 cm−1 increase with the number of extrusions. The carbonyl band is centered at about 1720 cm−1, but a significant increase is also observed in the spectral range of 1600-1700 cm−1, relative to the formation of vinyl and carboxylic acid groups. However, the formation of new oxygenated groups is quite low for both bands due, presumably, to the low presence of oxygen in the mini-extruder and the low residence times. A small increase of the spectra at about 888 cm−1 is also observed, as represented in Figure 6a,b, due to the formation of vinylidene compounds.
Discussion
In order to investigate the effect of the different molecular structure on the degradation kinetic, the rheological and mechanical results have been normalized to put in evidence the kinetic of degradation. In Figure 7, the dimensionless values of the Newtonian viscosity of the reprocessed samples are reported as a function of the number of extrusions. The dimensionless values are calculated as the ratio between the value of the Newtonian viscosity after one and two extrusions by the value of the photo-oxidized, unprocessed sample. In Figure 7, 0 means the unprocessed samples, while 1 and 2 refer to one and two reprocessing steps.
It is evident that the decrease in the viscosity, and then in the molecular weight, is faster for the sample degraded in marine water. It is, however, worth mentioning that the rate of decay of the viscosity is high in the first extrusion, and then the slope of the curve decreases for both samples and the slopes of the curves of the two samples become similar. This behavior can be interpreted considering that the driving force of the thermomechanical degradation is the shear stress, proportional to the viscosity, which decreases, so that the viscosity of the PP-SW sample approaches that of the sample degraded in air. The same comments can be made for the dimensionless values of the elongation at break reported in the same figure. The elongation at break is the mechanical property most dependent on variations of the molecular structure of the polymer.
The mechanical stress acting on the melt is the shear stress in the processing conditions, i.e., the shear rate, γ̇, multiplied by the viscosity in the processing conditions, η (τ = γ̇·η). The shear rate is, of course, the same in all the tests because the screw speed is the same, while the viscosity is different for the two samples (see Figure 3): higher for the sample PP-SW and lower for the sample degraded in air. This means that the thermomechanical stress is lower for PP-A and higher for PP-SW, and the consequent degradation is higher for PP-SW and lower for PP-A. However, the difference in the degradation kinetics is not dramatic and seems smaller than expected on the basis of the very different viscosities of the two unprocessed samples, and hence of the different shear stresses to which the polymers are subjected. A possible interpretation of this behavior can be correlated with the different chemical structures of PP-A and PP-SW. Indeed, the first sample presents an initial number of oxygenated groups certainly higher than that observed in the PP-SW sample. The bond energies of the generic C-C=O bonds are lower than those of the other C-C bonds and, in analogy with the photo-oxidation, Norrish reactions can be invoked to interpret this phenomenon. The degradation in the PP-SW sample is mainly due to the high thermomechanical stress. In the PP-A sample, however, the lower shear stress is still effective because of the Norrish reactions, Figure 8, due to the higher presence of oxygenated groups. The two samples show similar degradation kinetics, although they are subjected to different driving thermomechanical forces, because of the different energy necessary for the cleavage of the different carbon-carbon links.
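To make the argument about the driving force concrete, the sketch below compares the nominal shear stress τ = η·γ̇ acting on the two melts; the shear rate and viscosity values are purely illustrative placeholders (the actual values depend on the Minilab geometry and on the measured flow curves).

```python
def shear_stress(viscosity_pa_s, shear_rate_s1):
    """Nominal shear stress acting on the melt: tau = eta * gamma_dot (Pa)."""
    return viscosity_pa_s * shear_rate_s1

gamma_dot = 50.0      # 1/s, same for both samples (same screw speed) - placeholder
eta_pp_sw = 8_000.0   # Pa*s, less degraded sample, higher viscosity  - placeholder
eta_pp_a = 2_000.0    # Pa*s, more degraded sample, lower viscosity   - placeholder

print(shear_stress(eta_pp_sw, gamma_dot))  # larger stress -> stronger thermomechanical degradation
print(shear_stress(eta_pp_a, gamma_dot))   # smaller stress, partly offset by Norrish-type scission
```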
Conclusions
The properties of recycled polymers depend mainly on the molecular and chemical structure of the reclaimed post-consumer plastic articles. To the best of our knowledge, no specific papers have dealt with the effect of the chemical and molecular structure of the post-consumer plastics on the recycling operations and on the final properties of these polymers. In this work, a polypropylene sample photo-oxidized in two different environments, air and marine water, under the same conditions of temperature and UV irradiation, has been reprocessed in the same processing conditions in order to evaluate the effect of the level of the initial degradation, and thus of the previous life, on the thermomechanical degradation during reprocessing and on the final properties of the recycled material. The less degraded sample, PP-SW, having a larger molecular weight and viscosity, shows faster degradation kinetics and a larger level of thermomechanical degradation, because its higher viscosity gives rise to a higher stress on the melt that is able to break the macromolecules. However, the lower energy needed to break the bonds adjacent to oxygenated carbon groups in the more degraded sample implies an increase of the degradation kinetics of this more degraded sample. In short, the lower molecular weight decreases the thermomechanical degradation but, on the contrary, the presence of oxygenated groups increases, through the Norrish reactions, the thermomechanical degradation. Finally, while the degradation kinetics during processing seem similar for both samples, the absolute values of the viscosity and of the ultimate mechanical properties are, instead, very different because the same properties were very different in the aged samples before processing. In the case investigated here, the polymer aged in marine water shows better properties after the recycling processing than those shown by the polymer aged in air. This behavior allows us to consider, for example, that ocean-bound post-consumer plastics can be mechanically recycled in apparatuses and in reprocessing conditions similar to those used for the mechanical recycling of all the other post-consumer plastic articles.
The Development of a Design and Construction Process Protocol to Support the Home Modification Process Delivered by Occupational Therapists
Modifying the home environments of older people as they age in place is a well-established health and social care intervention. Using design and construction methods to redress any imbalance caused by the ageing process or disability within the home environment, occupational therapists are seen as the experts in this field of practice. However, the process used by occupational therapists when modifying home environments has been criticised for being disorganised and not founded on theoretical principles and concepts underpinning the profession. To address this issue, research was conducted to develop a design and construction process protocol specifically for home modifications. A three-stage approach was taken for the analysis of qualitative data generated from an online survey, completed by 135 occupational therapists in the UK. Using both the existing occupational therapy intervention process model and the design and construction process protocol as the theoretical frameworks, a 4-phase, 9-subphase design and construction process protocol for home modifications was developed. Overall, the study is innovative in developing the first process protocol for home modifications, potentially providing occupational therapists with a systematic and effective approach to the design and delivery of home modification services for older and disabled people.
Introduction
Current government policy within the UK [1] is encouraging the design and construction industry to build new mainstream housing that supports people to successfully age in place and to reduce the architectural barriers previous design standards have caused since the majority of older and disabled people live in homes that are not designed to meet their needs [2][3][4]. However, current policy recognises the social and economic bene ts of enabling older and disabled people to remain in their own homes by making it a statutory obligation [5,6] for the assessment and provision of social care services to achieve this. Home modi cations are one such service. Whilst home modi cations can involve the removal of hazardous features, such as worn rugs, or changing the behaviour in how activities of daily living are performed [7], home modi cation services in the UK focus on providing "structural changes to a person's home so they can continue to live and move, or be moved, safely" (p. 410) [8]. Occupational therapists make an important contribution to the home modi cation process, as their professional skills in "problem solving, enablement, prevention and environmental adaptations" (p. 11) [9] are being used to help health and social care departments within local authorities deliver their legislative responsibilities for the assessment and provision of home modi cations for older and disabled people.
Despite the perceived positive role of the occupational therapist in this eld of practice [10] and the fact that home modi cations improve the health and well-being of older people [11][12][13], evidence suggests that some home modications fail to meet the client's needs [14][15][16] and expectations [10] and that failing to involve the client (who is usually the older person but may also be the caregiver or relative) in the decision-making process is a further cause of dissatisfaction [17,18]. Questions have also been raised about the complexity and coordination of the home modi cation process because of the number of agencies and professionals involved [19][20][21], with the use of the analogy of a "patchwork of services," which are relatively "unplanned and uncoordinated" in nature (p. 4) [20].
It is further suggested that people's experience of the process and satisfaction with the home modi cation would improve if occupational therapists had a greater understanding of their role [20,22,23], and the lack of available guidance and standardised assessment tools is seen as a contributing factor [16,21,22,24]. is issue is further exacerbated by a lack of design and construction knowledge [8,20,25], leading to occupational therapists making the assumption that the modi cation process is simple [26]. Interestingly, evidence suggests that occupational therapists want a more standardised approach to the whole modi cation process [21] and that the profession should consider ways to amalgamate the occupational therapy process into the wider design and construction process [22,23,27]. us, given that occupational therapists use the principles of design and construction in interventions involving modifying the home environment in their everyday practice, the aim of this study was to develop an occupational therapy design and construction protocol for modifying home environments.
Learning from the Design and Construction Industry
Interestingly, in the 1990s, the UK design and construction industry faced similar criticisms to those discussed above, and three key factors were identi ed [28]. e rst factor is the di cult nature of coordinating a building project requiring the careful planning, management, and coordination of a number of phases and subphases [28] and coordinating a large number of highly specialised professional groups who do not typically work alongside each other and only have a broad understanding of each other's role [29]. e second factor is the ow of information through the various sequential phases of the process [28] such that it was seen as important that each professional group understood the value of information they produced to the other professionals involved in the project and that they were aware of what information needed to ow through to the next phase and also the timing of their information such that subsequent phases were not delayed [30]. irdly, the involvement of end users was identi ed, thus ensuring that information necessary to design and construct a building to meet their needs and requirements was appropriately captured throughout the project [31,32]. ese criticisms led to the development of the generic design and construction process protocol (GDCPP) [33]. In describing the process, Cooper et al. [34] explain that the GDCPP breaks down the design and construction process into four phases, and within each phase, there are subphases; each phase and each subphase are associated with speci c actions, and these actions are linked to di erent elements of design and construction. Each phase should be complete before moving on to the next phase. Whilst there have been no longitudinal follow-up studies investigating the long-term bene ts gained from using the GDCPP, it is reported [35] that the case study sites involved in the original research continued to use the GDCPP after the formal research project was concluded.
The Need for an Occupational Therapy Design and Construction Process Protocol
When providing interventions, the College of Occupational Therapy states that "any advice or intervention provided should be based upon the most recent evidence available, best practice, or local/national guidelines and protocol" (p. 17) [36]. The occupational therapy profession has a number of generic process frameworks [37][38][39], and as with the design and construction industry, these processes help occupational therapists to structure the evaluation, diagnosis, treatment, and reevaluation phases of therapy. However, the occupational therapy process is generic and applied to the full range of interventions, such that there is no published process which makes visible the process required for housing modifications.
This should be a concern for the profession, as practitioners have an ethical and professional requirement to make visible their practice such that they can demonstrate that the interventions they are providing are effective and that the person receiving the intervention is able to understand and consent to all aspects of the treatment that they are receiving [40,41]. The assessment for, and the identification of, what home modifications are required is a complex part of occupational therapy practice, and practitioners use conceptual models as "an organising tool" to help structure and "make sense" of this process (p. 57) [42]. There is general agreement in the literature [42][43][44] that the Person Environment Occupation (PEO) models are the most relevant conceptual model to practitioners in this field of practice. However, there has been criticism that the traditional PEO models [45][46][47] do not fully capture the concepts occupational therapists require to guide effective home modification practice [48]. The Occupational Therapy Intervention Process Model (OTIPM) [38] is used in the research reported here, and as such, these criticisms are addressed in three key ways. Firstly, the OTIPM [38] uses terms associated with the built environment literature, such as "required space," "required tools," and "required actions," and similar terms used in the built environment [49] when describing the space, equipment, and objects people use to perform an activity. Secondly, unlike other PEO models [49], the OTIPM separately operationalises the process for delivering interventions. Thirdly, as with the GDCPP [34], the OTIPM [38] encourages occupational therapists not to proceed to the next phase of the process until they have all the necessary information to continue, thereby reducing the risk of planning ineffective interventions.
Despite the professional [41] and ethical requirements [40] to make visible the core reasoning skills and process used within occupational therapy professional practice within the UK, there are concerns [50,51] that very few research studies have evaluated or attempted to describe the home modification process and make visible the practice involved. Protocols have been used successfully to improve the interventions provided by occupational therapists, for example, to improve the clinical reasoning of novice practitioners using a specific assessment to identify appropriate interventions to reduce upper limb hypertonia [52]. The purpose of this study, therefore, is to develop an occupational therapy design and construction process protocol specifically for home modifications, because protocols ". . . help clinicians focus on what is important, specify intervention procedures, delineate the theoretical rationale behind treatment, and contribute to the evolution of the intervention by explicating the reasoning process necessary to solve clinical dilemmas" (p. 712) [53].
Methodology
A survey strategy [54] was used for this study so that the home modification processes used by occupational therapists could be understood by analysing the situation in which occupational therapists undertake the process of modifying the home environment. The specific technique used to collect the survey data was an online questionnaire, as this approach provides an effective method of generating knowledge and the most efficient way of delivering the survey to a larger sample of respondents [54]. The questionnaires were designed to include both open and closed questions, capturing quantitative data about respondent attitudes and experience of the home modification process and qualitative data to capture fact-based information.

Respondents were asked to consider their answers in relation to bathroom modifications, as they are the most common modification [55]. A pilot study involving five experienced occupational therapists was conducted [56] to ensure the validity and reliability of the data generated, as well as ensuring that the questions could be understood by the respondents.

For the main study, purposeful sampling was chosen as an effective way to identify a sample of respondents with specific attributes necessary to generate data [57]. Inclusion criteria, alongside the rationale, are presented in Table 1. The online questionnaire was advertised through the UK College of Occupational Therapy monthly e-newsletter to all members (approximately 250 members) of the specialist section for housing. Whilst 232 questionnaires were received, only 135 met the inclusion criteria. Reasons for exclusion included the following: (1) the respondent had retired from practice; (2) the respondent worked outside of the UK; (3) the respondent was not a qualified occupational therapist; (4) the respondent's main role no longer involved using home modifications as an intervention.

Data analysis involved three separate stages. Firstly, a directed content analysis technique was used. Directed content analysis is a useful form of thematic analysis when validating or extending a conceptual theoretical framework, such as the occupational therapy process [58]. The OTIPM [38] acted as a theoretical framework to analyse the data. Data generated from the question "describe your role in the process of designing a bathroom modification" were downloaded into NVivo 10. Using the software, each statement from individual respondents was read and reread. Once familiar with the range of statements, the initial coding of the data involved separating the response statements into individual activities or actions performed by the respondents in their role and matching responses to one of the three phases of the OTIPM [38]. These three phases of the OTIPM [38] became the separate themes for this step of the data analysis. When using a directed content analysis, [59] states that it is important to "remember to stay grounded in the data and remain open to the possibility that, ultimately, the data and the framework may be incompatible" [59]. Therefore, codes not matched to one of the three themes were reviewed.

The second stage of the data analysis involved conceptualising the activities and actions of the respondents during the main phases of the occupational therapy process as a home modification process. NVivo 10 software was used to produce four separate code books. Each book represented one of the themes identified from Step 1 of the directed content analysis and contained the data coded under each theme. Once familiar with the content of each book, activities and actions in each code book were matched with similar actions and activities in each of the 10 subphases of the GDCPP [33]. As with the previous stage of analysis, thematic codes not matched to the subphases were reviewed at the end of the process. The outcome of this stage of the analysis was a 4-phase, 10-subphase process used by the occupational therapist to design and construct home modifications.
A third stage of the analysis was required to create an embryonic home modification process protocol framework. An iterative approach was required to generate the protocol, and a brief description of this process is given below. A framework was developed; along the top of the framework, the headings were taken from the 4 phases and 10 subphases of the occupational therapy design and construction process. Running down the far left-hand side were the following principles taken from the GDCPP [33]: (i) description of the phase; (ii) key question; (iii) action needed at each phase; (iv) outcome of the phase. Then, using the actions and activities described by respondents in the code books generated at the second stage of the data analysis, the framework was populated. Gaps in the framework were populated by referring to An Occupational Therapist's Guide to Home Modification Practice [60] and the researcher's knowledge of this field of practice. To improve the trustworthiness of the data included in the framework, the principal researcher was challenged by 2 researchers not involved in this stage of the data analysis, and adjustments were made accordingly.

Table 1: Respondent inclusion criteria.
Inclusion criteria | Rationale for criteria
Occupational therapy | The study is interested in occupational therapy and the use of home modifications
Involved in using home modifications as an intervention | For respondents to be able to comment on the home modification process, they need to have relevant knowledge of using this as an intervention
UK-based | Different countries use different terms for describing concepts within occupational therapy, so UK knowledge was important
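Purely as an illustration of the framework skeleton described in the third stage of the analysis above (the subphase labels below are paraphrased from the subphase descriptions later in this paper, the row headings are the four GDCPP principles listed above, and the empty strings are placeholders to be populated from the code books), the structure could be represented in code as follows:

```python
# Hypothetical skeleton of the home modification process protocol framework.
principles = [
    "Description of the phase",
    "Key question",
    "Action needed at each phase",
    "Outcome of the phase",
]

subphases = [
    "Subphase 0: Establish the need for occupational therapy involvement",
    "Subphase 1: Identify the occupation(s) impacting on health and well-being",
    "Subphase 2: Conduct an occupational performance analysis (PET requirements)",
    "Subphase 3: Develop collaborative occupation-focused goals",
    "Subphase 4: Conduct a substantive feasibility study (including funding route)",
    "Subphase 5: Obtain agreement on the full detailed design",
    "Subphase 6: Coordinate and support procurement",
    "Subphase 7: Construct the occupation-focused home modification",
    "Subphase 8: Check operation and maintenance of the modification",
]

# Each cell starts empty and is filled iteratively from the coded survey responses.
framework = {subphase: {principle: "" for principle in principles} for subphase in subphases}
```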
4.1. Step 1 Findings

During the thematic analysis, it became evident that an additional phase not captured by the OTIPM [38] existed within the codes. This additional phase occurred between the assessment and goal-setting phase and the intervention phase, because the respondents performed a number of actions or tasks that were not associated with the initial assessment of occupational need and the setting of goals for the intervention, nor were they related to the intervention itself. Instead, respondents performed a series of activities associated with planning the intervention; thus, the term "intervention planning phase" was developed to code these responses into a theme.

As an intervention, the home modification is not installed by the occupational therapist; however, from the responses, it was evident that a number of occupational therapy practitioners were involved in supporting the installation of the modification. Firstly, their support appeared to be essential for ensuring the health and safety of the person, for example, making the builder aware of any medical conditions which could be exacerbated by the construction methods being used to install the modification, for instance, dust exacerbating the person's respiratory condition. Secondly, some of the respondents (n = 13) indicated that they were involved in giving advice on the position of equipment or in purchasing specialist equipment to be installed as part of the modification. Thirdly, some respondents (n = 9) indicated that they had a role in providing the person with emotional support during the installation or acted as an intermediary if issues arose between the person and the builder. Therefore, using the term "intervention implementation" makes clear that the intervention is not the final installed modification alone; it involves a series of activities the occupational therapist is involved with during the phase of installing the intervention. Table 2 presents examples of responses coded under each of the phases of the OTIPM.

Table 2: Example of responses for the main phases of the OTIPM [38].
Main phase of the OTIPM [38] | Direct quotes taken from different respondents
Assessment and goal setting | "Assessing with the person what their needs are in relation to home environment" (R2); "My role firstly involves an OT assessment which takes into account the goals of the individual as regards achieving the best bathroom facility for them and/or their care requirements" (R48); "Carry out an assessment of need, and if the assessed need results in the provision of a bathroom adaptation, would proceed to the next phase of the adaptation process" (R63)
Intervention planning | "I work with the client and technician to agree on the best possible layout to meet a person's long-term needs. This is a joint agreement with client OT, technician and builders all giving input. However, it is my role to advice on installations that may be beneficial and that the client is not aware of existing" (R3); "Following a functional assessment of needs, my role is to design and plan the layout and facilities in the bathroom to meet the individual's current to long-term needs" (R14); "Using a plan see if intended adaptation fits exploring options, i.e., shape dimensions how the client intends to use it" (R42)
Intervention implementation | "Remaining available through alterations, for site visits and answering questions as and when they arise" (R10); "Communicating any special needs (e.g. re dust inhalation) to surveyor/contractor" (R56); "Availability for consultation during the building work" (R72)
Reevaluation | "When work completed to ensure modifications are safe for client, that the work specified has been completed to a high standard and to ensure client completely happy. If not, to assist client to ensure all changes are made to ensure clients safety and ability to enjoy their new facility. Finally, there is a key role in evaluating the provision with the client and or care staff" (R6); "Visiting tenant once work completed to check suitability, demonstrate use of shower and other equipment and to check the adaptations meet the need" (R24)
4.2. Step 2 Findings

In Step 2 of the data analysis, NVivo 10 software was used to produce four separate code books. Each book represented one of the themes identified from Step 1 of the analysis and contained responses coded under each theme. Thematic analysis was initially attempted by looking for similarities between activities in the four main phases of the GDCPP [33]. However, it became apparent that the activities within the four main phases of the GDCPP [33] were not congruent with the activities within the four main phases of the OTIPM [38]. To overcome this issue, the activities were coded using the descriptions of the subphases of the GDCPP [33], looking for similarities in the responses in each of the four code books.

Using the abovementioned approach to the analysis, it became evident that two additional phases not captured by the GDCPP [33] existed in the responses. These two subphases occurred between subphases 1 and 2 of the GDCPP [33]. In these phases, respondents indicated a number of actions or tasks involved in analysing how the person was performing the activity in the existing environment, as well as professionally reasoning what the person required in the final design. The themes "conduct an occupational performance analysis to identify the person(s) PET requirements" and "develop occupational-focused home modification goals and PET based on the person's PET requirements" were developed to capture these codes. Similarly, there were three activities described in the GDCPP [33] where no similar activity could be found in the code books; thus, no data were coded under the following themes: (i) outline feasibility; (ii) outline conceptual design; (iii) production information. The findings of this analysis are presented in Table 3 with examples of responses.

Table 3: Examples of responses coded under the subphases of the home modification process.
Conduct an occupational performance analysis to identify the person(s) PET requirements | "Do an initial assessment of the person and their environment looking at their functional ability and/or the needs of their carer" (R46)
Develop collaborative goal(s) and identify person, environment, and task (PET) requirements for the home modification | "Following the assessment OT recommendations discussed with the person" (R72)
Conduct substantive feasibility study for achieving the PET requirement (including funding route) | "I work with the client and technician to agree on the best possible layout to meet a person's long-term needs. This is a joint agreement with client OT, technician and builders all giving input. However, it is my role to advice on installations that may be beneficial and that the client is not aware of existing" (R3)
Obtain agreement on the full detailed design of the home modification | "Approval from service user then written options proposal, specification and CAD diagrams" (R8)
Coordinate and support procurement of the occupation-focused home modification | "Referral to District Council or RSL for DFG/minor works funding" (R100)
Construct the occupation-focused home modification | "Once work is on site, deal with any queries regarding change of layout due to unforeseen problems" (R57)
Conduct site visit to check the operation and maintenance of the occupational-focused home modification | "When work completed to ensure modifications are safe for client, that the work specified has been completed to a high standard and to ensure client completely happy. If not, to assist client to ensure all changes are made to ensure clients safety and ability to enjoy their new facility" (R6)

To be able to compare the subphases of the GDCPP [33] and the subphases of the home modification process, the results are displayed in Table 4. The four main phases of the GDCPP [33] were differentiated by colour. By doing this, it became evident where the lack of congruence occurs between the four main phases of the GDCPP [33] and the four main phases of the home modification process. As the aim of this stage of the analysis was to conceptualise the occupational therapy practice as a design and construction process, it was necessary to resolve the issue with the lack of congruence between the four main phases, so that parallels between the four main phases of the GDCPP [33] and the OTIPM [38] could be visualised, as illustrated in Table 4.
The Development of the Home Modification Process Protocol
Step 3 involved the development of a single framework based on the GDCPP [33] and the OTIPM [38]. Across the top of the framework, the 9 subphases developed from Step 2 of the analysis of the data were used to label the headings of individual columns. Populating the framework with content was an iterative process. NVivo 10 software was used to create a code book for each individual subphase of the home modification process, with each book containing the written responses coded under each of the subphases. The GDCPP Book [33] and the OTIPM Manual [38] guided the development of the content for the description of each phase, the key questions needing to be asked at each subphase, and the outcome of each subphase. As such, the framework has nine subphases (0 to 8), and each of these is presented separately.
Subphase 0.
Subphase 0, shown in Table 5, has used the GDCPP principle that a prospective client may not want to proceed with a project following an initial discussion of their need with the building professional such that the purpose of this subphase is to gather data on what has prompted the person to contact the service and whether involvement from an occupational therapist will improve the person's health and well-being.
A further principle of the GDCPP [33] is that the project manager is aware of which professionals should be involved in the process and when. us, taking this concept and the OTIPM [38] concept of identifying who else is involved in the person's situation, subphase 0 gathers data on who the practitioner may need to involve in later subphases of the process.
Subphase 0 has also captured the OTIPM [38] concept of making the person aware of the limitations within the practitioner's eld of practice. It appeared to be important to Table 2: Example of responses for the main phases of the OTIPM [38].
Main phase of the OTIPM [38] Direct quote taken from di erent respondents Assessment and goal setting "Assessing with the person what their needs are in relation to home environment" (R2) "My role rstly involves an OT assessment which takes into account the goals of the individual as regards achieving the best bathroom facility for them and/or their care requirements" (R48) "Carry out an assessment of need, and if the assessed need results in the provision of a bathroom adaptation, would proceed to the next phase of the adaptation process" (R63) Intervention planning "I work with the client and technician to agree on the best possible layout to meet a person's long-term needs. is is a joint agreement with client OT, technician and builders all giving input. However, it is my role to advice on installations that may be bene cial and that the client is not aware of existing" (R3) "Following a functional assessment of needs, my role is to design and plan the layout and facilities in the bathroom to meet the individual's current to long-term needs" (R14) "Using a plan see if intended adaptation ts exploring options, i.e., shape dimensions how the client intends to use it" (R42) Intervention implementation "Remaining available through alterations, for site visits and answering questions as and when they arise" (R10) "Communicating any special needs (e.g. re dust inhalation) to surveyor/contractor" (R56) "Availability for consultation during the building work" (R72) Reevaluation "When work completed to ensure modi cations are safe for client, that the work speci ed has been completed to a high standard and to ensure client completely happy. If not, to assist client to ensure all changes are made to ensure clients safety and ability to enjoy their new facility. Finally, there is a key role in evaluating the provision with the client and or care sta " (R6) "Visiting tenant once work completed to check suitability, demonstrate use of shower and other equipment and to check the adaptations meet the need" (R24) Conduct an occupational performance analysis to identify the person(s) PET requirements "Do an initial assessment of the person and their environment looking at their functional ability and/or the needs of their carer" (R46) Develop collaborative goal(s) and identify person, environment, and task (PET) requirements for the home modi cation "Following the assessment OT recommendations discussed with the person" (R72) Conduct substantive feasibility study for achieving the PET requirement (including funding route) "I work with the client and technician to agree on the best possible layout to meet a person's long-term needs. is is a joint agreement with client OT, technician and builders all giving input. 
However, it is my role to advice on installations that may be bene cial and that the client is not aware of existing" (R3) Obtain agreement on the full detailed design of the home modi cation "Approval from service user then written options proposal, speci cation and CAD diagrams" (R8) Coordinate and support procurement of the occupation-focused home modi cation "Referral to District Council or RSL for DFG/minor works funding" (R100) Construct the occupation-focused home modi cation "Once work is on site, deal with any queries regarding change of layout due to unforeseen problems" (R57) Conduct site visit to check the operation and maintenance of the occupational-focused home modi cation "When work completed to ensure modi cations are safe for client, that the work speci ed has been completed to a high standard and to ensure client completely happy. If not, to assist client to ensure all changes are made to ensure clients safety and ability to enjoy their new facility" (R6) ask this question at this phase, given the theme in the literature and the data gathered from respondents, on the in uence departmental policies and resources have on the role of the practitioner. As the GDCPP [33] is concerned with ensuring that all information is available to support the next phase of the process, the outcome subphase 0 also ensures that the practitioner has all relevant information for the next phase, in particular that the person has given consent. As consent to an assessment is an ethical and professional requirement, it appeared appropriate to include it in this phase so that when the person is rst visited, they have already consented to a visit and the start of the assessment process. Table 6, captures the values the OTIPM [33] places on collaborative practice through the occupational therapy process such that the person, in collaboration with the practitioner, identi es the occupation(s) impacting upon their health and wellbeing.
Subphase 1. Subphase 1, shown in Table 6, captures the values the OTIPM [33] places on collaborative practice through the occupational therapy process such that the person, in collaboration with the practitioner, identifies the occupation(s) impacting upon their health and wellbeing.
Since the literature was critical of occupational therapists focusing on safety and function and identifying the need based on eligibility criteria, the outcome of subphase 1 assists the practitioner to identify what occupation they need to observe in the next subphase of the process.
This reflects ethical practice, as the person is not arbitrarily made to perform unnecessary activities based on home-grown assessments designed to focus on safety and independence or what can or cannot be funded by the practice setting. Instead, the influence of funding arrangements is considered in subphase 4 and the feasibility study. Similarly, as the practitioner builds a collaborative relationship with the person and new data provide insights into the person's situation, subphase 1 ensures that due consideration is given to the appropriateness of the intervention in providing the person with the appropriate solution to improve their health and well-being.
Subphase 2. Subphase 2, shown in Table 7, has been influenced by the OTIPM [38] description of how practitioners should analyse occupational performance and participation, since it is recommended that the practitioner should initially observe the person performing or participating in the occupation, identifying the strengths and weaknesses in the person's performance. Once the practitioner has these data, the OTIPM [38] describes how the practitioner can then analyse the cause of the problem based on the transaction of the person, environment, and task.
This is a two-pronged approach to analysing performance and participation because it prevents the occupational therapist making assumptions about the cause of the problem. The conceptual model developed as part of the OTIPM [38] guides the type of person, environment, and occupation data the practitioner needs to collect. It should be noted that the OTIPM [38] uses the term "task" and not "occupation" in the conceptual model, thereby acknowledging that a practitioner does not objectively observe an occupation; they observe the task part of the transaction between the person and the environment. This is because only the person can experience an occupation, since it only has meaning and value to them.
Subphase 3. Goals are an important part of the occupational therapy process since they provide the benchmark on which the occupational therapist and person establish if the intervention has been successful. Thus, the purpose of subphase 3, shown in Table 8, is to identify those goals. Given that one of the principles of the GDCPP [33] is to collect data relevant for the success of later subphases, subphase 3 makes the distinction as to how the modification is improving health and well-being and whether it is being designed to restore, maintain, or acquire performance/participation in the person's occupation. Thus, this question prompts the practitioner to consider what impact this decision would have on the final subphase of the process.
Subphase 4. The purpose of subphase 4, shown in Table 9, is to conduct a feasibility study to identify how the home can be modified to improve the person's performance or participation in the occupation, for which it was necessary to ensure that the protocol could accommodate a range of regional, policy, and regulatory differences between practice settings. To achieve this, the principles of the GDCPP [33] were used to develop the question of how contextual issues within the practice setting will influence the choice of design. Similarly, it was important to ensure that design decisions were made explicit to the person and documented, thus overcoming the difficulty of people not always being aware as to why certain decisions have been made.
Subphase 5. The development of the content from subphase 5, shown in Table 10, arose from the professional and ethical requirement of practitioners needing to ensure that the person has a full understanding of the intervention so that they are able to give informed consent to proceed with the intervention, and the questions make overt the need for the person to have a full understanding of the design before giving informed consent to proceed with the intervention.
One of the principles of the GDCPP [33] is that it provides an audit trail of the reason why decisions were made at particular subphases of the process.
Thus, subphase 5 enables the occupational therapist and person to be accountable for the decisions made during the process, and it makes the information readily available if the outcomes of this subphase, or other subphases, are called into question.

Table 8 (subphase 3 of the home modification process protocol), continued:
… participate in a new occupation?
Identify, with the person(s), how the abovementioned approach will impact on the evaluation phases
Identify the specific "person factors/body functions" design requirements
Identify the specific "environmental" design requirements
Identify the specific "task" design requirements
Identify any occupation(s) that cannot be addressed through an occupation-focused home modification
Outcomes
Person(s) has collaborated on the goals of the home modification
Goals for home modification documented
PET design requirements to achieve the goal(s) documented
Reablement, rehabilitation, and/or training requirements following the completion of the home modification documented

Table 9: Subphase 4 of home modification process protocol.
Intervention planning phase, Subphase 4
Description: Conduct a substantive feasibility study for achieving the PET requirements (including funding route)
Key questions
What design options are there for meeting the PET requirements?
What other factors in the person's occupational context will affect the choice of design solutions?
Does the design proposal meet the PET requirements outlined in subphase 3?
Should a home modification approach be taken?
Actions
Identify that the design has addressed all the requirements identified in subphase 3
Identify that the design meets any other occupational performance context requirements
Identify any practice setting contextual issues that will influence the person(s) choice of design solution
Identify any potential built environment issues, in the existing space, that will impact on the PET requirements being accommodated
Identify funding requirements for the home modification
Outcomes
Professional reasoning on the modification design solution process
Document any issues related to the practice setting or built environment that prevent the optimum design solution being provided
The specification related to space, space layout, and tools documented

Table 10 (subphase 5 of the home modification process protocol):
Key questions
Does the full detailed design provide the solution to address the occupational performance requirements of the person?
Do the detailed design plans and specifications provide the person with the information they need to give informed consent?
Should a home modification approach be taken?
Actions
Ensure that the person(s) understands how the design solution addresses their occupational performance requirements
Identify how any unmet requirements will impact on the occupational performance of the modification
Confirm that the person(s) agrees to proceed with the design solution
Outcomes
Informed consent documented

Subphase 6. As with subphase 5, it was necessary to allow the questions to reflect the different ways modifications are funded and for the building professionals to have appropriate information to help them understand why the specific layout and requirements contained in the design plan are important in achieving the person's goals. Therefore, subphase 6, Table 11, places a duty on the occupational therapist to provide this information, thereby improving communication. Also, at subphase 6, the occupational therapist is no longer given the option to consider if a home modification approach should be taken, because issues that could make a home modification inappropriate would have been identified by the person and occupational therapist earlier in the process.
Subphase 7.
By using the principles of the GDCPP [33], subphase 7, shown in Table 12, reflects the tasks identified by respondents in the questionnaire, where their involvement was required to ensure that the person and builder were both supported during the physical construction phase of the modification. Subphase 7 also ensures that the practitioner provides any specialist equipment that is required once the modification is installed and which could prevent the final modification from being used immediately by the person if not provided.

Table 11 (subphase 6 of the home modification process protocol):
Description: Coordinate and support procurement of the occupation-focused home modification
Key questions
What information and action are required to procure the home modification?
Has all the information been obtained for the builder/contractor/others to construct the home modification?
Actions
Identify and communicate information required for the procurement of the home modification
Identify and communicate the information required for the builder/contractor/others to proceed with the construction of the home modification
Identify and communicate what ongoing support will be required of the occupational therapist/service during the construction phase
Outcomes
Funding application/support completed
Plans, specifications, product information, and health and safety information provided to the builder and/or those involved in construction of the modification
Agree with person and builder the support being provided by the occupational therapist during construction

Table 12 (subphase 7 of the home modification process protocol):
Key question
Is the appropriate support being provided to the person(s) and building professional during the construction phase of the home modification?
Actions
Provide ongoing support during the construction of the home modification
Provide and/or supply tools not part of the construction process
Provide advice on final positioning of tools
Outcomes
Modification completed

Subphase 8. Subphase 8, shown in Table 13, is an important part of the occupational therapy design and construction process. The content of subphase 8 was influenced by the requirement a number of respondents identified in ensuring that the standard of workmanship met the standards expected from the housing authority. In the GDCPP [33], the final subphase ensures that the building is handed over ensuring that the end users have an understanding of how the building operates and needs to be maintained; thus, this section ensures that the person has an understanding of how to use and maintain the completed modification. In line with the OTIPM [38] and the occupational therapy process in general, the questions and outcomes of subphase 8 reflect the need to evaluate whether the goals identified in the earlier subphases have been achieved. Also, subphase 8 provides an opportunity for the occupational therapist to reflect on their practice.
Discussion
As a problem-solving profession, the occupational therapy process provides the logical route that the practitioner should follow in order to provide effective interventions [61], such that practitioners are able to operationalise their professional practice [62]. From the findings of Step 1 of the data analysis, it appears that the occupational therapy process was assisting respondents to articulate their role in home modifications. For example, the quotes from R6 and R56, presented in "Findings" (although their answers differed considerably in terms of the detail provided by each respondent), still provide evidence of assessment, goal setting, and intervention phases, and, in the case of R6, an evaluation phase. The thematic analysis also raised a theoretical challenge about what constitutes an intervention. The intervention has traditionally been viewed as the completed home modification [8,63]. However, it is the skills and knowledge of the occupational therapist during all aspects of the occupational therapy process that are essential in the final design and performance of the modification, and this raises the question as to whether the occupational therapy profession should place greater emphasis on the process being the intervention rather than the completed modification. Indeed, if the process becomes the intervention, then it would be more evident what the intervention is and what training is required to gain the skills to carry out the intervention. Developing outcome measures that evaluate the process as the intervention also allows practitioners to identify which phases of the intervention were more or less effective and how the process has contributed to the person's health and well-being.
It has been possible to use the OTIPM [38] and GDCPP [33] to describe the occupational therapy process used by respondents in this area of practice. However, the outcome of this does not reflect the actual practice described by respondents, and it appears to differ in one important way, namely, the way respondents combine departmental processes with the occupational therapy process. As an example, it can be seen that respondent R29 uses both phrases that are associated with the occupational therapy process (words in red) and phrases that seem to suggest the influence of the systems, structures, and policies within the respondent's practice setting (words in blue).
As an OT I complete an overview assessment with the service user in their home environment to identify their needs. To address these assessed needs (…)
The actions of respondent R29 may not directly lead to a poorly designed modification, but previous findings [64-66] have noted how departmental policies enacted by occupational therapists have been associated with dissatisfaction with the modification. Thus, this finding raises the question as to whether practitioners are aware of how departmental structures and guidance influence their professional practice and the design options presented to the person. Again, this is an important question to answer, given the professional and ethical responsibility professionals have in ensuring that the intervention they provide has been fully explained and explored with the person, so the occupational therapist needs to be able to describe to the person how the intervention they are providing is being influenced by the practice setting. Another important finding from the second stage of the analysis was the use of the term "assessment of need", in which respondents used their professional reasoning skills to identify occupations (activities) the person is having difficulty performing or participating in, identify and analyse why the person is having difficulty, and analyse and identify whether a home modification will address the occupational need. From the data collected, it is not possible to establish whether in everyday practice respondents make a distinction between the different types of professional reasoning necessary to support each aspect involved in the "assessment of need", and what the consequence might be if they do not make the distinction. However, given that one principle of the GDCPP [33] is to ensure that, where possible, a subphase does not progress to the next phase until the outcome of the previous phase is achieved, the research suggests that occupational therapists are prematurely progressing through the process without all relevant data being collected and without analysing how this might impact on the subsequent phases. If this is the case, then a process protocol for home modifications may reduce the risk of this occurring.
Conclusion
The purpose of the study was to develop a home modification process protocol by conceptualising the occupational therapy practice involved in home modifications as a design and construction process, and a number of conclusions can be drawn. Firstly, with data from the questionnaire and guided by the OTIPM [38], it was possible to both visualise and describe this process. Whilst interventions involving home modifications can be described through the occupational therapy process, it was interesting to note that practitioners have an important role in planning the design of the intervention. Furthermore, the term "intervention implementation" better describes the involvement of the occupational therapist, as they are not directly responsible for the installation of the intervention themselves. Thus, the term "intervention implementation" acknowledges that installing a home modification is a dynamic process and one that the practitioner works with building professionals to achieve.
Secondly, by using the occupational therapy process for home modifications, it was then possible to use the GDCPP [33] to conceptualise the home modification process as four main phases based on the OTIPM [38] and nine subphases based on the GDCPP [33]. Thirdly, using the principles of the GDCPP [33], it was possible to create a framework for the protocol, and by using an iterative process it was possible to populate the content of this framework, which then became the home modification process protocol. This iterative process was an important part of developing the protocol because it allowed the content to be developed on the basis of a conceptual model of practice and for issues identified in the literature to be addressed. Thus, the home modification process protocol should potentially (1) provide a systematic approach to the process of modifying the home; (2) ensure that ethical and professional practice is followed by enabling occupational therapists to verbalise and visualise their role in the process; (3) reduce the complexity of the current process by identifying the key questions, actions, and outcomes of each phase; (4) improve the effectiveness and efficiency of practice by ensuring that practitioners collect the right information, at the right time; (5) ensure that the person has choice and control through their involvement in all phases of the process; (6) guide professional reasoning based on a conceptual model of practice; (7) ensure consistency of occupational therapy practice by accommodating regional, legislative, and regulatory differences between practice settings; and (8) ensure that financial constraints and other contextual issues within practice become a design consideration and not a barrier to accessing funding for a modification.
Whilst home modifications have been a traditional area of practice for occupational therapists, the home modification process protocol is the first time this practice has been described as an occupational therapy design and construction process.
Through the development of the protocol, there is the potential to address the professional [50,51] and ethical need [40,41] for practitioners to better understand the intervention they are providing and to be able to express their role in the design and construction of a home modification.
Importantly, this study has also raised the question as to what constitutes the "intervention" within home modification practice. In the literature, the intervention appears to be the installed modification, and outcome measures designed to evaluate the intervention tend to focus on how the installed modification has improved the person's performance in the occupation. However, the findings from this research have shown that each phase of the protocol is important because the outcomes from each phase can ultimately influence the final performance of, and satisfaction with, the modification. Therefore, this raises the question as to whether the home modification process is what practitioners should be defining as their intervention.
Crucially, the necessary skills and knowledge to design and construct a home modification are not taught in detail or depth at undergraduate level within occupational therapy education. Once qualified, there are training opportunities for practitioners, but these tend to be based on the knowledge and skills required to design a particular type of modification or to design a modification for a particular health condition or disability. Building the necessary knowledge of the design and construction process should therefore be reviewed within undergraduate education.
Finally, there is a need to consider how the home modification process protocol could be implemented beyond England, which was the boundary of the research reported here. Home modification is a complex area of practice, and there is a need to find ways to implement systematic assessment, intervention, and evaluation strategies within occupational therapy practice [67]. The challenge for further research is that it is difficult for the process to be standardised, as each country provides and funds home modifications in different ways, and design standards and regulations also differ between countries [68].
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Risk prediction model for deep surgical site infection (DSSI) following open reduction and internal fixation of displaced intra‐articular calcaneal fracture
Abstract Deep surgical site infection (DSSI) is a serious complication affecting the surgical outcome of displaced intra‐articular calcaneal fracture, and a risk prediction model based on identifiable risk factors would provide great clinical value for prevention and prompt intervention. This study retrospectively identified patients operated on for calcaneal fracture between January 2014 and December 2019, with a follow‐up of ≥1 year. The data were extracted from electronic medical records with regard to demographics, comorbidities, injury, surgery and laboratory biomarkers at admission. Univariate and multivariate logistic regression analyses were used to identify the independent factors for DSSI, from which the risk prediction model was developed. Among 900 patients included, 2.7% developed a DSSI. The multivariate analyses identified five factors independently associated with DSSI: current smoking (OR, 2.8; 95% confidence interval [CI], 1.3‐6.4; P = .021), BMI ≥ 26.4 kg/m2 (OR, 3.1; 95% CI, 1.6‐8.4; P = .003), ASA ≥II (OR, 1.3; 95% CI, 1.0‐5.1; P = .043), incision level of II (OR, 3.8; 95% CI, 1.3‐12.6; P = .018) and NLR ≥6.4 (OR, 3.2; 95% CI, 1.3‐7.5; P = .008). A score of 14, the optimal cut‐off value, corresponded to a sensitivity of 0.542 and a specificity of 0.872 (area, 0.766; P < .001); a score of ≥14 was associated with an 8.1‐times increased risk of DSSI, a score of 7 corresponded to a sensitivity of 100% and a score of 10 to a sensitivity of 0.875. The risk prediction model exhibited excellent performance in distinguishing the risk of DSSI and could be considered in practice to improve wound management, but its validity requires verification in better‐designed studies.
• a risk prediction model is constructed based on five identified factors and exhibits excellent performance
• patients with a score of ≥14 had a very high risk of DSSI and should be considered a high-risk population
• a score of 7 or 10 had a high sensitivity, ranging from 87.5% to 100%, hence providing value in evaluating the risk of DSSI
| INTRODUCTION
Displaced intra-articular calcaneal fracture (DIACF) is prevalent in orthopaedic trauma departments, accounting for approximately 2% of all fractures and approximately 30% of foot and ankle fractures. 1,2 To date, open reduction and internal fixation (ORIF) remains the standard operative treatment, but it is not without drawbacks. The first concern is the high rate of wound-related complications, especially deep surgical site infection (DSSI), which affects 2.9% to 15.4% of surgically treated patients. [3][4][5] DSSI is associated with a substantial health care burden from repeated wound debridement, revision or implant removal, and with social consequences due to occasional amputation or the sequela of limb dysfunction. 6,7 In practice, the importance of preventing DSSI cannot be overemphasised. This is especially true for high-risk patients, who, in other words, may gain the most benefit from targeted preventive interventions. More importantly, identifying high-risk patients via established risk factors and targeting them with interventions accordingly is the most cost-effective approach. In fact, identifying risk factors (especially the modifiable or controllable factors) for the development of a DSSI has been a consistent aim of clinical investigations. 4,[8][9][10][11] However, due to heterogeneity in institutional policy (operating room availability, operation schedule reasonability, prophylactic antibiotic use, wound care), surgeon preference (wound drainage or type of prophylactic antibiotics), patients' conditions (comorbidities, obesity, smoking, non-compliance) and surgery-related factors (inadequate surgical skills, surgical approach), the results available from these studies might not be generalizable. In addition, the relatively small sample sizes and the limited number of variables included for adjustment made it difficult to obtain conclusive results, and possibly there remain factors that have never been investigated or noticed.
In this study, we used a large sample of calcaneal fractures surgically treated by ORIF in a tertiary referral institution to address the following aims: (1) to investigate the incidence rate of DSSI; (2) to identify the factors independently associated with DSSI; and (3) on the basis of these factors, if any, to develop a risk model for predicting DSSI and to evaluate its predictive power.
| Inclusion and exclusion criteria
This study retrospectively identified patients aged 18 years or older who underwent surgical treatment of acute closed calcaneal fractures by ORIF in our institution between January 2014 and December 2019, with a postoperative follow-up period of at least 12 months. Patients with complete follow-up data were deemed eligible for inclusion. The exclusion criteria were age less than 18 years, open calcaneal fracture, bilateral calcaneal fracture, pathological fractures, old fractures (>21 days from initial injury), multiple fractures, polytrauma, treatments other than ORIF (conservative treatment, closed reduction, percutaneous fixation, external fixation, etc.), presence of infections or signs of infection before the index fracture operation, wound issues other than DSSIs (superficial infection, wound edge necrosis, erythema), missing data for any variable of interest or incomplete follow-up data. This study was approved by the local ethics committee, which waived the need for informed consent.
| Perioperative management
For the ORIF procedure via the extended lateral approach or sinus tarsi approach, 1 to 2 g of cefazolin, dosed on the basis of weight, was administered to all patients within 30 minutes before skin incision, and if the procedure was predicted to last over 3 hours, another dose was given. Within 24 hours after wound closure, prophylactic use of 1 to 2 g of cefazolin was routinely given, and for patients at high risk of infection, the period could be appropriately extended. Pneumatic tourniquets, bone-grafting and postoperative drainage use were left to the discretion of the treating surgeon. Postoperatively, all patients were instructed to follow the same protocol for wound care and physical exercises.
| Data collection
Two investigators (K.L and T.M) independently extracted the data from the inpatient electronic medical records and documented them using the EpiData software (version 3.1, The EpiData Association, Odense, Denmark), and any discrepancies were resolved by consensus. Then, these data were exported into the Excel worksheet (Office version 2016) for the purpose of statistical description and analysis.
The collected data were demographics (age, sex, body weight, height, occupation, education level), lifestyles (current smoking status, alcohol consumption), comorbidities or conditions (hypertension, diabetes mellitus, chronic heart disease), injury-related variables (injury mechanism and fracture type based on Sanders' classification), surgery-related variables (time from injury to surgery, surgical approach, surgical duration, blood loss, allogeneic blood transfusion, anaesthesia type, American Society of Anesthesiologists [ASA] grade, bone-grafting, postoperative drainage use), and laboratory indexes measured at admission (counts of white blood cells, neutrophils, lymphocytes, red blood cells and platelets, and levels of plasma albumin, total protein, haemoglobin and fasting blood glucose). We also calculated the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR), and investigated whether they are related to the development of DSSI; both have been shown to be associated with multiple adverse outcomes (venous thromboembolism, infection, mortality) across a wide range of specialties (trauma, cancer, cardio-cerebrovascular disease). 8,10,[12][13][14] Occupation was categorised as retirement, office work, manual work and others (students, unemployment). Based on years of attainment, education level was categorised as illiteracy, <6, 6 to 12 and >12 years. Body mass index (BMI) was calculated by dividing the weight in kilograms by the square of the height in metres, was divided into non-obesity (<28 kg/m2) and obesity (≥28 kg/m2) based on the criteria for Chinese people, and was further dichotomised according to the cut-off value determined by the algorithm. Current smoking or alcohol consumption was defined as patients' self-reported smoking activity or drinking of wine, beer or any other alcoholic beverage within 12 months of the index operation. Considering the clinical relevance for exploratory analysis, the laboratory indexes were divided into two or three categories, as appropriate. Injury mechanism was dichotomised as high-impact trauma (e.g. traffic accidents, fall from height ≥1 m, and other violent injuries) and low- to medium-impact trauma (fall from height <1 m or standing height).
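For clarity, a minimal sketch of how these derived indexes (BMI, NLR, PLR) could be computed from a patient record is given below; the function and field names are illustrative and are not taken from the study's actual data pipeline.

```python
# Minimal sketch of the derived variables described above (BMI, NLR, PLR);
# the field names and the example values are hypothetical.
def derived_indexes(weight_kg, height_m, neutrophil, lymphocyte, platelet):
    bmi = weight_kg / height_m ** 2          # kg/m^2: weight divided by height squared
    nlr = neutrophil / lymphocyte            # neutrophil-to-lymphocyte ratio
    plr = platelet / lymphocyte              # platelet-to-lymphocyte ratio
    obesity = bmi >= 28.0                    # obesity criterion used for Chinese people
    return {"bmi": round(bmi, 1), "obesity": obesity,
            "nlr": round(nlr, 2), "plr": round(plr, 1)}

# Example: 80 kg, 1.72 m, counts in 10^9/L
print(derived_indexes(80, 1.72, 6.5, 1.3, 210))
```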
| Definition and confirmation of DSSI
The definition of DSSI is based on the criteria issued by the US Centers for Disease Control and Prevention, referring to an infection directly related to the wound and involving tissues deep to the fascia. DSSI was identified and confirmed by checking the descriptions in the electronic medical records for at least one of the following incision-related signs: pus discharge; dehiscence or separation; examinations or medication prescriptions and dispensations providing evidence of SSI; or debridement and/or removal of the implant. Of note, superficial SSIs and other minor wound issues, such as wound edge necrosis or erythema that resolved with wound care or oral antibiotics alone, were not included in the analysis, in order to rule out potential confounding effects.
For those who were readmitted within the 1 year of index operation, we used their national identification card number rather than the inpatient record number to confirm the potential DSSI cases, because only the former was unique for one person.
| Statistical analysis
The continuous variables were presented as mean ± standard deviation (SD), using Kolmogorov-Smirnov test and Levene's test to evaluate the normality status and homogeneity of variances, respectively. Student t-test or Mann-Whitney U-test was used to detect the difference between groups, as appropriate. Categorical data were presented as a number with a percentage and compared by Chi-square test or Fisher exact test, as appropriate.
Receiver operating characteristic (ROC) curves were constructed to determine the optimal cut-off values for age, BMI, NLR and PLR, taken where the Youden index (sensitivity + specificity - 1) was maximised. Based on these cut-off values (BMI, 26.4 kg/m2; NLR, 6.4; PLR, 150; age, 45 years), the variables were dichotomised and compared between groups in univariate analyses. Variables with P < .10 were further entered into the multivariate logistic regression model to determine their independent effects on the development of DSSI, using the stepwise backward method. The goodness-of-fit of the final model was evaluated using the Hosmer-Lemeshow (H-L) test, with a P value >.05 considered acceptable. Variables with a statistical level of P < .10 were retained in the final model. The odds ratio (OR) and its 95% confidence interval indicated the magnitude of the association.
For each independent variable, a scoring point (an integer value derived from the rounded-up OR) was assigned; the potential assigned point was therefore zero when the factor was absent, or otherwise the rounded-up OR value. For any patient, the total score was calculated by summing the scores of all independent variables present for him/her. Again, an ROC curve was constructed for the total score (independent factor), with the occurrence of DSSI as the dependent factor, and the optimal cut-off value was determined. The validity of the cut-off value was evaluated by comparing the area under the ROC curve (AUC), as previously described in detail. 15 Statistical significance was set at P < .05 and all analyses were performed using SPSS 24.0 (IBM Corporation, New York, USA).
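Although the analyses were run in SPSS, the Youden-based cut-off selection described above is straightforward to reproduce. The Python sketch below is an illustration only (not the authors' code); the outcome array `y` and marker array `x` are synthetic stand-ins for study data.

```python
# Illustrative sketch: Youden-optimal cut-off for a continuous marker (e.g. NLR)
# against a binary DSSI outcome. Not the analysis pipeline used in the study.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutoff(y, x):
    """Return the threshold maximising sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y, x)
    j = tpr - fpr                      # Youden's J at every candidate threshold
    best = int(np.argmax(j))
    return thresholds[best], tpr[best], 1 - fpr[best]

# Synthetic example data (real values would come from the medical records)
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                      # 0 = no DSSI, 1 = DSSI
x = rng.normal(4, 2, 200) + 2 * y                # marker shifted upwards in DSSI cases
cutoff, sens, spec = youden_cutoff(y, x)
print(f"AUC={roc_auc_score(y, x):.3f}, cut-off={cutoff:.2f}, "
      f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```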
| RESULTS
Within the study period, 1407 calcaneal fractures were retrieved and 507 patients were excluded based on our rigorous criteria, leaving 900 for data analysis (details presented in Figure 1). Among them, male patients predominated overwhelmingly (92.8%, 835/900). The mean age was 41.8 ± 11.8 years at injury, with 62.5% of patients aged ≤45 years. The ORIF procedures were performed by 32 surgeons. Approximately 40% of patients were operated on within 7 days, and 93.8% within 14 days after injury. The predominant surgical approach was the extended lateral approach, used in 71.0% of patients. DSSI developed in 24 patients, representing a cumulative incidence of 2.7%. The median interval between operation and DSSI was 27 days (range 6-206 days), and 70.8% (17/24) of cases occurred within 3 months.
Compared with those who did not develop a DSSI, patients with a DSSI had a significantly prolonged hospital stay (24.1 ± 24.6 vs 16.1 ± 8.5 days, P = .001). Twenty patients underwent reoperation: debridement alone in eight patients, implant removal in five, a flap repair procedure in two and other procedures in five. No amputation was performed. The average number of surgical procedures needed for control of infection was 1.8, with 2 or more required in three-fifths (12/20) of the patients (Table 1).
Based on our predefined algorithm, the average score per patient was 10 (median, 11; range 4-21). The ROC results showed that the AUC was 0.766 (95% CI, 0.670-0.863; P < .001) and that a score of 14 was the optimal cut-off value, corresponding to a sensitivity of 0.542 and a specificity of 0.872. One hundred and twenty-five patients had a score of ≥14, among whom 13 (10.4%) developed a DSSI, while 775 patients had a score of <14, of whom 11 (1.4%) developed a DSSI. This suggests that a score of ≥14 was associated with an 8.1-times increased risk of DSSI (Chi-square test; 95% CI, 3.5-18.4; P < .001). We also investigated two further cut-off values of greater clinical relevance for sensitivity: a score of 7 (sensitivity, 100%) and a score of 10 (sensitivity, 0.875) (Figure 3).
FIGURE 2 ROC curve for determination of the optimal cut-off values for NLR (6.4), PLR (150), BMI (26.4 kg/m2) and age (45 years). The horizontal axis represents 1-specificity and the vertical axis indicates the sensitivity of each variable in predicting the development of DSSI. The lines in different colours represent the different variables. The area under the curve (AUC) represents the ability to discriminate the DSSI cases.
TABLE 2 Assigned score for each variable based on the magnitude of its independent association with DSSI
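Table 2 lists the points assigned per factor. To make the scoring rule concrete, the sketch below shows how such a point total could be computed for a new patient. The integer weights here are obtained by rounding up the reported ORs, which is an assumption about the exact values in Table 2 (not reproduced in full above), so the numbers are illustrative rather than authoritative.

```python
# Hypothetical illustration of the additive risk score: points are the reported
# ORs rounded up to integers (an assumption about the exact weights in Table 2).
import math

REPORTED_ORS = {
    "current_smoking": 2.8,
    "bmi_ge_26_4": 3.1,
    "asa_ge_2": 1.3,
    "incision_level_2": 3.8,
    "nlr_ge_6_4": 3.2,
}
POINTS = {factor: math.ceil(odds_ratio) for factor, odds_ratio in REPORTED_ORS.items()}

def risk_score(patient):
    """Sum the points of the factors present; absent factors contribute zero."""
    return sum(POINTS[f] for f, present in patient.items() if present)

# Example patient: a smoker with BMI >= 26.4 kg/m2 and NLR >= 6.4
example = {"current_smoking": True, "bmi_ge_26_4": True,
           "asa_ge_2": False, "incision_level_2": False, "nlr_ge_6_4": True}
score = risk_score(example)
print(score, "high risk" if score >= 14 else "below the 14-point cut-off")
```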
| DISCUSSION
DSSI in the field of orthopaedic trauma surgery has consistently been a focus of both clinical practice and scientific research. In this study, we identified several important factors and thereby formed a valuable risk prediction model. The model showed that a score of ≥10 should alert clinicians to the risk of DSSI (sensitivity, 0.875), a score of 14 or more is strongly predictive of the development of DSSI (OR, 8.1), and a score of <7 could almost rule out the possibility of DSSI. Compared with the highly variable rates of SSI (5.0%-25.0%) 11,16,17 or DSSI (2.9%-15.4%) [3][4][5] in the literature, we report a relatively low incidence rate of 2.7%. This was mainly attributable to the predefined rigorous inclusion and exclusion criteria and the narrower definition of DSSI. Some injuries and medical conditions at high risk of DSSI had been excluded, including open fracture, multiple trauma and bilateral calcaneal fractures. 6,11 The rapid improvement in operative techniques, implant material properties, standardisation of antibiotic prophylaxis and wound care in orthopaedic trauma during the past 30 years has also contributed to an overall decrease in SSIs. 18 Another reason might be that we only consulted index hospitalisation or re-hospitalisation medical records for confirmation of DSSIs, so that patients whose DSSI was treated in other institutions were inadvertently excluded.
Identification of risk factors and the accordingly targeted prophylactic measures do help with effective perioperative management of the surgical wound. In the present study, we used a carefully selected population with a large sample to address the risk factors for DSSI following calcaneal fracture. Of note, four of the five factors identified have been repeatedly investigated and their role is well established across studies with different levels of evidence. 4,[8][9][10]19 The only difference among them might be the determination of the cut-off value for BMI, for which we used both the traditional value (28.0 kg/m2, the definition of obesity for Chinese people) and the current value of 26.4 kg/m2 for dichotomisation. As a result, the latter proved more sensitive in predicting DSSI (P < .001). We inferred that the younger age (mean, 41.8 years) and slimmer figure (mean BMI, 25.1 kg/m2) in this selected group contributed to this discrepancy.
As a novel indicator, NLR was identified for the first time in this study as an independent factor predicting the development of DSSI, which could be explained by its being an indicator of the magnitude of the body's systemic inflammatory response 20 and, further, of the cumulative effects of a persistent acute inflammatory response after surgery. 13 Our finding complements recent reports regarding its predictive role for adverse results following orthopaedic surgeries. For example, NLR has been shown to be capable of predicting mortality and cardiovascular complications after hip fracture, 13 the risk of venous thromboembolic events (VTE) after total knee arthroplasty, 12 and the degree of inflammatory response/infection and onset of myocardial injury in orthogeriatric patients. 21 In addition, in non-orthopaedic specialties, NLR has also demonstrated its relation to adverse outcomes, the onset of acute or chronic complications, or poor conditions. 14,22,23 In this study, patients with NLR ≥6.4 were highly vulnerable to the development of a DSSI (OR, 3.2), and in these patients specific attention should be paid to controlling the combined modifiable factors (smoking, dose of prophylactic antibiotics).
As far as we know, this is the first report of the development of a risk model for predicting DSSI specifically following calcaneal fracture. Its feasibility in clinical application and its potential advantages are apparent. First, this model was formed based on four well-established risk factors from the literature and one novel biomarker. In particular, the latter is derived from routine blood examination results (neutrophil and lymphocyte counts), both of which are readily available and hence add no extra cost for patients. Second, this model uses the score of 10 (with a sensitivity of 0.875) to stratify patients into high-risk and low-risk categories, which is conducive to targeted prevention of DSSIs. It is of particular note that a score of ≥14 was associated with an 8.1-times increased risk of DSSI; patients in this category should be paid more attention and, where resources allow, a specific surveillance protocol could be developed for them. Third, all the DSSI cases occurred in patients with a score of ≥7, and hence a score of <7 almost completely rules out the possibility of DSSI, possibly avoiding blanket coverage of infection control resources.
FIGURE 3 ROC curve for the summed score to determine its optimal cut-off for distinguishing DSSIs from non-DSSIs. The AUC was 0.766 (95% CI, 0.670-0.863; P < .001) and a score of 14 was the optimal cut-off value, corresponding to a sensitivity of 0.542 and a specificity of 0.872. A score of 7 as the cut-off value corresponds to a sensitivity of 100%, and a score of 10 to a sensitivity of 0.875. The horizontal axis represents 1-specificity and the vertical axis indicates the sensitivity in predicting the development of DSSI.
Several limitations of this study should be mentioned. First, the retrospective nature of this study limited the accuracy and precision of data collection, although the double entry might partly offset this bias. For some variables, such as smoking, volume and frequency might be particularly important parameters for infectious complications, but detailed data were not available. Second, surgeon volume might affect the results, because surgical treatment of calcaneal fracture requires a learning curve, especially for the sinus tarsi approach. 19 As many surgeons performed the operations, the number of operations per surgeon per year was too small for effective statistical analyses. Third, as with every multivariate analysis, there remain unknown or unmeasured confounders that could bias the results, despite the numerous variables included in this study. Fourth, the single-centre setting may limit the generalizability of our findings, because patients admitted or transferred to a tertiary centre commonly have more severe injuries or poorer comorbid conditions. Fifth, as discussed previously, we could not identify those who developed a DSSI but sought treatment in other institutions, resulting in underestimation of this key complication.
In summary, the incidence rate of DSSI following ORIF of calcaneal fractures was 2.7%, and five independent factors were identified, most of which are well established in the literature. The risk prediction model exhibited excellent capability in distinguishing those at high and low risk of DSSI, with different but appropriate cut-off values. Our findings should be interpreted in the context of the specific and general limitations discussed above, and the validity of the risk prediction model should be verified by prospective, multicentre studies with a larger sample size.
Genome-Wide Scan of the Gene Expression Kinetics of Salmonella enterica Serovar Typhi during Hyperosmotic Stress.
Salmonella enterica serovar Typhi is a human enteroinvasive pathogen that can overcome the stress caused by the high osmolarity of the human small intestine and cause systemic infection. To investigate the global transcriptional regulation of S. enterica serovar Typhi exposed to a hyperosmotic environment, a genomic oligo-DNA microarray containing 4474 Salmonella genes was prepared. A wild strain of S. enterica serovar Typhi, GIFU10007, was grown in LB medium containing 50 mM NaCl to simulate a low osmotic environment. Hyperosmotic stress was simulated by an osmotic up-shift, which increased the concentration of NaCl in the LB from 50 mM to 300 mM. Genome-wide gene expression of S. enterica serovar Typhi at 15 min, 30 min, 60 min, and 120 min after the osmotic up-shift was investigated by microarray analysis. Gene expression profiles in the somewhat later stage (60-120 min) of the stress were quite different from those in the early stage (0-30 min). At 120 min after the osmotic stress, the expression levels of 889 genes were clearly changed, whereas the expression levels of only 382 genes were significantly changed at 15 min after the osmotic stress. The expression levels of most SPI-1 genes associated with invasion of the pathogen were increased at 120 min after the osmotic up-shift, but were not clearly changed at 15 min or 30 min after the osmotic stress. Expression of a central regulatory gene, phoP, and of the sigma factor genes rpoE, rpoD, and rpoS was also changed, with different profiles during the osmotic stress. These results indicate that the invasive ability of the pathogen is significantly increased after 2 h of hyperosmotic stress, and that the regulator PhoP and the sigma factors RpoE and RpoD appear to participate in the network regulatory mechanisms that help the pathogen adapt to hyperosmotic environmental conditions. The later increase in the invasive ability of S. enterica serovar Typhi after hyperosmotic stress may be one reason why the pathogen invades in the distal ileum of humans and not in areas of the upper small intestine.
Introduction
Salmonella enterica serovar Typhi is a gram-negative, enteroinvasive pathogen that may cause typhoid fever. The infection is initiated when Salmonella enters the host gastrointestinal tract from contaminated water or food. The bacterial cells reach the distal ileum and enter the specialized intestinal epithelial M cells of Peyer's patches. Following intestinal invasion, the pathogen migrates into the mesenteric lymph nodes and reaches the liver, spleen, and bone marrow through the blood and lymph systems, where it replicates and causes systemic infection [1][2][3]. Intestinal invasion by Salmonella is associated with many pathogen-specific factors, including type III secretion systems, secretion of invasion-related proteins, flagella and motility, and reduction of Vi polysaccharide [3][4][5][6][7].
Most invasion factors of Salmonella are affected by the osmolarity of the surrounding environment [7][8][9]. When S. enterica enters the intestinal lumen of the host, the bacterium is exposed to an extreme environment, including an increase in osmolarity. Therefore, the regulation of Salmonella in a hyperosmotic environment is very important during invasion through the small intestine.
The cross-regulation of different regulators demonstrates that the mechanisms of the response to stress are quite complex [10,11]. At present, the mechanisms by which these bacteria adapt to changes in their environment remain unclear. Microarray analysis is an effective strategy for genome-wide screens of gene expression in bacteria. Although some microarray studies have described bacterial gene expression changes in response to hyperosmotic stress [12,13], the kinetics of gene expression of enteroinvasive pathogens after the onset of osmotic stress have rarely been presented.
In the present study, to investigate systemic gene expression of S. enterica, we prepared a Salmonella genomic oligo microarray system that includes 4474 specific 40-mer oligonucleotides based on the available genomic sequence information of S. enterica serovar Typhi Ty2 and serovar Typhimurium LT2. We used this system to investigate the kinetics of gene expression of S. enterica serovar Typhi during hyperosmotic stress. Many interesting gene expression profiles associated with invasion, motility and some regulators of the pathogen after hyperosmotic stress were found in this study.
Bacterial cultures
A wild strain of S. enterica serovar Typhi, GIFU10007, isolated in Japan [14], was utilized in this study. Bacteria were grown with shaking at 37°C in Luria-Bertani (LB) broth (pH 7.0) containing 50 mM or 300 mM NaCl to simulate a low or high osmolarity environment, respectively. Cultures were incubated overnight and then grown in fresh LB of the same osmolarity to log phase (OD of 0.5 at 600 nm). Total RNA was extracted for investigation of gene expression under sustained high and low osmotic conditions. To simulate an osmotic up-shift stress, NaCl was added to a final concentration of 300 mM to the low-osmolarity bacterial cultures at log phase, and the bacteria were then incubated with shaking at 37°C for 120 min. Total RNA was extracted at 15 min, 30 min, 60 min, and 120 min to investigate the kinetics of genome-wide changes in gene expression in response to hyperosmotic stress.
Preparation of Salmonella oligo microarray and treatment of slides
A Salmonella genomic oligo microarray was constructed, containing 4370 genes of S. enterica serovar Typhi Ty2, 102 genes of the plasmid of S. enterica serovar Typhimurium LT2, and 2 fljBA-like genes identified in z66 antigen-positive strains of S. enterica serovar Typhi [15]. Plus-strand-specific oligonucleotides were designed according to genomic information for Salmonella (http://www.ncbi.nlm.nih.gov/genomes/MICROBES/Complete.html). All 40-mer oligonucleotides were synthesized by the TaKaRa company (TaKaRa, Tokyo, Japan) and decorated with an amino group at each 5'-terminus to fix the oligos on chip slides. To make the chips, a 50 µM oligonucleotide stock solution was mixed with an equal volume of N6 spotting solution (Toyo Kohan, Tokyo, Japan) and stamped onto GENE DIA slides (75 × 25 mm, Toyo Kohan, Tokyo, Japan) with a MicroGrid II robotic slide printer (BioRobotics, UK). According to the manufacturer's instructions, the spotted oligonucleotides were fixed on the chips at 80°C for 2 h. Slides were blocked just before hybridization by incubation in blocking buffer (5× SSC, 0.2% SDS) at 95°C for 5 min, rinsed with distilled water and dried by centrifugation (1000 rpm for 10 min) at room temperature.
RNA extraction and cDNA probe labeling
Bacterial cells were cooled on ice for 10 min, harvested by centrifugation (4000 rpm for 10 min at 4°C), and lysed in 100 µl of lysozyme-TE buffer (0.6 mg/ml lysozyme, pH 8.0) within 5 min at 25°C. Total RNA was extracted with an RNeasy mini-column (QIAGEN), according to the manufacturer's instructions. The quantity and quality of the extracted RNA were checked with an ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, USA). The extracted total RNA was treated with 1 U of RNase-free DNase I (TaKaRa) at 37°C for 10 min to remove trace contaminating DNA, and then incubated at 85°C for 15 min to inactivate the DNase. cDNA probes were synthesized with a CyScribe First-Strand cDNA Labeling Kit (Amersham Pharmacia). Reverse transcription and labeling were performed in a 20-µl reaction containing 20 µg total RNA, 50 ng random nonamers, 1 µmol of Cy3- or Cy5-conjugated dCTP, and other components according to the manufacturer's instructions. After a 90-min reaction at 42°C, RNA was degraded by addition of 2 µl 2.5 N NaOH and a 15-min incubation at 37°C, followed by neutralization with 10 µl 2 M Hepes free acid. The labeled cDNAs were purified over AutoSeq G-50 columns (Amersham Pharmacia).
Hybridization and scanning
The Cy3- and Cy5-labeled cDNAs derived from the wild strain of S. enterica serovar Typhi grown under different osmotic conditions were pooled and dried with a speed vacuum, and then dissolved in 40 µl of hybridization buffer (5× SSC, 0.2% SDS, 1 mg dextran sulfate, 80 µg salmon sperm DNA, 40 µg bovine serum albumin, 40 µg ficoll). Each slide was covered with a cover glass (22 × 40 mm, Matsunami, Osaka, Japan) and then hybridized in a small humidified chamber (78 × 28 × 5 mm) at 50°C for 15 h. The cover glass was removed in 0.1× SSC, and the slides were washed for 15 min in pre-warmed (45°C) washing buffer (2× SSC, 0.1% SDS), rinsed with distilled water, and then dried by centrifugation (1000 rpm for 10 min). Slides were scanned with a ScanArray 4000 microarray analysis system (GSI Lumonics, USA) in two channels, using the lasers appropriate for Cy3 and Cy5. Images were exported as TIFF files for digital analysis. Each experiment was performed on duplicate slides and at least three times with different RNA samples.
Data analysis
The TIFF files of the data from the two channels of each slide were converted to digital density data with DNasis-Array software (Hitachi, Tokyo, Japan). After a visual check, the intensity of the signal from each spot was normalized to the total intensity in each channel. The digital data were exported and transferred to a Microsoft Excel file. Quality control and the subsequent analysis were performed essentially as described previously [16,17]. In brief, 96 negative control spots, on which only buffer was spotted, were used to correct local backgrounds. After local background subtraction, only signals more than two-fold higher than the average of the negative controls in each channel were used to calculate the ratio of the two channels to view expression differences. The average ratio across slides was calculated, and a 2-fold difference was required for a change in expression to be considered significant. The results were then expressed as log2(ratio) on profile plots and heat maps with the Avadis Explor software (Strand Genomics, Bangalore, India).
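The filtering and ratio calculation described above can be expressed compactly in code. The sketch below is an illustration of the same logic (background subtraction against negative-control spots, total-intensity normalisation, two-fold detection filters and log2 ratios) written with NumPy; it is not the DNasis-Array/Avadis pipeline the authors used, and the array names are hypothetical.

```python
# Illustrative reimplementation of the described two-channel filtering,
# not the authors' software pipeline.
import numpy as np

def expression_ratios(cy3, cy5, neg_idx, fold=2.0):
    cy3, cy5 = np.asarray(cy3, float), np.asarray(cy5, float)
    # local background estimated from the buffer-only negative-control spots
    cy3_bg, cy5_bg = cy3[neg_idx].mean(), cy5[neg_idx].mean()
    # a spot is usable only if both channels exceed 2x the negative-control average
    detected = (cy3 > fold * cy3_bg) & (cy5 > fold * cy5_bg)
    # background subtraction, then per-channel total-intensity normalisation
    cy3_n = (cy3 - cy3_bg) / (cy3 - cy3_bg).sum()
    cy5_n = (cy5 - cy5_bg) / (cy5 - cy5_bg).sum()
    log2_ratio = np.full(cy3.shape, np.nan)
    log2_ratio[detected] = np.log2(cy5_n[detected] / cy3_n[detected])
    # flag genes with at least a 2-fold change among the detected spots
    significant = np.zeros_like(detected)
    significant[detected] = np.abs(log2_ratio[detected]) >= np.log2(fold)
    return log2_ratio, detected, significant
```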
RT-PCR
RNA extraction and treatment were performed as described above. Reverse transcription was performed with random hexamers and specific reverse primers using SuperScript II (Invitrogen), according to the manufacturer's instructions. The specific primers used in this RT-PCR are described in Table 1. Each 20 µl reaction contained 2 µg of RNA, 10 ng random hexamers and 10 nmol specific reverse primers. One microliter of product was subjected to the quantitative PCR assay, which was performed in an Mx3000P QPCR System (Stratagene) with SYBR green master mix (Applied Biosystems), according to the manufacturer's instructions. Fluorescence was measured in an additional step (80°C for 10 sec) after synthesis in each cycle. Serially diluted genomic DNA was used to generate a standard curve at the same time to calculate the reference mRNA copy numbers in the samples. Each experiment was performed with four different samples.
Table 1. Specific primers used in the RT-PCR.
Genes / Forward primers / Reverse primers
Profiles of genome-wide expression kinetics under osmotic up-shift conditions
Of the 4474 genes on the microarray, usable digital data were obtained for approximately 3300 genes in the profiles of genomic transcriptional expression of the wild strain of S. enterica serovar Typhi incubated at low and high osmolarity to log phase. The remaining genes were not detected in most experiments of the study, because the intensities in one or both channels were less than two-fold that of the corresponding negative controls after background correction.
We used our microarray system to investigate systemic gene expression of S. enterica serovar Typhi at various time points during the 120 min after an osmotic up-shift, as well as expression under sustained high osmotic conditions. Profiles of genome-wide expression are shown in Figure 1 and Figure 2A. Expression of 382 genes was changed at 15 min; the expression levels of 170 genes were decreased and those of 212 genes were increased (Figure 2B). The expression levels of approximately 40% of these genes had returned to pre-stress levels by 120 min after the osmotic up-shift stress. At 120 min after the osmotic up-shift, differential expression of 889 genes was observed. More than 700 of these genes showed no obvious change in expression at 15 min (Figure 2C). Most of them also showed no obvious change under the sustained high osmolarity conditions. A few of them, however, were expressed in the opposite direction under the sustained hyperosmotic conditions (Figure 2D). When the bacteria were incubated in LB medium containing 300 mM NaCl overnight, only 85 genes showed increased expression and 112 genes showed decreased expression.
Genes with altered expression at 15 min and 120 min after the up-shift and under sustained hyperosmotic conditions are listed in Table 2. These expression profiles revealed that the majority of changes in gene expression in S. enterica serovar Typhi appear somewhat later after the osmotic up-shift stress. We describe the differential expression of some particularly interesting genes in the following sections.
Figure 1. S. enterica serovar Typhi GIFU 10007 was cultured to log phase in LB broth (pH 7.0) containing 50 mM NaCl as a low osmotic environment, and then grown under osmotic up-shift conditions when NaCl was added to a final concentration of 300 mM. Total RNA was then extracted at 15, 30, 60, and 120 min after the addition of NaCl to investigate the kinetic changes in gene expression in response to hyperosmotic stress. The expression difference between high and low osmolarity is indicated by different colors; colors from green to red indicate changes of -2.6 to 2.6 of the log2 ratio. In other words, the change in expression from 5-fold repression to 5-fold stimulation is indicated by a color scale. H0, H15, H30, H60, H120, and H-S indicate 0 min, 15 min, 30 min, 60 min, and 120 min after the up-shift and the sustained hyperosmotic conditions, respectively. The expression change of each gene at H0 was set as zero (unchanged). The gene order number as determined by the genomic location is indicated on the side of the figure. Some gene clusters of interest are indicated on the right.
Expression of Vi capsular antigen genes
Vi capsular polysaccharide of S. enterica serovar Typhi is an important factor that allows the bacterium to survive in human macrophages [18,19]. The Vi capsular antigen was expressed at relatively high levels under conditions of low osmolarity. This expression was dependent on OmpR, a central regulatory protein that is part of a two-component regulatory system with the osmosensor EnvZ [20]. After the osmotic stress, expression levels of all 10 Vi-cluster genes, tviA, tviB, tviC, tviD, tviE, vexA, vexB, vexC, vexD, and vexE, were markedly decreased (Figure 3). The expression levels of those genes at low osmolarity were 3- to 10-fold higher than those at high osmolarity. These results suggest that all Vi-cluster genes were immediately and continually down-regulated after the osmotic up-shift.
Expression of flagella and chemotaxis genes
Flagella are necessary for the motility of Salmonella and are an important pathogenic factor. Approximately 50 genes are associated with flagellar structure and function; they have been divided into four regions on the basis of genomic location and into three classes on the basis of expression regulation [21-23]. Most S. enterica serovars have two flagellin genes, fliC and fljB; however, S. enterica serovar Typhi is thought to have only fliC. The wild strain of S. enterica serovar Typhi GIFU 10007 used in the present study is a z66 antigen-positive strain. The gene encoding the z66 antigen was recently identified as an fljB gene, followed by a downstream fljA-like gene [15]. The transcriptional regulation of fljB:z66 and the fljA-like gene remains unclear.
In the present microarray analysis, most flagella-related and chemotaxis-associated genes were detected; the results are shown in Figure 4. The location of fljBA, which is separated from the other flagellar genes in S. enterica serovar Typhimurium, is not yet known in S. enterica serovar Typhi. Expression levels of the flagellar and chemotaxis genes of S. enterica serovar Typhi under the osmotic up-shift conditions were quite different from the levels under the sustained high-osmolarity conditions. Expression levels of most region I and region III genes were decreased immediately after the increase in osmolarity. Expression of flhC, a global flagellar transcriptional activator gene, was mildly reduced after the shift, and the class-2 gene fliA, which encodes an RNA polymerase sigma factor for expression of the flagellar operon, was significantly decreased. In contrast, expression of flgM, flgN, cheA, and cheW was not changed, and expression of fliC and fliE was increased slightly. Under the sustained hyperosmotic conditions, expression levels of most flagellar and chemotaxis genes were significantly higher than those under low-osmolarity conditions. However, expression of some regulator genes, such as fliA and flgM, showed little change.
Expression of invasion-related genes
High osmolarity could induce the expression of some SPI-1 genes [9]. In the present study, we found that expression of most SPI-1 genes was quite low under conditions of low osmolarity. Expression patterns of 21 SPI-1 genes (from prgK to invH) and several other invasion-related genes are shown in Figure 5. Expression kinetics during the osmotic up-shift showed that expression of most SPI-1 genes, including regulator genes iagA and invF, was greatly increased at 120 min after the onset of hyperosmotic stress, but not increased at 15 min or 30 min of the stress. Expression levels of iagA, invF, invH, and spaM at 15 min and 120 min of the stress were also investigated by RT-PCR. The RT-PCR results were similar to those of microarray analysis (Figure 7). Under sustained hyperosmotic conditions, expression levels of only a few SPI-1 genes, including invH, spaM and prgH, were mildly increased.
Expression of other invasion-related genes was also examined (bottom of Figure 5). The expression level of sirA, which regulates SPI-1 genes by inducing iagA expression, was slightly decreased at early stages after the osmotic up-shift, but no change was observed in the later stages of the osmotic stress. Expression levels of two oxygen-regulated invasion genes, orgAa and orgAb, were not changed under any hyperosmotic condition. sopE is located outside the SPI-1 locus and encodes the toxin protein SopE, another important invasion factor that is secreted into host cells by the SPI-1-associated type III secretion system [24,25]. The expression pattern of sopE was identical to that of most SPI-1 genes: greatly induced at 120 min but not in the early stage of the stress. Expression of two other invasion-related genes, sigE and sigD, which are activated by InvF [26], was also induced at the same time.
These results suggest that S. enterica serovar Typhi may increase its invasive ability by increasing the expression of invasion-related genes at 120 min after entering a high-osmolarity surrounding.
The expression difference between high and low osmolarity is indicated by different colors; colors from green to red indicate log2 ratios from -2.6 to 2.6. In other words, the change in expression from 5-fold repression to 5-fold induction is indicated by a color scale. H0, H15, H30, H60, H120, and H-S indicate 0 min, 15 min, 30 min, 60 min, and 120 min after the up-shift and the sustained high osmotic condition, respectively. The expression change of each gene at H0 was set as zero. Gene names are listed on the left and classified into two groups, SPI-1 genes and others. Under sustained hyperosmotic conditions, expression of only invH, spaM, and prgH was mildly induced. During the osmotic up-shift, expression of most SPI-1 genes, including the regulatory genes iagA and invF, was greatly enhanced later, but not immediately, after the up-shift. Expression of the regulatory gene sirA was repressed slightly during the early stage after the up-shift.
Expression of regulatory genes
All regulatory and putative regulatory genes expressed under hyperosmotic conditions were identified from the effective data acquired in the present study. In addition to the regulatory genes described above, some sigma-factor genes, two-component regulatory system genes, and other putative regulatory genes showed expression changes under the osmotic up-shift conditions. Expression profiles of forty regulatory or putative transcriptional regulator genes are shown in Figure 6. Most expression changes occurred transiently during the osmotic up-shift.
Expression of rpoS, which encodes the sigma factor σS, under sustained hyperosmotic conditions was 2.1-fold higher than that under low osmolarity, whereas it was not changed during the osmotic up-shift. Expression of rpoE, which encodes the sigma factor σ24, was decreased at 15 min and 30 min after the osmotic up-shift. Similar results were obtained for rseA, a negative regulator gene of sigma-24. Expression of rpoD was decreased later after exposure to osmotic stress but was not changed at the early stage of the osmotic stress or under the sustained hyperosmotic conditions. Figure 6. Expression of regulator genes. The difference between high and low osmolarity is indicated by different colors; colors from green to red indicate log2 ratios from -2.6 to 2.6. In other words, the change in expression from 5-fold repression to 5-fold induction is indicated with a color scale. H0, H15, H30, H60, H120, and H-S indicate 0 min, 15 min, 30 min, 60 min, and 120 min after the osmotic shift and the sustained high osmotic condition, respectively. The expression change of each gene at H0 was set as zero. Gene names are listed on the left; the STY numbers correspond to the ORFs, and the proposed functions are based on information contained in the NCBI database.
Virulence-related PhoP-PhoQ is a pleiotropic two-component regulatory system, and phoP is considered a central regulatory gene [27]. Expression of phoP and phoQ was slightly increased immediately after the onset of osmotic stress and under the sustained hyperosmotic conditions. Expression levels of the two-component regulatory system genes pmrA and pmrB were reduced in the later stages of the osmotic stress, and this pattern was opposite to that of phoP and phoQ. Expression of pmrD was increased during the osmotic up-shift, peaking at 15 min at a level 2.9-fold higher than that under low osmotic conditions. Expression of mig-14 was greatly increased in the early stage of the osmotic stress. The increased expression levels of phoP and mig-14 at 15 min of the osmotic stress were verified by RT-PCR (Figure 7). Expression of rcsB and rcsC, which encode two-component regulatory proteins related to the osmoregulated expression of the Vi antigen cluster genes [28], was mildly decreased during the early osmotic stress but was not changed significantly under the sustained hyperosmotic conditions. Expression of another set of two-component regulatory system genes, uhpA and uhpB, was mildly induced immediately after the shift.
Discussion
Salmonella is one of the most extensively studied bacteria in terms of its genetics, cell structure and physiology, pathogenesis and host interactions, and development. S. enterica serovar Typhi is a human enteroinvasive pathogen. After invasion of the intestinal epithelium, S. enterica serovar Typhi can survive host defenses and cause severe systemic infection [1,3]. During movement from the natural surroundings to the host cell, S. enterica serovar Typhi is subjected to severe environmental stresses, including acidic conditions in the gut, hyperosmotic conditions in the small intestine, and oxidative attack by host defense cells. The osmolarity surrounding the Salmonella pathogen in food and in the lumen of the small intestine is approximately 50 and 300 mM NaCl, respectively [20]. On the basis of published genomic information, we prepared a Salmonella microarray to investigate genome-wide gene expression under low-osmolarity, osmotic up-shift, and sustained high-osmolarity conditions. The present results indicate that the expression of a large number of genes changes in response to the osmotic up-shift, and the expression profiles at 120 min after the osmotic up-shift are significantly different from those soon after the shift (15 min to 30 min).
Vi polysaccharide is an important factor protecting S. enterica serovar Typhi against host defense systems and environmental stresses. The Vi gene cluster of S. enterica serovar Typhi includes 10 genes: the polysaccharide-biosynthesis-related genes tviA, tviB, tviC, tviD, and tviE and the polysaccharide-export-related genes vexA, vexB, vexC, vexD, and vexE [29]. Previous research found that expression of the Vi capsular antigen is OmpR dependent and affected by the osmolarity of the environment [20]. In the present study, we found that expression of the Vi-cluster genes was rapidly inhibited by the shift to hyperosmotic conditions and was also suppressed under sustained hyperosmotic conditions. The invasion ability of S. enterica serovar Typhi is negatively affected by Vi polysaccharide [7], and reduced expression of the Vi antigen at high osmolarity may promote invasion of S. enterica serovar Typhi in the small intestinal lumen.
Some regulators and factors, including SirA, BarA, and RcsC/B, influence the expression of flagellar genes by regulating FlhDC, which is the global regulator of flagellar and motility-related chemotaxis genes in E. coli and Salmonella [30-32]. The sigma factor FliA and the anti-sigma factor FlgM form the FliA-FlgM regulatory system in response to FlhDC and can regulate expression of most class-3 flagellar genes [21,33,34]. The expression of flagella-related genes is also affected by environmental factors, such as osmotic or acid stress [35,36]. Our microarray analysis revealed that expression of fliA and most other class-2 flagellar and chemotaxis genes in regions I and III is repressed immediately after the onset of osmotic stress. At 120 min after the up-shift, expression levels of most flagellar genes, including flhC and fliA, have returned to the pre-stress levels observed at low osmolarity. However, expression of most flagellar and chemotaxis genes is elevated slightly under sustained hyperosmotic conditions. It appears that S. enterica serovar Typhi gradually adapts to hyperosmotic conditions and recovers its motility by 120 min after the onset of the high osmotic stress.
FljA is a post-transcriptional repressor of FliC that binds the 5'-terminus of the fliC mRNA in S. enterica serovar Typhimurium [37]. Previous studies revealed that expression of SPI-1 genes is optimal under conditions of high osmolarity during late-log-phase growth and is promoted by HilA and SirA in S. enterica serovar Typhimurium [38,39]. SirA is a response regulator of the BarA-SirA regulatory system that directly induces expression of the central regulator gene hilA and indirectly reduces FlhD/C through CsrB/A [31,40]. InvF, another important regulator encoded by the SPI-1 gene invF, regulates the expression of most SPI-1 genes [41,42]. Expression of iagA (named hilA in S. enterica serovar Typhimurium), invF, and most of the detected SPI-1 genes is increased at the later stage after the osmotic up-shift. However, expression of sirA is unchanged at that time. We suspect that the change in expression of most SPI-1 genes is not caused directly by SirA in this case and that some other regulatory factors are likely involved in activating the expression of iagA or invF. It remains unclear why the expression of SPI-1 genes is increased at 120 min after the onset of osmotic stress and what regulatory systems or regulators are involved in this expression.
After examining all expression data from the present study, we found that the expression levels of nearly 40 regulatory genes, including some sigma factors, two-component regulatory systems, and putative transcriptional regulators, were changed with different patterns after the hyperosmotic stress. RpoS is the master regulator of the general stress response, which provides cells with the ability to survive stresses including starvation, acid, high osmolarity, and oxidative stress [43-45]. In the present study, the expression of rpoS is promoted under sustained hyperosmotic conditions but is not changed during the osmotic up-shift. We suspect that RpoS is not a major regulator playing a direct role in the early response to hyperosmotic stress. RpoE, the RNA polymerase sigma factor 24 encoded by rpoE, is produced under some stress conditions, e.g., heat shock, starvation, and oxidative stress, in E. coli and S. enterica serovar Typhimurium [46-49]. RpoE can be regulated by RseA, RseB, and RseC in E. coli [50]. RpoD, the RNA polymerase sigma factor 70, can affect expression of the mer operon and the pan operon, which is required for the synthesis of pantothenate [51,52]; however, its genome-wide function is not understood. Expression of rpoE and the rse operon is temporarily reduced immediately after the up-shift, whereas expression of rpoD is repressed at a later stage of the shift. It appears that RpoE and RpoD temporarily affect gene expression in response to hyperosmotic stress in S. enterica serovar Typhi.
Two-component regulatory systems in bacteria mostly transduce signals from the external environment via membrane sensors [53]. PhoP-PhoQ, a two-component regulatory system, regulates numerous cellular functions in several Gram-negative species and is important for the virulence of Salmonella [27,54-56]. PhoP-PhoQ is connected with another two-component regulatory system, PmrA-PmrB, through PmrD, which is promoted by PhoP and can post-translationally regulate the pmrAB operon [11]. Interestingly, expression of phoP, phoQ, and pmrD is elevated transiently after the increase in osmolarity; however, expression of pmrA and pmrB is reduced at a later stage after onset of the osmotic stress, when phoP and pmrD are not induced. We suspect that some factors negatively regulate the expression of pmrA and pmrB against the activation of PmrD during the osmotic stress. Another regulatory system, RcsC-RcsB, which is connected to the PhoP-PhoQ system, has been found in E. coli [57]. Expression of rcsC and rcsB is decreased immediately after the shift to hyperosmotic conditions, whereas expression of phoP and phoQ is elevated. The relation between PhoP-PhoQ and RcsC-RcsB in S. enterica requires further research. Expression of mig-14, which is activated by PhoP [58], is increased in the early stage of the osmotic stress. This result also supports the importance of PhoP in the response to hyperosmotic conditions. The fact that many transcriptional regulators showed expression changes in response to hyperosmotic stress reflects the complexity of the osmoregulatory network of S. enterica serovar Typhi.
The OmpR-EnvZ two-component regulatory system is a well-understood osmoregulatory system, and OmpR is considered a central regulator [59-62]. EnvZ and the other regulators PhoB, FadD, FliZ, and SirA independently regulate the expression of hilA and invasion in S. enterica serovar Typhimurium [63]. Although we did not observe any obvious change in ompR expression in response to hyperosmotic stress, the expression of ompF, a gene negatively controlled by phosphorylated OmpR [64], is reduced immediately after the onset of the osmotic up-shift stress, and this effect persists through all stages of the stress. This result suggests that increased phosphorylation of OmpR occurs in the early stage of the osmotic stress and may be an important initial event in the osmotic regulatory network. A genome-wide examination of OmpR-EnvZ function during osmotic stress will be useful to reveal the relations among regulatory proteins and systems.
In conclusion, when S. enterica serovar Typhi encounters an osmotic up-shift, a regulatory cascade is activated that produces both rapid and delayed responses. S. enterica serovar Typhi immediately reduces the expression of Vi-cluster genes and some flagellar and chemotaxis genes in the early stage, and gradually increases the expression of invasion-related genes and most flagellar and chemotaxis genes at a somewhat later stage. Many regulators, e.g., PhoP, RpoE, and RpoD, are probably involved in these responses. S. enterica serovar Typhi thus increases its invasive ability some time after entering the hyperosmotic environment of the host's small intestine. This may explain why invasion by the pathogen occurs mainly in the distal portion of the ileum. It also suggests that bacterial infection of the intestine depends on altered expression of pathogenic genes in response to the hyperosmotic surroundings.
Design of Healthy Youth Edition Teens Based Game
In Indonesia, there are many cases of promiscuity among students, such as brawls, drug dealing, and free sex. It is therefore important for teenagers to have the social skills to take a firm stance and to reject negative offers from their environment. In addition to self-awareness, keeping Indonesian teenagers from falling into unhealthy association also requires efforts to socialize and provide education about healthy association for today's teenagers. Through the design of the game Teen Society, adolescents aged 11-24 years are provided with education about healthy association through a mobile game. Keywords— game design, education, healthy association
I. INTRODUCTION
Association is part of the process of social interaction between individuals and their social environment. There are two kinds of association, namely healthy association and unhealthy association. Healthy association is in accordance with the social values and norms prevailing in the community and brings a positive influence on one's development, while unhealthy association is harmful to oneself and others [1]. In adolescence, social association has a great influence, because at that age the strength and importance of friendship, and the intensity of time spent with peers, are greater [2].
Basically, the tendency of teenagers to take negative actions that ultimately harm their future is due to the influence of peers. Friends can influence each other, even toward risky behavior; for example, a teenager is more likely to start smoking when a friend already smokes. The strength and importance of friendship and the intensity of time spent with peers are greater in adolescence [3].
Therefore, so that teenagers are not easily influenced by a negative environment, efforts are also needed to socialize and provide education about healthy association for today's teenagers.
In line with the growing progress of technology and information, media can be created as an effort to help disseminate and provide education about healthy association for teenagers.
The purpose of this paper is to design educational media about the healthy association of teenagers, packaged as a game.
A. ASSOCIATION
Association is interaction between people. In daily social processes, contact occurs between a person and other people, and in social interaction a process of mutual influence takes place. Association includes direct relationships between individuals and groups that affect behavior in life [4].
B. KINDS OF ASSOCIATION
Based on its nature, association can be divided into two kinds, namely positive association and negative association:
a. Positive Association
Positive association is social interaction based on values, norms, and religion, and is accompanied by activities that are likewise positive. b. Negative Association. Negative association is a relationship that deviates from the boundaries of obligations, rules, demands, culture, terms, and feelings of shame [5].
Meanwhile, according to sociological theory, association is divided into two kinds, namely healthy association and unhealthy association. Healthy association is social interaction that follows the social norms prevailing in society and brings a positive influence on the development of one's personality, while unhealthy association is social interaction that leads to behaviors detrimental to oneself or others [2].
C. YOUTH
Adolescence comes from the Latin word adolescere, which means to grow or to grow into adulthood. Psychologically, adolescence is a period in which individuals are able to integrate into adult society, an age at which they no longer feel below the level of older people but on the same level, including the intellectual changes apparent in their way of thinking, in order to achieve integration in social relationships with adults, which is a hallmark of adolescent development [6].
III. GAME DESIGN
In the process of designing the game Teen Society as a healthy-association education game, data were first collected through interviews, literature studies, and observations about teenage association. The game was then designed by determining the storyline and creating game assets; the next step was game programming, followed by testing to check for errors or bugs in the game. If an error was found, the process was repeated until the game was ready to play.
IV. DESIGN PROCESS AND RESULT
Making the game involves several steps that must be completed, namely: design, asset creation, programming, and testing. Using these stages makes game creation more structured and efficient.
A. DESIGN
The first stage is the design of the game itself. Game design covers the determination of goals and themes through to the flow of the game application. The main objective conveyed through the Teen Society game is to improve respondents' understanding of the importance of the following aspects of healthy association: (1) the financial aspect, which aims to explain the importance of saving and managing finances in adolescence through a mini-game of shopping at a store with grandmother's gift money; (2) the social aspect, which aims to explain the importance of helping others, through a mini-game of taking grandmother to the market; (3) the emotional aspect, which aims to teach the importance of managing emotions in adolescence when facing problems, through a mini quiz game on managing emotions; (4) the physical aspect, which aims to teach the importance of a healthy lifestyle, through a mini-game of shopping for healthy food; and (5) the sexuality aspect, which aims to teach about some sexual deviations to be avoided, through a mini-game on the subject of sexuality.
Planning the flow of the Teen Society game application means determining the basic concept of the game itself. The basic concept of the Teen Society game includes the following: a mini-game is played when the player earns below-average points in the quiz game, so that after playing the mini-game, the points on one of the player's weak aspects will increase; there is also an information menu containing brief information about the financial, social, emotional, physical, and sexuality aspects. A sketch of this weak-aspect rule is given below.
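As a purely illustrative sketch (not the actual Teen Society implementation), the logic of detecting a weak aspect after the quiz and boosting it through its mini-game could look like this; the aspect names, the threshold of 30 points, and the bonus value are assumptions for the example.

THRESHOLD = 30  # minimum average points per aspect assumed for this sketch

def weakest_aspects(scores, threshold=THRESHOLD):
    # Return the aspects whose points fall below the threshold, weakest first.
    weak = sorted((points, name) for name, points in scores.items() if points < threshold)
    return [name for _, name in weak]

def play_mini_game(scores, aspect, bonus=10):
    # Playing the mini-game tied to a weak aspect raises that aspect's points.
    scores[aspect] = scores[aspect] + bonus
    return scores

aspects = {"financial": 35, "social": 22, "emotional": 40, "physical": 31, "sexuality": 28}
for aspect in weakest_aspects(aspects):
    aspects = play_mini_game(aspects, aspect)
print(aspects)  # the social and sexuality aspects have been raised by their mini-games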
B. RESULTS
Making the game requires assets. Assets consist of all materials used in game making, such as backgrounds, buttons, and sounds. The game design results are as follows:
Game Quiz
In the quiz game, the dominant colors chosen are yellow and orange, which convey joy and a tolerant, investigative, and prominent character [7]. Figure 2 shows the quiz game, which consists of 30 questions that appear randomly. The player's job is to answer the questions while balancing the five aspect point values; the points earned should not fall below an average of 30.
Social Games
The social game illustrates the concept of mutual help; the game takes place on a highway. The player's job is to avoid various obstacles on the road by pressing the left and right arrow buttons. Figure 3 shows the social game display.
Financial Games
The financial game explains the concept of saving; the game display depicts the atmosphere of a store or supermarket. The player's job is to manage the money he has in order to save.
Physical Games
The physical game explains the concept of choosing healthy foods and avoiding fast food. The display of this game depicts the player shopping for food and drinks. The player's job is to avoid fast food: when the player presses the button for healthy food, the points increase, whereas pressing the fast-food button brings up a warning and the points are reduced. Figure 5 shows the physical game display.
Emotion Games
The emotion game addresses how players tend to manage their emotions. This game takes the form of a quiz containing examples of cases related to teenage emotions, and it displays a pop-up with tips and tricks according to the selected answer. The player's job is to answer all the questions, after which the player learns the assessment through the stars he earns. Figure 6 shows the emotion game display.
Sexuality Games
The sexuality game explains the concept of sexuality in adolescents based on the norms prevailing in society. This game takes the form of a quiz containing examples of cases relating to teenage sexuality, and it displays a pop-up with tips and tricks according to the selected answer. The player's job is to answer all the questions, after which the player learns the assessment through the stars he earns.
V. CONCLUSIONS
Based on this research, it can be concluded that the design of the Teen Society game requires the stages of designing the game concept, asset creation, programming, and game testing.
A nationwide study of acquired C1-inhibitor deficiency in France
Abstract Acquired angioedema (AAE) due to C1-inhibitor (C1INH) deficiency is rare. Treatment options for acute attacks are variable and used off-label. Successful treatment of the associated lymphoma with rituximab seems to prevent acute attacks in subjects with AAE. The aim of this study was to describe AAE manifestations, its associated diseases, and patients’ responses to treatments in a representative cohort. A retrospective nationwide study was conducted in France. The inclusion criteria were recurrent angioedema attacks and an acquired decrease in functional C1INH <50% of the reference value. A total of 92 cases were included, with a median age at onset of 62 years. Facial edema and abdominal pain were the most frequent symptoms. Fifteen patients were hospitalized in the intensive care unit because of laryngeal edema, and 1 patient died. Anti-C1INH antibodies were present in 43 patients. The associated diseases were primarily non-Hodgkin lymphoma (n = 44, with 24 splenic marginal zone lymphomas) and monoclonal gammopathy of undetermined significance (n = 24). Three patients had myeloma, 1 had amyloid light-chain (of immunoglobulin) (AL) amyloidosis, 1 patient had a bronchial adenocarcinoma, and 19 patients had no associated disease. Icatibant relieved the symptoms in all treated patients (n = 26), and plasma-derived C1INH concentrate in 19 of 21 treated patients. Six patients experienced thromboembolic events under tranexamic acid prophylaxis. Rituximab prevented angioedema in 27 of 34 patients as a monotherapy or in association with chemotherapy. Splenectomy controlled AAE in 7 patients treated for splenic marginal zone lymphoma. After a median follow-up of 4.2 years, angioedema was on remission in 52 patients. AAE cases are primarily associated with indolent lymphoma—especially splenic marginal zone lymphoma—and monoclonal gammopathy of undetermined significance but not with autoimmune diseases or other conditions. Icatibant and plasma-derived C1INH concentrate control attacks; splenectomy and immunochemotherapy prevent angioedema in lymphoma setting.
Introduction
Bradykinin-mediated angioedema manifests as recurrent edema of the mucosa, soft tissue skin, and abdominal wall, usually during 2 to 5 days. This condition can be life threatening when the edema occurs in the tongue or laryngeal tract and causes asphyxia. [1] The pathophysiological mechanisms result from the contact phase activation that is associated with kinin production caused by a deficiency in C1-inhibitor (C1INH), a serine protease inhibitor. Kallikrein (which induces bradykinin release from its precursor) and factor XII (which activates plasma kallikrein) are the primary targets for C1INH. In experimental studies, accumulation of kinins (primarily bradykinin) induces capillary vasodilatation via the activation of endothelial B1 and B2 receptors. [2] Bradykinin-mediated angioedema can be hereditary or acquired. The hereditary angioedemas (HAEs) are usually caused by autosomal dominant inheritance of SERPING1 gene mutations. SERPING1 gene mutations result in either low C1INH expression (type I HAE) or normal levels with reduced C1INH function (type II HAE). HAE with normal C1INH levels and function is less frequent and is associated with F12 gene mutations for 25% of the affected patients. [3] Acquired angioedema with normal C1INH levels and function is related to the use of angiotensin-converting enzyme inhibitors. [4] AAE that is associated with C1INH deficiency is rare, approximately 10 times more rare than the hereditary forms, which are estimated to occur in between 1/10,000 and 1/50,000 of the population. [5] The largest series describing AAE represented <50 cases. [6,7] A few clinical characteristics can help to distinguish AAE from HAE: in AAE, the disease typically develops after the fourth decade of life in patients with no familial history of angioedema, and with less frequent abdominal attacks. This condition is primarily associated with lymphoma and monoclonal gammopathy. [6][7][8] Some cases are described with cancer or autoimmune conditions [7,8] ; in approximately 15% of cases, no associated condition was identified. [7] A decrease of the functional C1INH level <50% of the reference value is commonly used to define the disease. [7,9] Decreased levels of C4 and CH50 are regularly observed. C1q is also frequently decreased in AAE but is normal in HAE. A distinction between 2 subtypes has been suggested: one is characterized by C1INH consumption and is frequently associated with lymphoproliferative diseases, whereas the other is characterized by anti-C1INH antibodies and is thought to have an autoimmune mechanism. [10] However, the relevance of this distinction is questioned, as AAE with anti-C1INH antibodies is also associated with monoclonal gammopathy and lymphoma. [7,8] Treatments for angioedema attacks in AAE setting are used offlabel. Plasma-derived C1INH concentrate (pdC1INH) efficiently treats attacks; however, some failures have been noted and suspected to be due to C1INH consumption. [11] Icatibant, a competitive antagonist of the endothelial bradykinin B2 receptor, was reported to be effective in this setting in a small study. [12] For prevention of angioedema attacks, patients with AAE exhibit a better response to antifibrinolytics than those with HAE, [13,14] whereas the efficacy of attenuated androgens seems lower for this indication. [7] Treatment of the underlying lymphoma with rituximab can prevent angioedema, particularly in AAE with anti-C1INH antibodies. 
[15][16][17][18][19][20][21][22] We conducted this retrospective study to characterize AAE manifestations, to describe its associated diseases, and to observe the responses of angioedema to treatment.
Design and setting
We conducted a retrospective study of AAE in France. All the procedures were performed in accordance with the principles expressed in the Declaration of Helsinki. Our institutional review board (Ile-de-France Committee no. 10) stated that all data collection and processing methods fulfilled these requirements. According to French legislation, no written informed consent of patients was required.
Participant inclusion and exclusion criteria
Our inclusion criteria were as follows: recurrent angioedema attacks, defined as cutaneous or mucosal edema that was resistant to antihistaminic or corticosteroid administration, first occurring after 40 years; and a decrease in functional C1INH <50% of the reference value, with decreased C1q and/or anti-C1INH antibodies. Patients with hereditary forms were excluded. Data concerning patients with asymptomatic decreases in functional C1INH were analyzed separately.
Data collection
Data collection extended from September 2013 to March 2015, and patients were referred through the immunology laboratories of Grenoble University Hospital and Georges Pompidou European Hospital in Paris, which are national reference laboratories for C1INH biology. We then contacted the clinicians to study the medical files.
First attack duration and attack frequency at first medical report were recorded. A cumulative record of different localizations of attacks presented over time by patients was established. If a disease was associated, the clinical, biological, and histological characteristics at diagnosis were recorded. AAE and its associated diseases were considered concomitant if the diagnosis delay was <4 months.
All samples were studied in immunology laboratory of Grenoble University Hospital or in immunology laboratory of Georges Pompidou European Hospital in Paris. The serum protein concentrations of C1INH, C4, and C1q were assayed by nephelometry (Siemens, Marburg, Germany). The complement hemolytic activity (CH50) was determined. Plasma C1INH function was assessed as the residual esterase activity in the plasma samples after incubation with the C1s protease. C1INH function was assayed as described by Drouet et al, [23] or determined by chromogenic assay (Technochrom C1-inhibitor, Technoclone GmbH, Vienna, Austria). To quantify the presence of anti-C1INH autoantibodies, a slightly modified version of a C1INH-binding enzyme-linked immunosorbent assay was used. [24] Isotype of C1INH antibodies was determined, but light chain component is not determined routinely.
Criteria of response to treatment
We considered the response of acute attacks of angioedema to specific treatment (tranexamic acid [TA], icatibant, or pdC1INH) as reported in medical files; we defined response as a duration of the acute attack of <24 hours after administration of treatment. For specific preventive treatments (TA and danazol) and treatment of the associated disease, we defined response as no attacks or a decrease in attack frequency by >50% during the 6 months following administration or introduction of treatment. The side effects data were also collected. Disease status at the last available visit was assessed. Angioedema remission was defined as no attack in the previous 6 months. Biological remission was defined as a return to normal of C1INH function. The status of associated diseases, assessed by the clinician in the medical file, was recorded as complete remission or active disease, which means stability or progression.
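For illustration only, the two response definitions above can be written as simple predicates; the helper names and example counts below are hypothetical and not part of the study's analysis.

def acute_response(attack_duration_hours):
    # An acute attack is counted as a response if it lasts <24 hours after treatment.
    return attack_duration_hours < 24.0

def preventive_response(attacks_before, attacks_after):
    # Counts are attacks per 6-month window before and after starting the treatment:
    # response = no attacks, or a decrease in attack frequency by more than 50%.
    if attacks_after == 0:
        return True
    return attacks_before > 0 and attacks_after < 0.5 * attacks_before

print(acute_response(10.0))        # True: attack resolved within 24 hours
print(preventive_response(8, 3))   # True: frequency fell by more than 50%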
Statistical analysis
We used StatView software (SAS Institute, Inc, Cary, NC, copyright 1992-1998). Median values are reported with the interquartile range (IQR) and mean values with the standard deviation. For group comparisons, we used the Pearson χ2 test or analysis of variance, as indicated. For remission survival assessments, we used the Kaplan-Meier method and the log-rank (Mantel-Cox) test. P values <0.05 were considered statistically significant.
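A hedged sketch of the attack-free-survival comparison reported later in the paper, using the open-source lifelines package rather than StatView; the follow-up durations and event indicators below are invented placeholders.

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Years of follow-up until first attack (event = 1) or censoring (event = 0); placeholder values.
t_no_ab, e_no_ab = [1.2, 3.5, 4.0, 6.1, 2.2], [0, 1, 0, 0, 1]      # AAE without anti-C1INH antibodies
t_with_ab, e_with_ab = [0.8, 1.5, 2.0, 3.1, 1.1], [1, 1, 0, 1, 1]  # AAE with anti-C1INH antibodies

kmf = KaplanMeierFitter()
kmf.fit(t_no_ab, event_observed=e_no_ab, label="no anti-C1INH antibodies")
print(kmf.median_survival_time_)

result = logrank_test(t_no_ab, t_with_ab, event_observed_A=e_no_ab, event_observed_B=e_with_ab)
print("log-rank p = %.3f" % result.p_value)  # P < 0.05 considered significant, as in the study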
Results
One hundred forty-four medical files were studied: 94 had been referred by the immunology laboratory of Grenoble University Hospital (where 540 patients with HAE are followed) and 50 by the immunology laboratory of Georges Pompidou European Hospital in Paris. Fifty-two patients were excluded: 1 because of SERPING1 mutation, 33 because of lack of clinical information, and 18 patients who had asymptomatic C1INH decrease; however, biological characteristics and associated diseases of those asymptomatic patients were collected and are reported further separately. We thus included 92 patients; 56 were women (60%). The observational period extended from January 1991 to
Characteristics of angioedema
The median age at first manifestation was 62 years (IQR = 18), and AAE diagnoses were made after a median delay of 10 months (IQR = 23).
The localizations of angioedema acute attacks are described in Fig. 1. The most frequent manifestations were facial angioedema (75% of total cohort) and abdominal pain (60%), followed by extremities (48%), larynx (43%), tongue (32%), and genital organs (18%). Three patients had only abdominal attacks without any peripheral angioedema. Fifteen patients were admitted into the intensive care unit because of asphyxia, and 1 patient was administered an unnecessary surgery because of an abdominal attack. Most of the patients (70%) experienced attacks lasting 24 to 72 hours; attacks lasted >72 hours in 18% of patients and <24 hours in 12% of patients. Thirty-nine percent of the patients had >1 attack per month.
The median biological values at diagnosis are presented in Table 1. C1q was low in all but 8 patients. All of the patients were tested for anti-C1INH antibodies, which were present in 43 patients (47%). Isotypes of anti-C1INH antibodies and associated disease are described in Table 2.
We observed significant differences in sex, angioedema localization, and associated conditions according to the presence or absence of anti-C1INH antibodies ( Table 3). AAE without anti-C1INH antibodies was more frequently observed in females, associated with lymphoid malignancy, and the C1q antigen levels were more frequently low. AAE with anti-C1INH antibodies was associated with more frequent attacks in the extremities, genital organs, and larynx, and more frequently associated with monoclonal gammopathy and idiopathic AAE. No statistical difference in clinical characteristics of angioedema was observed according to sex or presence of an associated disease (Fig. 1).
Associated diseases
In 73 patients, an associated disease was identified (Table 4). No significant differences were observed in the occurrence or types of associated diseases according to date of diagnosis (data not shown).
At diagnosis, the lactate dehydrogenase (LDH) value, available for 27 patients, was within the normal range in 16 cases and was 1.5 times above the reference value in 11 cases. The mean β2-microglobulin value was 2.29 mg/L (reference value <2.50 mg/L). Serologic tests for hepatitis C virus and human immunodeficiency virus were negative in all patients.
Lymphoid malignancy and angioedema diagnoses were concomitant in 25 patients, whereas lymphoid malignancy diagnosis preceded angioedema manifestations in 8 patients, with a median delay of 1.1 years (IQR = 2; range, 0.4-6.2), or followed angioedema in 11 patients, after a median delay of 1 year (IQR = 1.6; range, 0.4-5.0). The clinical and biological AAE characteristics in this setting were similar to the general pattern ( Fig. 1; Table 1).
Monoclonal gammopathy.
A monoclonal gammopathy was associated with AAE in 28 patients, which included 24 cases of monoclonal gammopathy of undetermined significance (MGUS), with a mean immunoglobulin value of 2.6 g/L (range, 1.0-13.0 g/L); 3 immunoglobulin G (IgG) myeloma cases; and 1 AL IgG amyloidosis case. Anti-C1INH antibodies were present in 17 patients (Table 2). Monoclonal gammopathy and angioedema diagnoses were concomitant in 11 patients, whereas monoclonal gammopathy was diagnosed first in 6 patients, with a median delay of 1.0 year (IQR = 2.1; range, 0.6-8.5), or after angioedema in 11 patients, after a median delay of 3.4 years (IQR = 2.5; range, 0.6-8.5). The clinical and biological AAE characteristics in this setting were similar to the general pattern (Fig. 1; Table 1).
No associated disease.
Nineteen patients never exhibited an associated condition, in spite of a median follow-up of 1.8 years (IQR = 4.4). Among these 19 patients, a total body computed tomography scan had been performed and was normal in 15, and a bone marrow biopsy had been performed and was normal in 4. Anti-C1INH antibodies were detected in 16 cases ( Table 2). The clinical and biological AAE characteristics in this setting were similar to the general pattern, except C1q antigen value, higher than in the total cohort and other subgroups ( Fig. 1; Table 1).
3.3. Treatment
3.3.1. Specific treatment of angioedema. Responses of angioedema to treatment of acute attacks and to preventive treatment are detailed in Table 6.
A response to TA, which was used to treat moderate attacks, was observed in 63% of patients. pdC1INH, at standard dose of 20 UI/kg of body weight, was efficient in 90% of patients; a failure was observed in 2 patients, 1 of whom had anti-C1INH antibodies. Icatibant consistently reduced the attack durations in all 26 treated patients. No side effects were reported after those treatments.
TA and danazol were effective in preventing acute attacks, with a response occurring in 76% and 75% of the patients, respectively. The TA treatment was complicated by venous thromboembolic events in 6 patients. Among these patients, 3 displayed predisposing factors, which included antithrombin III deficiency, antiphospholipid syndrome, and a progressive diffuse large B-cell lymphoma. The danazol treatment was complicated in 5 patients.
3.3.2. Associated-disease treatments and efficacy on angioedema. Rituximab was administered to 34 patients and was indicated for lymphoma (n = 10), frequent angioedema attacks (n = 14), or both (n = 10). The responses of AAE to rituximab, according to associated diseases and AAE subtypes, are described in Fig. 2. A response was observed in 21 of 25 patients with lymphoid malignancy, in 5 of 7 patients with MGUS, and in 1 of 2 patients with AAE with anti-C1INH antibodies, without associated disease. Overall, a response was observed in 27 of 34 patients (79%), including 19 of 22 (86%) patients with AAE without anti-C1INH antibodies and 8 of 12 (67%) patients with AAE with anti-C1INH antibodies. In the latter subgroup, 1 response to rituximab was observed among 4 subjects with IgG anti-C1INH antibodies AAE, 2 among 2 with immunoglobulin A anti-C1INH antibodies AAE, and 5 among 6 with immunoglobulin M anti-C1INH antibodies AAE ( Table 2). Relapse of angioedema occurred in 9 cases, after a mean delay of 17 months.
Apart from the rituximab treatment, 4 lymphoproliferative disorders were treated with chemotherapy alone, which resulted in a response of AAE in 3 patients, and 7 patients with SMZL were treated with a splenectomy, with a response of AAE in all cases (Table 5). Myeloma was treated with chemotherapy in all 3 patients, and a response of AAE was observed in 2 patients; 1 patient was lost to follow-up. Cyclophosphamide and dexamethasone controlled the amyloidosis progression and reduced the AAE attack frequency. Treatment for rheumatoid arthritis with a combination of corticosteroids, hydroxychloroquine, and methotrexate prevented angioedema attacks in 1 patient, whereas methotrexate and then azathioprine prevented them in another patient. Hydroxychloroquine was used in 3 other patients and prevented angioedema attacks in 1 patient treated for a cutaneous lupus, and in 1 patient with AAE who displayed anti-C1INH antibodies, without any other disease.
Follow-up
Patients were followed for a median duration of 4.2 years (IQR = 6.8) after AAE diagnosis. Only 7 patients were lost to follow-up. Four patients died from the following: laryngeal angioedema revealing AAE with anti-C1INH antibodies (n = 1), lymphoma progression (n = 2), or intracerebral hemorrhage (n = 1).
At the last visit, angioedema was in remission in 52 patients: 31 of them were treatment-free and 21 patients received specific preventive treatment (TA or danazol). The associated disease, identified in 41 of those patients, was in complete remission in 28 patients and active in 13 patients. Twenty-nine patients had active AAE disease at their last visit: 14 were under specific preventive treatment, 11 received icatibant or pdC1INH for occasional attacks, and 4 were treatment-free. The associated disease, identified in 22 of those patients, was in complete remission in 9 patients and active in 13. No statistical link was found between the remission of associated conditions and AAE outcomes (data not shown). AAE without anti-C1INH antibodies was associated with a better attack-free survival than AAE with anti-C1INH antibodies (Fig. 3).
Table 1. Biological characteristics of patients with acquired angioedema in the general cohort and according to associated disease.
The biological data were examined at last visit for 54 patients, and a return to normal C1INH function was observed in only 8 patients, who all were in clinical remission. Among 46 patients with a persistent decrease in C1INH function at last visit, 25 had clinical remission of angioedema, including 12 treatment-free patients. Anti-C1INH antibodies were still present in 26 patients, 12 of whom had clinical remission of angioedema.
Characteristics of asymptomatic C1INH deficiency
Eighteen individuals with decreased C1INH function <50% of the reference value were asymptomatic, despite a median follow-up of 7.8 years (IQR = 8.7). In those patients, the decrease in C1INH was revealed by complement fraction C4 consumption in the absence of cryoglobulinemia; none of these patients displayed anti-C1INH antibodies. The median age at first observation was 61 years (IQR = 11.5). Incidental discovery was made in the settings of a lymphoma (n = 12), MGUS (n = 3), common variable immunodeficiency (n = 1), breast cancer (n = 1), and breast cancer associated with MGUS (n = 1). Their biological patterns were ...
Table 2 (continued): anti-C1INH antibody isotypes and associated diseases of individual patients.
Discussion
This retrospective study describes the largest cohort of subjects with AAE due to C1INH deficiency. Our observations confirm previous results from smaller studies and allow for the description of new insights. Although no national database for bradykinin angioedema exists in France, we could estimate the frequency of AAE as 11% of the 610 patients followed in Grenoble immunology laboratory, which confirms previous reports. [7,9] AAE occurs primarily during the sixth decade of age, which clearly differentiates this condition from hereditary forms. [5] Acute attacks of AAE are primarily localized to the face and abdomen. The latter can lead to a misdiagnosis; however, only 3 patients had isolated abdominal attacks, which likely explains the short diagnosis delay compared with HAE. [5,25] Half of our patients experienced potentially life-threatening attacks because of a laryngeal or tongue localization, 1 had unnecessary surgery for abdominal attack, and 1 patient died of laryngeal attack, which confirms the need to diagnose and properly prevent attacks.
Our work also allowed us to identify 18 patients displaying an asymptomatic decrease in C1INH function, which was previously described in 6 patients with lymphoid disease. [26,27] These cases were of asymptomatic C1INH deficiencies, and our description of clinical remission of angioedema without normalization of C1INH function raises some questions regarding the mechanisms that induce angioedema. Activation of the contact phase is crucial in angioedema attacks. [28] Even if functional C1INH is very low, the activation of B1 and B2 receptors may be controlled. Angioedema severity depends on those endothelial receptor ligands, which are essentially produced by the kinin-forming system. A recent assay was developed to evaluate C1INH function using contact-phase proteases. [29] The hydroxychloroquine efficacy in 3 of our patients, as described previously, [30] supports the hypothesis of a high kinin-forming capacity in clinical AAE expression. When comparing subtypes of AAE in our cohort, significant clinical differences were observed. AAE without anti-C1INH antibodies was associated with indolent lymphoid malignancy and with lower C1q values, suggesting that C1INH consumption occurred through C1 activation in the setting of high-tumor-mass lymphoid malignancies, despite low LDH and β2-microglobulin values. Concerning the immunological function of anti-C1INH antibodies, hypotheses can be raised, such as inhibition of the enzymatic activity or enhanced clearance of the molecule; however, to date, no functional tests are available in France.
Table 4. Description of our cohort and comparison with previous series of the literature of acquired angioedema with C1-inhibitor deficiency (columns: Our cohort; Frémeaux-Bacchi et al [8]; Cicardi et al [7]; Castelli et al [6]).
Table 5. Clinical, immunological, and biological characteristics of lymphoid malignancies associated with acquired angioedema.
Diagnosis
We describe a striking association of AAE with SMZL, which was diagnosed in 24 patients in this cohort. The association of AAE with indolent B-cell lymphoma is well known; however, only a few cases of SMZL have been previously reported. [6,8,15,18,[31][32][33][34][35] A recent series describing lymphomas associated with AAE highlights this feature. [36] SMZL is a rare lymphoid malignancy that affects patients in their sixth decade of age. [37] This association seems significant because treatment for SMZL frequently induces an AAE response.
Autoimmune diseases were reported in only 7 patients in our cohort, and all of these patients displayed MGUS or lymphoid malignancy; only 1 patient had cancer. Reports regarding the association of autoimmune diseases and solid tumors with AAE are scarce, [6,8,38,39] and might be fortuitous. Interestingly, we describe 3 cases of AAE associated with myeloma, which is quite rare in the literature. [40] Among this cohort, B-cell lymphoproliferation appeared to be the primary associated feature of AAE, with or without anti-C1INH antibodies. However, the isotype of the monoclonal component and of the anti-C1INH antibodies did not strictly correlate.
Based on these results, we recommend that the diagnostic workup of AAE should include a complete blood count, circulating lymphocyte immunophenotyping, LDH, and plasmatic and urinary ...
Table 6. Response of angioedema to specific treatment.
Regarding the treatment response, despite a bias due to the retrospective nature of this study, our results allow us to reconsider therapeutic options and to complete the recent consensus report. [41] The response to TA for nonsevere attacks was moderate; in contrast, icatibant and pdC1INH were very effective options for severe attacks in AAE. Ecallantide is not approved for use in France, so we were not able to study its efficacy on AAE attacks. Prevention of attacks with antifibrinolytic agents and attenuated androgens was similar in our cohort, prompting us to recommend providing a preventive treatment to patients but avoiding antifibrinolytics if thromboembolic risk factors are present, since thromboembolic events occurred in 6 patients. Due to the potential severity of attacks, patients should be prescribed icatibant and C1INH concentrates in all cases and should be properly educated on autoinjections.
Treatment of associated diseases, especially the treatment of lymphoid malignancy, prevented acute attacks in most cases, regardless of the treatment options (splenectomy, rituximab, or immunochemotherapy). Efficacy of splenectomy had been reported in 1 case report. [34] Rituximab had already been reported to reduce the frequency of angioedema attacks, particularly in the presence of anti-C1INH antibodies and in lymphoma settings. [15][16][17][18][19][20][21][22]35] In our cohort, rituximab prevented angioedema attacks in 79% of the 34 treated patients, with a slightly better response in patients with lymphoid malignancies and no anti-C1INH antibodies. In patients with SMZL, splenectomy or anti-CD20 monotherapy-especially for patients who are reluctant to undergo surgery-is an interesting option for angioedema prevention. A response to rituximab was also observed in the MGUS-associated AAE, even 2 patients with immunoglobulin A and IgG monoclonal component, which is quite unexpected. These results support the role of B cells in the pathophysiological mechanisms that underlie AAE; anti-C1INH might thus be borne by a monoclonal component or be produced by polyclonal B cells, associated with a clonal lymphoproliferation.
We recommend regular monitoring of C1INH function, complement parameters, and anti-C1INH antibodies under treatment, in order to better evaluate the correlation between clinical and biological remission.
Conclusion
This study confirms the potential severity of AAE with C1INH deficiency, which must be properly managed to prevent life-threatening or disabling attacks. Considering the good response to icatibant and pdC1INH with few side effects, these treatments could be used in severe attacks. B-cell lymphoid malignancies (particularly SMZL) and MGUS are strongly associated with acquired C1INH deficiency. Treatment of the associated disease controls AAE manifestations. Rituximab could be proposed; however, we must determine the precise therapeutic scheme in prospective studies.
UDC 539.3 PECULIARITIES OF WAVE PROPAGATION PROCESSES IN POROELASTIC MEDIA
This paper presents the peculiarities of wave propagation processes in porous media; the parameters that determine the properties of fluid-saturated materials; the basic methods for the solution of poroelastic problems, one of which is the Boundary Integral Equation Method; and the boundary integral equations together with graphs of the fundamental-solution functions versus the frequency parameter.
Introduction.
Many natural and man-made materials have a porous structure, in particular fluid- or gas-saturated soils and rocks, as well as porous building materials: timber, sandstone, brick, and fillers for lightweight concrete. The investigation of wave propagation processes in porous bodies and media is therefore of practical interest. The presence of a filler changes the behavior of such materials, so the laws of the theory of elasticity cannot be used to study wave propagation in saturated materials.
1. Basic methods. At the end of the eighteenth century, serious problems in the construction of dams and dikes, and the need to understand the interaction and joint behavior of water and the solid skeleton, motivated the first descriptions of porous media. In modern civil engineering, soil-water processes are described on the basis of the theory of porous media, which combines the theory of mixtures with the concept of volume fractions. The theory of mixtures rests on the mechanics of continuous media and treats multicomponent materials with different physical properties.
Mathematical modeling of multicomponent fluid- or gas-saturated porous media began in the 1930s. The works of Y.I. Frenkel [2] and M.A. Biot [3,4] were the first in this direction; they paid great attention to models of dissipation in porous media and to methods of accounting for it in the equilibrium equations. Biot's works constitute the linear theory of an effective two-phase medium and are regarded as the basic, classical theory for solving such problems. In them, a two-phase model of the porous fluid-saturated medium was proposed, consisting of the porous solid and the fluid that fills its pores. Additional parameters were introduced to account for the interaction of these phases: the porosity, the fluid viscosity, the permeability, the Biot coefficient of effective stress, the mass densities, the shear modulus, and the bulk modulus of the porous material.
Procedures for determining these parameters are presented in [4,5]. Analyses of the properties of porous materials are given in [6,7,8].
In the analysis of the stress-strain state of porous structures, the pores are assumed to be distributed uniformly within the body. From the point of view of continuum mechanics, a fluid- or gas-saturated porous region is essentially a two-phase continuous medium: the elements of the porous solid belong to the first phase, and the elements of the pore-filling fluid belong to the second phase. This must be taken into account when studying the behavior of porous media, which is governed by the differences in the mechanical properties of the two phases. Dividing all elements into two classes is also necessary because the differences in behavior among elements of one phase are less significant than those between elements of different phases. It is assumed that the elementary volume is filled by two continuous media that can interact with each other. A fundamental characteristic of porous media is the propagation of three different body waves: a fast longitudinal wave, a second, slow longitudinal wave, and a third, slow transverse wave.
In the 1990s, scientific works began to appear devoted to poroelastic problems and to the application of the Boundary Element Method and the Boundary Integral Equation Method to their solution. Two-dimensional poroelastic formulations were presented almost simultaneously in [9] and [10]. The equations in [9] were written in terms of the solid displacements and stresses and the fluid pressure, whereas the boundary integral equations in [10] involved both dynamic and kinematic parameters.
One of the methods now used for solving such systems of differential equations is the Laplace transform combined with numerical inversion. This method was used to obtain fundamental solutions for poroelastic systems in [11,12], where a three-phase model was employed in which the porous skeleton is partially saturated by fluid and partially by gas. In [13], methods were presented for the numerical modeling of the dynamics of three-dimensional poroelastic bodies and for solving model problems of wave propagation in such bodies with different boundary conditions. The problem of elastic wave propagation in a porous region that is not fully saturated with fluid is treated in [14], where the differential equations for the unsaturated space are given in the three-dimensional Laplace-transform domain. The work [15] presents the fundamental solutions for the singular boundary integral equations of poroelasticity. Some aspects of linear dynamic poroelasticity of fluid-saturated bodies are discussed in [16][17][18][19][20]. Despite the significant number of published variants of the singular boundary integral equations, only isolated BE solutions of poroelastic problems are available, so questions in this direction remain topical.
2. Basic Relations. Since the components of the different phases in the porous elastic saturated medium have different densities, the total density (the total mass of the fluid-solid aggregate per unit volume) must be used in the calculations. It can be determined by the following expression [3]:

ρ = (1 − β) ρ_s + β ρ_f,

where β is the porosity of the porous solid and ρ_s and ρ_f are the mass densities of the solid and the fluid, respectively; it is assumed here that there is no relative motion between the solid and the fluid. Another peculiarity of the porous fluid-saturated medium, proposed by M.A. Biot [3,4] and analyzed in [6], is the set of poroelastic material coefficients Q, R, B, and M, which are expressed in terms of the porosity β, the Biot coefficient of effective stress α, and the drained and undrained bulk moduli of elasticity K and K_u, the coefficient α itself being determined from these elastic moduli. The bulk moduli of elasticity are determined from three types of laboratory tests (the drained test, the unjacketed test, and the undrained test) [6]:

K = V ΔP / ΔV,  K_u = V_u ΔP / ΔV_u,

where V and V_u are the initial volumes of the drained and undrained rock samples, ΔP is the incremental load applied to the rock as a pressure, and ΔV and ΔV_u are the volume changes of the drained and undrained samples. The algorithmic basis of the BEM is formed by the boundary analogues of Somigliana's formulas for the solid displacements and the fluid pressure, which under zero body-force conditions can be written as boundary integral equations [9]. In these equations, c is a coefficient equal to 0.5 at points where the boundary is smooth; u_i and U_i are displacements, t_i are the stresses in the solid, and τ is the fluid pressure; n is the normal to the boundary; λ_m are the wave numbers obtained as the roots of the characteristic equation; and K_α(iλ_m r) are modified Bessel functions. The corresponding fundamental solutions for a purely elastic region involve the density ρ of the elastic material and the velocities C_k of elastic wave propagation. The figures present the graphs of the fundamental-solution functions: the displacements u11, u12, u22 (fig. 1, 2, 3) and the stresses t11, t12, t22 (fig. 4, 5, 6) versus the frequency parameter ωr/C1. The curves labeled 1 correspond to the functions for the elastic medium, and the curves labeled 2 to those for the poroelastic fluid-saturated medium.
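As a small numerical illustration of these basic relations, the sketch below computes the total density of a fluid-saturated sample from the phase densities and the porosity, and estimates the drained and undrained bulk moduli from laboratory test data using the relations reconstructed above; all numerical values are invented for demonstration and do not come from the paper.

```python
def total_density(beta: float, rho_solid: float, rho_fluid: float) -> float:
    """Total density of the fluid-solid aggregate: rho = (1 - beta) * rho_s + beta * rho_f."""
    return (1.0 - beta) * rho_solid + beta * rho_fluid


def bulk_modulus(v_initial: float, delta_p: float, delta_v: float) -> float:
    """Bulk modulus estimated from a compression test: K = V * dP / dV."""
    return v_initial * delta_p / delta_v


# Illustrative values for a water-saturated, sandstone-like material.
rho = total_density(beta=0.25, rho_solid=2650.0, rho_fluid=1000.0)           # kg/m^3
K_drained = bulk_modulus(v_initial=1.0e-4, delta_p=2.0e6, delta_v=2.5e-8)    # Pa
K_undrained = bulk_modulus(v_initial=1.0e-4, delta_p=2.0e6, delta_v=1.6e-8)  # Pa

print(f"total density     = {rho:.1f} kg/m^3")      # 2237.5
print(f"drained modulus   = {K_drained:.3e} Pa")    # 8.000e+09
print(f"undrained modulus = {K_undrained:.3e} Pa")  # 1.250e+10
```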
Conclusion. The graphs of the weighting displacement and stress field functions for the elastic and poroelastic regions have different characters and different values depending on the frequency parameter, because a body with gas- or fluid-saturated pores differs from a continuous homogeneous elastic medium and must be modeled with a two-phase or three-phase model and poroelastic equations containing additional poroelastic parameters. The figures show that the graphs for the poroelastic region can be gradually brought closer to their elastic analogues by changing some parameters. In figures 1-3, the parameter R was varied, namely gradually increased by an order of magnitude at a time (curves 3, 4, 5). For the graphs of the generalized-derivative functions in figures 4-6, the parameter Q was changed: a single increase by one order of magnitude was sufficient (curves 3).
PECULIARITIES OF WAVE PROPAGATION PROCESSES IN POROELASTIC MEDIA
When analyzing wave propagation processes in fluid-saturated porous media, unlike in the theory of elasticity, the two-phase model of the medium proposed by Biot should be applied, in which the elements of the porous solid belong to the first phase and the elements of the pore-filling fluid belong to the second phase. Sometimes a three-phase model is used for solving problems, in which the porous skeleton is partially saturated by fluid and partially by gas. For the elastic porous medium, parameters such as the porosity, the fluid viscosity, the permeability, the Biot coefficient of effective stress, the shear modulus and the bulk modulus, the mass densities, and the total density of the porous material are introduced. A fundamental characteristic of porous media is the propagation of three different body waves: a fast longitudinal wave, a second, slow longitudinal wave, and a third, slow transverse wave. One of the methods used for solving problems of poroelasticity is the Boundary Integral Equation Method, whose algorithmic basis is formed by the boundary analogues of Somigliana's formulas for the solid displacements and the fluid pressure. The boundary integral equations and the fundamental solutions entering the poroelastic equations differ from their counterparts in the theory of elasticity, because a body with fluid-saturated pores differs from a continuous homogeneous elastic medium. The figures show that the graphs for the poroelastic region can be gradually brought closer to their elastic analogues by changing some parameters. The greatest influence on the displacement functions is exerted by a change of the parameter R, namely its gradual increase by several orders of magnitude, whereas for the graphs of the generalized-derivative functions a single increase of the parameter Q by one order of magnitude is sufficient.
Development and researches of vacuum gauges with sensitive elements created by MEMS technology
The article is devoted to the development and research of sensitive elements, transducers, and vacuum gauges created by MEMS technology and intended for measuring absolute pressure in a wide range from 10⁻³ to 10⁵ Pa.
Currently, in the sphere of low absolute pressure (vacuum) measurement there are a number of problems, connected first of all with the commercial unavailability of domestic high-precision instruments for measuring low absolute pressure. The following major problems can be outlined.
1. Lack of a domestic comparison standard for carrying out comparisons between the state primary standards GET 49-2016 and GET 101-2011, for inter-laboratory comparisons, and for international comparisons.
2. Lack of domestic serially produced high-precision measuring instruments conforming to the 1st and 2nd category standards of GOST 8.107-81. From 2008 to the present, only two domestically produced vacuum gauges have passed type-approval trials, and their metrological characteristics do not meet the requirements for 1st and 2nd category standards [1].
3. Sanction restrictions imposed by Western countries do not allow the procurement of a number of high-precision measuring instruments of foreign production.
To solve the mentioned problems, the D.I. Mendeleev Institute for Metrology proposed to develop domestic vacuum gauges meeting the following requirements: compact sensitive elements; low dependence on the type of gas; domestically produced components; low cost. Analysis showed that these requirements are most fully met by sensitive elements produced by the technology of microelectromechanical systems (MEMS). The results of research work carried out proactively in the department of state standards in the sphere of pressure measurement [1] allowed the D.I. Mendeleev Institute for Metrology to start, in 2017, the experimental design work «Research and development of high-precision deformation devices for measuring low absolute pressure in the range 1·10⁻³–1·10⁴ Pa, created by the technology of microelectromechanical systems».
It was decided to develop two types of prototype models: a vacuum gauge based on the resonance method of pressure measurement, and a vacuum gauge based on the membrane-capacitance method using compensation and compression. At the stage of compiling the technical design specification, the requirements for the metrological characteristics of the developed prototype models were formulated as follows (see Table 1).
Table 1 — Required metrological characteristics of the prototype models.
Prototype model type 1: range of measurements from 10 Pa to 10⁴ Pa; relative measurement error not more than ±(2…1) %.
Prototype model type 2: range of measurements from 10⁻³ Pa to 10 Pa; relative measurement error not more than ±(10…2) %.
The sensitive element of the vacuum gauge (Picture 1), whose principle of operation is based on measuring the rigidity of a gas spring (resonance type), is constructed as a plate on suspensions placed between fixed planes. Picture 1 — Schematic of the resonance-type sensitive element. The gas spring is formed by electrode planes No. 1 and No. 2 together with the movable silicon plate, as shown in Picture 2.
The resonance frequency of the plate is determined by the combined rigidity of its mechanical suspension and of the gas spring, ω₀² = (k_m + k_g)/m, where k_m is the mechanical rigidity of the plate suspension, k_g is the pneumatic rigidity of the gas spring, and m is the mass of the movable plate.
The electrical scheme of the resonance-type pressure transducer is shown in Picture 4.
Picture 4 — Electrical scheme of the resonance-type pressure transducer. In the general case, the parameters of the plate motion are described by the equation of forced damped oscillations, in which y = a·sin(ωt) is the functional relation of the plate movement, a is the amplitude of vibration, ω is the circular frequency of vibration, the damping coefficient describes the viscous friction of the gas environment, G is the dielectric permeability of the gas environment, ε₀ is the dielectric constant, and U is the electrical voltage on the electrodes.
The dependence of the resonance frequency of the plate motion on the gas pressure can also be obtained by the method of small displacements, by equating the amount of energy stored in the gas spring to the kinetic energy of the plate, where ρ is the density of the plate material. The dependence of the pressure on the squared frequency of the motion is linear, with a conversion coefficient K that can be calculated from the material density and the geometrical dimensions of the movable plate.
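To make the calibration idea concrete, the following sketch fits hypothetical (frequency, pressure) calibration points with a straight line in the P–f² plane to recover the conversion coefficient K; the numbers are invented for illustration and are not data from the prototype.

```python
import numpy as np

# Hypothetical calibration points: resonance frequency (Hz) and reference pressure (Pa).
freq_hz = np.array([1200.0, 1450.0, 1700.0, 1980.0, 2250.0])
pressure_pa = np.array([150.0, 410.0, 720.0, 1120.0, 1550.0])

# The gauge model assumes the pressure is a linear function of the squared frequency:
#   P = K * f**2 + P0
f_squared = freq_hz ** 2
K, P0 = np.polyfit(f_squared, pressure_pa, deg=1)
print(f"conversion coefficient K = {K:.3e} Pa/Hz^2, intercept P0 = {P0:.1f} Pa")


def pressure_from_frequency(f_hz: float) -> float:
    """Convert a measured resonance frequency into absolute pressure."""
    return K * f_hz ** 2 + P0


print(f"P(1500 Hz) = {pressure_from_frequency(1500.0):.1f} Pa")
```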
Picture 5 shows the experimental dependence of the pressure on the squared frequency obtained during the prototype model trials.
Picture 5 -Experimental dependence of pressure on squared vibration frequency
The approximating function, showing that the slope of the straight line equals 3.18·10⁻⁴, is also displayed on the diagram; this value coincides with the previously calculated value of the conversion coefficient K. The second type of sensitive element created by MEMS technology is based on measuring the gas pressure by electrostatic pressure compensation; its construction is a membrane with plane-parallel distributed electrodes (Picture 8).
Picture 8 — Construction of the compensation-type transducer
The membrane deformation created by the applied (measured) gas pressure is compensated by an electrostatic negative pressure, produced by supplying a constant voltage to the measuring electrode. The conversion formula is P = ε₀·U²/(2·h²), where ε₀ is the dielectric constant, h is the gap between the membrane and the plane of the measuring electrode, and U is the constant voltage at which the compensation of the membrane deformation takes place.
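A minimal sketch of this conversion, assuming the parallel-plate electrostatic pressure relation above; the gap and voltage values are illustrative placeholders, not the actual device geometry.

```python
EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m


def pressure_from_compensation_voltage(u_volts: float, gap_m: float) -> float:
    """Electrostatic pressure balancing the membrane deflection: P = eps0 * U^2 / (2 * h^2)."""
    return EPSILON_0 * u_volts ** 2 / (2.0 * gap_m ** 2)


# Illustrative numbers only: a 5 um electrode gap and a 10 V compensation voltage.
print(pressure_from_compensation_voltage(10.0, 5e-6))  # ~17.7 Pa
```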
The third type of sensitive element created by MEMS technology is similar in construction to the resonance-type element, with the difference that it measures not the rigidity of the gas spring but the amount of power loss (work against the friction force on the gas side) during the vibratory motion of the plate. The principle of operation of this type of sensitive element is illustrated in Picture 9. A composite plate of weight m on mechanical suspensions of rigidity k performs forced vibrations z(t) that are close to harmonic in form. During the motion, the plate surface experiences a force from the gas side that is proportional to the absolute gas pressure. To maintain a given amplitude of forced vibrations, the energy spent on work against the gas-side force must be replenished. The energy is replenished by an electrostatic actuator y(t), and the amount of replenished energy is proportional to the absolute gas pressure.
The results of the calibration of the vacuum gauge using the sensitive element that embodies the principle of measuring energy loss are shown in Picture 10.
Picture 10 — Dependence of energy loss on pressure. The x-axis shows the pressure in pascals, and the y-axis shows the energy loss in conventional units (%). It should be noted that the energy loss increases nonlinearly with increasing pressure.
Picture 11 shows a diagram which allows the sensitivity of the method (the first derivative of the calibration curve) to be estimated as a function of the absolute gas pressure.
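As a rough illustration of how such a sensitivity estimate can be obtained, the sketch below numerically differentiates a hypothetical energy-loss calibration curve; the tabulated points are invented and merely stand in for the measured curve of Picture 10.

```python
import numpy as np

# Hypothetical calibration curve: pressure (Pa) versus energy loss (conventional units, %).
pressure_pa = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
energy_loss = np.array([0.5, 1.2, 3.0, 6.5, 12.0, 18.0])

# Sensitivity of the method = first derivative of the calibration curve, d(loss)/dP.
sensitivity = np.gradient(energy_loss, pressure_pa)

for p, s in zip(pressure_pa, sensitivity):
    print(f"P = {p:6.1f} Pa  ->  sensitivity ~ {s:.3f} %/Pa")
```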
The external appearance of the vacuum gauge with transducers created by MEMS technology is shown in Picture 12. Picture 12 — Vacuum gauge of the resonance type. The results of the research lead to the conclusion that the developed models satisfy the requirements of the technical design specification and allow work to continue on the creation of serially produced vacuum gauges.
Biologically active secondary metabolites from white-rot fungi
In recent years, there has been a considerable rise in the production of novel metabolites derived from fungi compared to the ones originating from bacteria. These organic substances are utilized in various sectors such as farming, healthcare, and pharmaceutical. Since all dividing living cells contain primary metabolites, secondary metabolites are synthesized by utilizing intermediate compounds or by-products generated from the primary metabolic pathways. Secondary metabolites are not critical for the growth and development of an organism; however, they exhibit a variety of distinct biological characteristics. White-rot fungi are the only microorganisms able to decompose all wood components. Hence, they play an important role in both the carbon and nitrogen cycles by decomposing non-living organic substrates. They are ubiquitous in nature, particularly in hardwood (e.g., birch and aspen) forests. White-rot fungi, besides ligninolytic enzymes, produce different bioactive substances during their secondary metabolism including some compounds with antimicrobial and anticancer properties. Such properties could be of potential interest for the pharmaceutical industries. Considering the importance of the untapped biologically active secondary metabolites from white-rot fungi, the present paper reviews the secondary metabolites produced by white-rot fungi with different interesting bioactivities.
Introduction
Biologically active compounds are synthesized mostly by fungi, bacteria, archaea, and plants.These compounds possess different properties that make them suitable for various applications including drug development with anti-glycaemic, anticancer, antibiotic, antiviral, anti-inflammatory, enzyme inhibiting, hypercholesteremic, immunomodulator, immunosuppressant, cardiovascular, antithrombotic, antidiabetic, antihypertensive, neuropathic, and anti-infective characteristics for humans (Basit et al., 2021;Conrado et al., 2022;Kijpornyongpan et al., 2022).Starting from the 1940s, microorganisms have played a significant role in uncovering important sources of various natural products used in the agrochemical, cosmetic, pharmaceutical, and food industries (Baltz, 2019;Gakuubi et al., 2022).Because of their exceptional biological activities, these compounds have gained the attention of researchers in various fields, including those investigating natural product-derived medicines as well as chemists, biochemists, and microbiologists.In addition, metabolic engineers have recently sought to elucidate the regulation, pathways, and gene clusters needed to produce bioactive compounds efficiently in suitable host organisms, with the assistance of genome sequencing together with bioinformatics, transcriptomics, metabolomics, and proteomics (Tsukada et al., 2020;Kijpornyongpan et al., 2022).In this sense, microbial secondary metabolites are beneficial for human wellbeing, owing to their extensive usage in various biological processes spanning agriculture, medical sciences, food technology, and the chemical industry (Yadav, 2021).More specifically, among the different existing microorganisms, fungi have emerged as promising candidates for the discovery of novel biologically active compounds because of their varied pharmacological activities.
The advantageous effects of fungi on human health have been mainly associated with the abundance of various bioactive compounds, including carbohydrates, proteins, amino acids, unsaturated fatty acids, vitamins, and minerals (Gebreyohannes and Sbhatu, 2023) together with bioactive secondary metabolites.The therapeutic use of fungal species can be traced back to as early as 3000 BC.Thus, macrofungi such as Ganoderma lucidum (G.lucidum), Lentinus edodes (L.edodes), Fomes fomentarius, and Fomitopsis officinalis were used as remedies for different diseases in some East countries (Wasser, 2002).Medical professionals have been aware for over five thousand years regarding the presence of immune-enhancing and defensive attributes in fungal species (Ribka et al., 2021).In the literature, reports indicate that a significant number of approved medications are sourced from nature, with approximately 25% of the one million natural compounds examined showing biological activities.Among these compounds, around 60% originated from plants, while the remaining is derived from microbes.Notably, fungi contribute to approximately 42% of microbial resources, underscoring their significance in the exploration and identification of novel molecules (Bharatiya et al., 2021).Furthermore, until 2019, approximately 35% of the naturally derived products approved by the US Food and Drug Administration (FDA) contributed to the development of pharmaceuticals (Shankar and Sharma, 2022).Moreover, fungi are also able to synthesize various biologically active compounds such as pigments, dyes, antioxidants, nutraceuticals, dietary supplements, polysaccharides, and industrial enzymes (Al-Mousa et al., 2022a;Al-Mousa et al., 2022b;Hassane et al., 2022;Mohamed et al., 2022).These fungal products are not only crucial for functional food and nutrition but also serve as important sources of pharmacological/medicinal substances (Keller, et al., 2005;Hoeksma et al., 2019;Fukushima-Sakuno, 2020;Elhusseiny et al., 2021;Bhambri et al., 2022;Krupodorova et al., 2022).These intriguing scientific discoveries have garnered significant interest from researchers who are exploring the potential applications of these metabolites.
Metabolites are small and intermediate metabolism products that serve various purposes in organisms.These products are classified into primary and secondary metabolites.In this sense, primary metabolites are produced for growth, development, and survival and consist of amino acids, sugars, vitamins, lipids, nucleotides, and carbohydrates, which have critical duties in different metabolic processes including respiration and nutrient consumption.On the other hand, secondary metabolites are not integral components of the metabolic pathways; instead, they are synthesized as byproducts in terms of defense mechanisms and derived from primary metabolism.These compounds exhibit diverse biological functions and result from metabolic reactions that are non-essential for growth and reproduction of organisms.The production of secondary metabolites provides a competitive advantage to the organism by increasing the tolerance to environmental stresses and extreme conditions, and thereby indirectly influencing ecological dynamics (Devi and Krishnakumari, 2015;Daley et al., 2017;Thirumurugan et al., 2018;Conrado et al., 2022;Rodríguez-Berríos et al., 2023;Sharma et al., 2023)."Secondary metabolites" term was first introduced by the Nobel Prize laureate Albrecht Kossel in 1891 and the botanist Friedrich Czapek further created the term "secondary modifications" in his work related to plant nitrogen metabolism in the 1920s (Henriksen et al., 2022).Secondary metabolites are a diverse group of organic compounds primarily derived from various sources such as plants, fungi, and bacteria.These bioactive molecules are generally low molecular weight compounds (Molecular weight <1,500 Da) (Zerva et al., 2020a).They present scientific interest due to their multiple applications in industries (e.g., textile, functional food innovation, flavoring, glues, oil).Additionally, they hold promising potential for the development of novel pharmaceuticals, antibiotics, insecticides for pest control, and herbicides targeting unwanted plant growth (Devi and Krishnakumari, 2015;Devi et al., 2022;Shankar and Sharma, 2022).
The biosynthesis of fungal secondary bioactive metabolites is typically based on the mevalonic acid pathway, the acetate pathway, and carbohydrate/polysaccharide synthesis (Kundu, 2021).These fungal-derived bioactive compounds can be divided into high and low molecular weight.The former predominantly comprises polysaccharides and enzymes, while the latter encompasses terpenoids, phenols, and indoles, among others (Ziaja-Sołtys et al., 2022).Conrado et al. (2022) reported that out of the 500,000 secondary metabolites, approximately 70,000 are sourced from microorganisms.Among these compounds, around 33,500 exhibit bioactive properties, and about 47% of these bioactive compounds originate from fungal strains.Up to date, numerous fungal species, particularly filamentous fungi found within the basidiomycetes class, can be considered for an extensive range production of secondary metabolites with significant biological activities (Teoh et al., 2011;Patel and Goyal, 2012).Basidiomycetes are known for their ability to produce numerous secondary metabolites that exhibit antioxidant (Jayakumar et al., 2009), antimicrobial (Bala et al., 2011), antiinflammatory (Liu et al., 2015), antifungal (Sidorova and Voronina, 2019), and antiviral (Krupodorova et al., 2014) properties.Moreover, they can also produce cytotoxic compounds with the potential use as anticancer agents and immunomodulating polysaccharides.Additionally, some of these metabolites have hallucinogenic effects while others serve as sources for plant growth regulators or flavors (Prasher and Manju, 2019;Halabura et al., 2023).
In recent years, microorganisms belonging to basidiomycetes have become very promising for red (medical) biotechnology (Mizerska-Dudka et al., 2015) and cosmeceuticals (Zerva et al., 2020b).In this sense, white rot fungi are a prominent group within the phylum basidiomycota, which are saprotrophic organisms in the fungal kingdom, encompassing approximately 30%-32% of fungal diversity with an estimated 30,000 distinct species.These fungi can degrade all components of plant cell wall through various mechanisms including extracellular enzymatic processes as well as non-enzymatic ones such as reactive oxygen species (Kijpornyongpan et al., 2022).Due to the distinct characteristics of white rot fungi, these microorganisms and their extracellular enzymes that primarily include lignin peroxidases (LiPs, EC 1.11.1.14),manganese-dependent peroxidases (MnPs, EC 1.11.1.13),and laccases (benzenediol: oxygen oxidoreductases, EC 1.10.3)along with additional enzymes such as peroxidase-generating oxidases and mycelium-associated dehydrogenases (Martínez et al., 2005) have been recognized for their potential in various biotechnological applications.With the aid of these enzymes, white-rot fungi possess the capacity to break down intricate plant cell wall polymers such as cellulose, hemicellulose, and lignin.White-rot fungi represent the only known group of organisms that have evolved to effectively break down lignin into carbon dioxide (CO 2 ) and water (H 2 O) which contributes significantly to Earth's ecosystem (Floudas et al., 2012;Kundu, 2021;Llanos-López et al., 2023).
White rot fungi possess a significant capacity to produce numerous enzymes and secondary metabolites, exhibiting potential in the fields of nutrition, medicine, and degradation.These secondary metabolites, including terpenoids, polyphenols, sterols, flavonoids, alkaloids, derivatives of benzoic acid, quinolones, anthraquinones, and lactones possess bioactive characteristics (Jaszek et al., 2013;Jaszek et al., 2014;Fernando et al., 2016;Bogale, 2020;Kundu and Khan, 2021;Mahuri et al., 2023).Moreover, considering the economic significance and the intention of applications based on biologically active secondary metabolites, there has been a rising global interest in these compounds produced by white rot fungi.Thus, recognizing the great potential of biologically active secondary metabolites from white-rot fungi, this review focuses on exploring the secondary metabolites generated by them, emphasizing their fascinating bioactive properties.
Biologically active secondary metabolites produced by white-rot fungi
Secondary bioactive metabolites produced by white-rot fungi have significant potential for application in various sectors such as pharmaceutical production (Zheng et al., 2010), biobleaching processes in the pulp and paper industries (Jerusik, 2010), wastewater treatment approaches (Muszyńska et al., 2019), enhancing the digestibility of cellulose and lignin in animals (Yilkal, 2015), generation of renewable resources from lignocellulosic materials, and bioremediation technologies (Korcan et al., 2012;Contreras et al., 2023) (Figure 1, which outlines the general processing steps for the production of biologically active secondary metabolites from white-rot fungi). In the literature, several works have reported the production of biologically active secondary metabolites by white-rot fungi (Table 1, which lists the strain names, the bioactive compounds, and their benefits). Moreover, the chemical structures of some bioactive secondary metabolites from white-rot fungi are illustrated in Table 2. Considering this, the Schizophyllum genus has been an important white-rot fungal genus with the capability of producing bioactive secondary metabolites. A combined treatment of radiotherapy and sizofiran, a polysaccharide extract from the culture broth of Schizophyllum commune (S. commune), resulted in a significantly higher 5-year survival rate in the 90 patients who received it compared with the 82 patients treated with radiotherapy alone, so sizofiran demonstrated promising potential in this setting (Miyazaki et al., 1995). Tanimoto et al. (1996) extracted and identified a novel metabolite, schizostatin, from S. commune with the ability to inhibit rat liver microsomal squalene synthase and thereby control cholesterol levels. However, in terms of antimicrobial effects, schizostatin showed no antimicrobial activity at a concentration of 1 mg/mL against many microorganisms including Bacillus subtilis (B. subtilis), Candida albicans (C. albicans), Escherichia coli (E. coli), Mycobacterium smegmatis, Mycoplasma mycoides, Proteus vulgaris, Proteus mirabilis, and Staphylococcus aureus (S. aureus).
Tripathi and Tiwary (2013) investigated the production of bioactive compounds from solvent extracts (methanol, ethanol, acetone, ethyl acetate, and hot water) of S. commune isolated from the Achanakmar-Amarkantak Biosphere Reserve of Central India. In that work, phenolic compounds with antioxidant activity (i.e., phenyl benzoate (C13H10O2) and 4-(phenylmethoxy)phenol (C13H12O2)) and the antibacterial compound pyrrolo(1,2-a)piperazine-3,6-dione (C7H10O2N2) were identified from the ethanolic and methanolic extracts, respectively. Moreover, they also detected gallic acid and L-ascorbic acid as antioxidant metabolites in both the ethanol and methanol extracts of S. commune. Therefore, it was suggested that S. commune could be used for the production of valuable therapeutic agents having antimicrobial and antioxidant activities. Among several fungal isolates tested for insecticidal potential against the tobacco cutworm Spodoptera litura (S. litura), ethyl acetate extracts of S. commune, isolated from Aloe vera, showed the strongest insecticidal activity (Kaur et al., 2018). The HPLC analysis of the fungal extract indicated that it contained various phenolic compounds such as gallic acid, catechin, chlorogenic acid, epicatechin, caffeic acid, coumaric acid, rutin, quercetin, and kaempferol. Larvae of S. litura treated with that S. commune extract exhibited a notable decrease in the occurrence of living haemocytes, with 40.00%-73.33% mortality, as well as an elevated incidence of apoptotic and necrotic cells through the cytotoxic effect of the fungal extract. Moreover, the effect of the fungal extract in the tetrazolium dye (MTT) mammalian viability assay on Chinese Hamster Ovary (CHO) cell lines was evaluated, with a cell viability of 81.82%, which was higher than the control (below 60%) consisting of doxorubicin-treated cells. Additionally, the evaluation of the genotoxic effect of the S. commune ethyl acetate extract at various exposure times using the comet assay showed that the increasing oxidative stress triggered more DNA damage in haemocytes of S. litura. As a result of all these findings, the S. commune extract was proposed as a potential biocontrol agent.
Water, acetone, and ethanol extracts of S. commune exhibited a free radical scavenging activity of 19.65% at a concentration of 100 μg/mL (Kumar et al., 2018).Moreover, at the same concentration, the superoxide anion scavenging activity and the hydroxyl radical scavenging activity for S. commune extract was determined as 4.84% and 7.50%, respectively, whereas the total antioxidant capacity (TAC) of S. commune extract was found to be 12.15% using ascorbic acid as a reference standard.According to the quantitative analysis of S. commune mycochemicals, phenolics, flavonoids, alkaloids, tannins, and saponins were found to be 10.80 ± 0.76, 4.67 ± 0.23, 4.26 ± 0.54, 1.24 ± 0.16, and 23.83 ± 0.84 (mg/g), respectively.Compared to the edible fungi Tricholoma nudum and Psalliota campestris, a higher number of bioactive compounds (except for the content in phenolics) was obtained from S. commune.Overall, those results indicated that S. commune can serve as a valuable source of antioxidants for human health, and it was proposed that their extracts had the potential to be used for the development of drugs to lower the oxidative stress in the body.
Culture filtrate and bioactive metabolites from chloroform extracts of S. commune were investigated for their antimicrobial properties against different types of plant pathogens (Dutta et al., 2019).In that work, pepper fruits were treated with schizostatin and then infected with Colletotrichum gloeosporioides (C.gloeosporioides) or Botrytis cinerea (B.cinerea).For C. gloeosporioides infection (for anthracnose), significant effect for the control of the disease was observed from the treatment with 10 μg/mL and reached maximum with 97.8% and 100.0%by 100 μg/ mL and 150 μg/mL treatment, respectively.Moreover, the control of the disease for B. cinerea (cause of gray mold disease) was 83.2% and 94.6% by treatment with 100 mg/mL and 150 mg/mL, respectively.The incidence of anthracnose in field conditions showed a decrease when treated with a diluted solution (12.5%) of a culture filtrate derived from S. commune.In that paper, the compound responsible for its antifungal and disease-control activity was identified as schizostatin.On the other hand, the growth of fungal pepper plant pathogens was inhibited by S. commune culture filtrate, while bacterial pathogens Ralstonia solanacearum and Pectobacterium carotovorum were unaffected by schizostatin.Thus, it was proposed that schizostatin had the potential to be utilized as a biochemical pesticide for controlling fungal infections, including anthracnose and gray mold, in various types of vegetables.Alam et al. (2009) discovered that feeding hypercholesterolemic rats with a 5% of fruiting body powders of Pleurotus ostreatus (P.ostreatus), Pleurotus sajor-caju (P.sajor-caju), and Pleurotus florida (P.florida) resulted in substantial reductions in total cholesterol levels (by 37.0%, 21.0%, and 16.0%, respectively) and triglyceride levels (by 45.0%, 24.0%, and 14.0%, respectively) in plasma which were attributed to the content of lovastatin in the fungal powders.Also, they compared the effect of P. sajor-caju on plasma and fecal lipid profiles as well as liver and kidney function in rats with high and normal cholesterol levels.The low-density lipoproteins (LDL)/ high-density lipoproteins (HDL) ratio also exhibited significant decreases of 64.0%, 45.0%, and 41.0% for P. sajor-caju, P. ostreatus, and P. florida-fed rats, respectively.These findings based on mice studies suggested that consumption of the aforementioned Pleurotus species could bring notable health advantages by modulating physiological functions, particularly in addressing various atherogenic lipid profiles in cases of hypercholesterolemia, potentially serving as a nutritious source and a preventative measure against related complications and known risk factors for atherosclerosis.Fagade and Oyelade (2009) identified and assessed 12 different species including Auricularia auricula, Coriolus versicolor (C.versicolor), Daedalea elegans, Fomes lignosus, G. lucidum, Lentinus subnudus, Leptoporus sp., S. commune, Panus fulvus (P.fulvus), P. florida, Trametes saepiara, and Trametes betulina for antibacterial activity.Among them, the ethanol extracts of P. florida and P. fulvus exhibited the strongest antibacterial activity against a range of bacteria including S. aureus, Streptococcus sp., Streptococcus pyogenes (S. pyogenes), E. coli, Klebsiella pneumoniae (K.pneumoniae), Flavobacterium sp., and the yeast C. albicans at a concentration of 1 g/mL for each microorganism.Additionally, P. 
displayed the lowest minimum inhibitory concentration (MIC) value (0.01 g/mL) when tested against the yeast C. albicans, whereas the highest MIC for P. florida (1 g/mL) was observed against Flavobacterium sp. However, ethanolic extracts of S. commune and C. versicolor showed no inhibition against any of the tested bacteria.
The predominant bioactive component, total phenols, of the methanolic extract of Pleurotus pulmonarius (P.pulmonarius) was found to be 5.79 ± 0.03 mg/mL expressed as milligrams of gallic acid equivalent (GAE) per Gram of fruiting body (Ramesh and Pattar, 2010).The extract also contained flavonoids at a concentration of 1.76 ± 0.06 mg/mL and a minimal amount of ascorbic acid at 0.13 ± 0.00 mg/mL.The radical scavenging activity (RAS) on 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical was measured at 1.62 ± 0.2 mg/mL for P. pulmonarius.Antimicrobial activity against a range of standard pathogenic Gram-positive and Gram-negative bacteria, along with yeast, was assessed demonstrating a notable antibacterial effect.MIC values indicated antimicrobial activity even at low concentrations (from 1 to 5 mg/mL).The methanolic extract of P. pulmonarius showed promising biopharmaceutical potential with its antioxidant and antimicrobial properties.However, additional research is essential to assess its effectiveness as a therapeutic compound.It is also crucial to identify the bioactive compounds and understand their mechanisms of action before considering practical applications.Ghaly et al. (2011) evaluated the antihyperglycemic properties of an ethanol extract from P. ostreatus and its influence on potential DNA damage, chromosome aberrations, and sperm abnormalities in diabetic rats induced by streptozotocin.The research involved five groups of adult male albino rats, with the control group consisting of normal animals and the remaining groups comprising hyperglycemic animals.These hyperglycemic groups were orally administered with the antidiabetic drug Amaryl and different levels of mushroom extract doses: low (100 mg/kg.bodyweight/dL), or high (200 mg/kg.bodyweight/dL) for 30 days.According to their findings, the application of a higher dosage of P. ostreatus extract exhibited superior therapeutic effects compared to the treatment with a lower dosage.Notably, P. ostreatus extract, especially at high concentration, effectively lowered blood glucose levels in hyperglycemic rats, though to a lesser extent than Amaryl treatment.Significantly, mushroom treatments exhibited greater efficacy in reducing genetic alterations and sperm abnormalities in diabetic conditions compared to Amaryl treatment.In conclusion, this study highlighted the potential of P. ostreatus extract, particularly at higher concentration, to alleviate elevated blood glucose levels and mitigate genetic and reproductive abnormalities associated with diabetes, offering a promising alternative to conventional treatments.
Two polysaccharide fractions, namely PSPO-1a (composed of mannose, glucose, galactose, xylose, and rhamnose) and PSPO-4a (composed of rhamnose, mannose, and galactose), both containing protein and uronic acid, were isolated from ethanol extracts of P. ostreatus (Zhang et al., 2012). These fractions demonstrated strong DPPH and superoxide anion radical scavenging activities, which increased with concentrations up to 2.1 mg/mL and 3.0 mg/mL, respectively. However, their efficacy in scavenging hydroxyl radicals was lower than their DPPH and superoxide anion radical scavenging activities. Consequently, PSPO-1a proved to be a more potent free-radical scavenger than PSPO-4a, which could result from their different polysaccharide compositions and their varied molar ratios.
A study conducted by Mitra et al. (2013) focused on exploring the antioxidant potential and nitric oxide synthase (NOS) activation properties of water-soluble crude polysaccharides derived from P. ostreatus.Their results indicated that the polysaccharides, primarily composed of carbohydrates, notably β-glucan, displayed good antioxidant activity, demonstrating superiority in free radical scavenging and NOS activation compared to other components including low levels of protein and phenolic compounds.The yield from hot water extraction of dried fruit bodies revealed a total polysaccharide content of 62.67 ± 7.67 mg/100 mg, with the total glucan component as 43.9 ± 1.2 mg/100 mg.Furthermore, the polysaccharides exhibited significant NOS activation properties.Moreover, considering the antioxidant activity, the EC 50 (the concentration required to obtain a 50% antioxidant effect) values for scavenging hydroxyl radicals, superoxide radicals, and chelating ferrous ions were 665, 390, and 370 μg/mL, respectively.That study highlighted the potential of P. ostreatus as a valuable source of bioactive compounds, suggesting that its crude polysaccharides, rich in β-glucan, could serve as an effective antioxidant food additive or find applications in the pharmaceutical industry.Assis et al. (2013) investigated the antitumor activity of the exopolysaccharides (EPSs) and the mycelial biomass (intracellular polysaccharides, IPSs) of P. sajor-caju against Sarcoma 180 (S180) cells.According to that test, the antitumor efficacy of the produced EPS was 86%, and two IPSs from the mycelial biomass showed 80% and 82%.Since the chemical characterization of these bioactive compounds was not determined within that study, the main concern surrounding these exo-and intracellular polysaccharides lies in the lack of identification of bioactive secondary metabolites presented in EPS and IPS produced by P. sajor-caju.Therefore, after further investigation, that findings could aid in the exploration of new bioactive substances, introducing innovative perspectives to the medical and pharmaceutical fields.Corrêa et al. (2015) focused on the chemical characterization and bioactivity of ethanol extracts of fruiting bodies and submerged mycelia from Pleurotus ostreatoroseus (P.ostreatoroseus).The fruiting body and mycelia extracts contained a minimum of five free sugars, four organic acids, four phenolic compounds, and two tocopherols.The culture filtrates from submerged cultivation exhibited superior reducing activity only for fruiting body (1.79 ± 0.01 mg/mL).Furthermore, DPPH scavenging activity (4.78 ± 0.02 mg/mL for fruiting body and 15.62 ± 0.13 mg/mL for mycelia), β-carotene bleaching inhibition (0.40 ± 0.01 mg/mL for fruiting body and 7.62 ± 0.25 for mycelia), and lipid peroxidation inhibition (0.29 ± 0.00 mg/mL for fruiting body and 2.34 ± 0.08 mg/ mL for mycelia) in porcine (Sus scrofa) brain homogenates were detected in terms of bioactivity.Additionally, P. ostreatoroseus demonstrated higher anti-inflammatory and antimicrobial activities, while showing no hepatotoxicity in porcine liver primary cells.These functional responses could be associated with varying levels of bioactive metabolites in the fruiting body extract and the submerged culture filtrate, including phenolic acids, organic acids, and tocopherols.These bioactive compounds can be utilized to create dietary supplements for nutraceutical purposes.
P. sajor caju, a medicinal fungus, was reported as a notable source of secondary metabolites such as phenols (3.35 ± 0.20 mg/g), flavonoids (5.36 ± 0.31 mg/g), tannins (6.84 ± 0.12 mg/g), and alkaloids (2.81 ± 0.61 mg/g), in addition to carbohydrates, protein, amino acids, and vitamins (A, C, and E) (Devi and Krishnakumari, 2015).These secondary metabolites could have significant potential for exhibiting antimicrobial, anticancer, antipyretic, astringent, and antiviral properties.The findings in that work strongly indicated the commercial and pharmaceutical significance of the secondary bioactive compounds found in P. sajor caju.Chowdhury et al. (2015) studied the antimicrobial and antioxidant properties of methanolic extracts from three edible mushrooms (P.ostreatus, Lentinula edodes (formerly Lentinus edodes) (L.edodes), and Hypsizigus tessulatus (H.tessulatus)) isolated from Bangladesh.Antimicrobial activity against 8 microbial strains was evaluated, revealing substantial effectiveness with diameters of inhibition zone (DIZ) ranging from 7 ± 0.2 to 20 ± 0.1 mm.MIC values exhibited notable activity at concentrations ranging from 1 mg/mL to 9 mg/mL, with L. edodes exhibiting the most potent antimicrobial activity.Pseudomonas aeruginosa (P.aeruginosa) showed the maximum resistance, while Saccharomyces cerevisiae (S. cerevisiae) was more sensitive for the three fungal tested extracts.To reveal the antioxidant efficiency, free radical scavenging activity (EC 50, μg/ ml) on DPPH was determined as 100 ± 1.20, 105.0 ± 1.23, and 110.0 ± 1.24 μg/mL respectively for P. ostreatus, H. tessulatus, and L. edodes (with ascorbic acid as a control, 5.25 ± 0.21 μg/mL).The total phenols, a major bioactive component, ranged from 3.20 ± 0.05 to 10.66 ± 0.52 mg/mL expressed as mg of GAE per Gram of fruiting bodies.Furthermore, the flavonoid concentration detected spectrophotometrically in all isolates ranged from 2.50 ± 0.008 mg/mL to 4.76 ± 0.11 mg/mL.The potential of these extracts to serve as effective therapeutic agents requires further investigation and a detailed study of their mechanisms of action prior to application.Koutrotsios et al. (2018) cultivated P. ostreatus, Pleurotus eryngii (P.eryngii), and Pleurotus nebrodensis (P.nebrodensis) on unconventional substrates such as grape marc (GMC) and olive mill byproducts (OMB), with wheat straw (WHS) serving as the control.GMC-based media demonstrated comparable or superior mushroom productivity compared to WHS for P. eryngii and P. nebrodensis, while P. eryngii exhibited enhanced cultivation performance in OMB-based media.Both GMC and OMB substrates led to a substantial increase in the content of fruiting bodies in phenolic acids, resveratrol, triterpenic compounds, and ergosterol.Specifically, P. eryngii methanol extract displayed significantly high total phenolics, showing a substantial 2-to 8fold increase in antioxidant activity based on DPPH and ferric reducing/antioxidant power assays.Moreover, substrates containing GMC or OMB resulted in up to a 27.0% increase in mushroom βglucans.Pleurotus species responded differentially and mostly in a substrate-specific manner, selectively absorbing organic compounds.The phenolics and squalene content in substrates showed a strong correlation with the antioxidant activity of fungi and ergosterol, respectively.Similarly, a comparable correlation was noted between the triterpene content in substrates and fungi.
In a study by Beltrán Delgado et al. (2021), the aqueous extract of mature fruiting bodies of P. ostreatus exhibited higher levels of proteins, reducing sugars, and flavonoids than the extract of the early-stage fungus. However, carbohydrates and total phenols were higher in the extract from the early stage of fungal development than in the mature fruiting body extract. According to that work, the antioxidant characteristics of the P. ostreatus aqueous extracts (earliest stage of fungal development and mature fruiting bodies) were influenced by changes in the levels of bioactive compounds, reflecting the physiological attributes of the different growth phases. These findings could be valuable for developing protocols to obtain bioproducts from P. ostreatus with potential applications as antioxidants in the food and medical-pharmaceutical industries and for the design and formulation of new related therapeutic products. Ogidi et al. (2020) focused on the production of EPSs by submerged cultures of P. pulmonarius grown on media containing diverse agricultural wastes. The highest EPS yield (5.60 g/L) was achieved by P. pulmonarius submerged cultures supplemented with groundnut shells (20.0 g/L) (EPS-B). The observed zones of inhibition by EPS-A (without agro-waste), EPS-B (groundnut shell), EPS-C (coconut husk), and EPS-D (pineapple peel) against Shigella dysenteriae and E. coli did not show significant differences. All the obtained EPS variants inhibited the growth not only of Gram-positive bacteria, including B. subtilis and S. aureus, but also of C. albicans and Gram-negative bacteria. All the obtained EPSs exhibited DIZs (5.00-14.00 mm) against the different tested microorganisms. The MICs also ranged from 0.25 to 1.00 mg/mL against the tested microorganisms. EPS-A to D demonstrated scavenging activity within the ranges of 67.80%-81.80%, 60.60%-81.20%, 70.40%-84.70%, and 78.40%-88.50% against DPPH, OH, Fe2+, and NO radicals, respectively. The potential applications of the EPS obtained from submerged cultures of P. pulmonarius supplemented with different agro-wastes make it a promising natural product with the possibility of being utilized as a preservative in the food industry. Additionally, the method of generating natural bioactive compounds through fungal submerged culture using agricultural waste offers a potential solution to the unregulated disposal of agricultural waste into the environment.
Aqueous extracts from P. ostreatus and L. edodes (shiitake mushroom) exhibited the expression of 753 and 432 proteins, respectively (Elhusseiny et al., 2021). Common bioactive peptides such as Rab GDP dissociation inhibitor, superoxide dismutase, thioredoxin reductase, serine proteinase, and lectin were identified in both white-rot fungal extracts. Additionally, P. ostreatus extract contained phenolics and flavonoids, such as catechin, kaempferol, and apigenin, whereas catechin and quercetin were detected in the extract of L. edodes. Vitamins, including ascorbic acid, nicotinic acid, nicotinamide, and pyridoxine, together with various amino acids, were also detected in both extracts. The antioxidant impact of both fungi can be ascribed to the existence of numerous bioactive elements, such as flavonoids, phenolics, bioactive peptides, and vitamin C. Notably, both extracts demonstrated significant antiviral activities: P. ostreatus extract exhibited a selectivity index (SI) of 4.5 and 2.0 against adenovirus (Ad7) and herpes simplex virus-II, respectively, while L. edodes extract showed values of 2.7 and 2.5 for the respective viruses. The aqueous extracts from L. edodes and P. ostreatus demonstrated an approximately 20.0% reduction in viability among the tested cancer cell lines LS-513 (cecum carcinoma), HepG2 (hepatocellular carcinoma), DU-145 (prostate cancer), and PC-3 (prostate cancer). Cytotoxicity analysis was conducted on aqueous fungal extracts against leukemia (CCR-CEM, NB-4, THP-1) and lymphoma (U937) cells. The L. edodes extract exhibited a viability decrease of 66.02% in THP1 cells, while the P. ostreatus extract reduced the viability of CCRF-CEM cells to 70.64%. Additionally, minimal cytotoxic effects of the extracts on normal human peripheral blood mononuclear cells (PBMC) were observed, with untreated cells and doxorubicin-treated cells as negative and positive controls, respectively. Considering the effects of a wide range of bioactive compounds in the aqueous extracts of the white-rot fungi P. ostreatus and L. edodes, the study suggested the potential pharmacological application of these fungal strains. It underscored their minimal cytotoxicity on normal PBMCs, while also emphasizing their beneficial antiviral, antitumor, and antioxidant properties. Oba et al. (2009) assessed the impact of immunochemotherapy using lentinan derived from L. edodes in comparison to chemotherapy alone in individuals with advanced gastric cancer through a meta-analysis of 650 individual patient data. Based on their findings, lentinan demonstrated a potentially higher efficacy in patients with lymph node metastasis in contrast to those without such metastasis. Moreover, the proportion of hepatic metastasis in the group receiving chemotherapy plus lentinan was smaller than that in the group receiving chemotherapy alone, with percentages of 34.5% and 43.1%, respectively. It was indicated that lentinan extended the overall survival period of the patients. In summary, the inclusion of lentinan alongside standard chemotherapy showed a notable and significant advantage over chemotherapy alone in terms of survival for individuals having advanced gastric cancer. Resurreccion et al. (2016) reported the isolation of ergosterol and trilinolein from dichloromethane extracts of L. edodes,
obtained from the Mushroom Burger in Tagaytay City, Philippines. Their structures were identified by comparing their NMR data with those of the existing literature. Ergosterol from the water extract of Polyporus showed significant protective properties against bladder tumor promotion in Wistar rats (Yazawa et al., 2000). Previous research also indicated that ergosterol in P. ostreatus extracts had the potential to inhibit lipid peroxidation (Dissanayake et al., 2009). On the other hand, trilinolein exhibited protective effects against cardiovascular disorders, including its ability to inhibit ischemia-induced ventricular arrhythmias and display antioxidant properties (Chan et al., 2002;Chan et al., 2005). In addition, trilinolein from the water extract of Polyporus inhibited the growth of human non-small cell lung carcinoma A549 cells and induced programmed cell death, with the effects being contingent on both the dosage and duration of exposure (Chou et al., 2011).
Sevindik (2018a) evaluated the antioxidant capacity of Lentinus tigrinus (L. tigrinus) by determining the total antioxidant status (TAS), the total oxidant status (TOS), and the oxidative stress index (OSI) as 1.748 ± 0.071 mmol/L, 19.294 ± 0.237 μmol/L, and 1.106 ± 0.031, respectively. Additionally, the antimicrobial properties of ethanol, methanol, and dichloromethane extracts of L. tigrinus were investigated against several bacterial and yeast strains, including S. aureus, Enterococcus faecalis (E. faecalis), E. coli, P. aeruginosa, C. albicans, Candida krusei (C. krusei), and Candida glabrata (C. glabrata), with MIC values ranging from 100 to 800 µg/mL and the highest activity observed against the Candida strains. In that paper, it was proposed that L. tigrinus could serve as a natural antioxidant and antimicrobial source. On the other hand, since L. tigrinus is an edible mushroom (Mohammadnejad et al., 2019), restricting over-consumption of this white-rot fungus could be necessary because of its high level of antioxidants. Moreover, the fungal extracts should be analyzed to determine the bioactive secondary metabolites responsible for their antimicrobial and antioxidant activities.
The antioxidant and antidiabetic properties of mycelium and fruiting body ethanol extracts of Lentinus swartzii (L.swartzii) were examined by Austria et al. (2021). The inhibition of α-amylase, the enzyme responsible for breaking down carbohydrates during digestion, has the potential to lower blood sugar levels (Tundis et al., 2010). The mycelial extract contained essential oils, triterpenes, sugars, tannins, flavonoids, fatty acids, and phenols, while the fruiting body extract presented the same components except for fatty acids and sugars. At a concentration of 1,000 μg/mL, the mycelial ethanolic extract showed scavenging effects against DPPH (35.29%) and nitric oxide (36.04%), contained 20.25 mg GAE/g sample, and demonstrated high inhibitory activity against α-amylase (81.98%). Similarly, the fruiting body ethanolic extract, at the same concentration, scavenged 43.69% of DPPH and 31.75% of nitric oxide, contained 16.92 mg GAE/g sample, and exhibited a somewhat lower, though still high, α-amylase inhibitory activity (71.08%). Consequently, both the mycelial and fruiting body ethanolic extracts of L. swartzii hold promise as valuable sources of bioactive compounds with antioxidant and antidiabetic activities. A notable observation in that paper was that mycelia grown in coconut water exhibited superior activities compared to the fruiting body cultivated on a sawdust and rice straw substrate. This signifies that the chemical properties and biological effects are influenced not only by factors such as species, strain type, and extraction solvent but also by the specific medium composition used for fungal cultivation. Further steps, including the isolation and characterization of the compounds responsible for these significant bioactivities, are crucial for a comprehensive understanding of the extracts' potential in different applications. Muslihin et al. (2022) reported that the wild mushroom Lentinus squarrosulus (L.squarrosulus) possesses notable characteristics such as rapid mycelial growth, potential as a food source, and various other benefits; notably, it serves as a source of bioactive compounds. The ethyl acetate extract from L. squarrosulus, analyzed at 516.8 nm using a UV-Vis spectrophotometer, revealed strong antioxidant activity with an EC 50 of 54.93 mg/L, highlighting its possible utility in various applications.
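The inhibition and scavenging percentages quoted throughout this section (α-amylase, DPPH, nitric oxide) are typically derived from paired control and sample readings. A minimal sketch of that arithmetic, using hypothetical absorbance values rather than any measured by the cited authors:

```python
def percent_inhibition(abs_control: float, abs_sample: float) -> float:
    """Percent inhibition (or radical scavenging) relative to a control:
    the drop in absorbance (or enzyme activity) caused by the extract,
    expressed as a percentage of the control reading."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical DPPH absorbances at 517 nm, for illustration only.
control = 0.820   # DPPH solution without extract
sample = 0.530    # DPPH solution with extract at a given concentration
print(round(percent_inhibition(control, sample), 2))  # ~35.37 % scavenging
```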
Lin et al. (2008) investigated the impact of Lycium barbarum (L.barbarum) fruit extract on the growth and extracellular polysaccharopeptide (ePSP) production of C. versicolor (now Trametes versicolor (T.versicolor)) in a 20-L fermenter under submerged fermentation conditions. The addition of L. barbarum extract (LBE) to the culture medium led to a notable increase in ePSP production, from 0.61 g/L to 1.66 g/L. Significantly, ePSP from C. versicolor cultured with supplemental L. barbarum extract demonstrated noteworthy immunomodulatory activity, influencing the production of nitric oxide and various cytokines by murine RAW264.7 cells. The approach in that work may open up new possibilities for the future development of dietary supplements centered around C. versicolor LH1 polysaccharopeptides.
From a submerged culture of Panus strigellus (P.strigellus), Llanos-López et al. (2023) isolated three metabolites: a new bioactive compound called panapophenanthrin and two known compounds, panepophenanthrin and dihydrohypnophilin, which belong to an uncommon group of oligocyclic terpenoid metabolites exclusively identified in the Panus genus. While panapophenanthrin and dihydrohypnophilin exhibited only moderate antimicrobial effects, with MIC values ranging from 33.3 to 66.6 µg/mL against various fungal strains as well as Gram-positive and Gram-negative bacteria, panepophenanthrin showed no activity against any of the tested microorganisms. Moreover, panapophenanthrin showed strong cytotoxic effects on mammalian cell lines, including mouse fibroblasts (L929) and human endocervical adenocarcinoma cells (KB3.1), with EC 50 values of 13.2 and 17.9 µM, respectively. Panus species are predominantly found in tropical and subtropical areas. Hence, this discovery emphasized the significance of examining tropical species to uncover new bioactive compounds, although additional research is necessary to thoroughly understand the bioactivity of these compounds and explore their potential uses in different applications.
Cyclocybe cylindracea (C.cylindracea) ethanol extract was investigated to determine its phenolic content, heavy metal content, and antioxidant activity in order to evaluate possible medical benefits (Sevindik et al., 2018). In that work, the TAS, TOS, and OSI values were determined as 4.325 mmol/L, 21.109 μmol/L, and 0.488, respectively. Phenolic compounds, including gallic acid, hesperidin, catechin, syringic acid, and hydroxybenzoic acid, were detected in the ethanolic extracts of C. cylindracea. These bioactive compounds present diverse health advantages, encompassing antioxidant, anti-inflammatory, and potential anti-cancer properties (Sevindik et al., 2018). Despite these possible benefits, since C. cylindracea is edible (Landingin et al., 2020), its Pb levels (16.54 ± 0.93 mg/kg) and TOS values should be considered. Masuda et al. (2008) explored the antimetastatic properties of Grifola frondosa (G.frondosa) (maitake mushroom) extracts using an experimental mouse model of lung metastasis. The observed inhibition of lung metastasis by G. frondosa extract was attributed to the activation of NK cells. Additionally, the G. frondosa extract inhibited ICAM-1 (Intercellular Adhesion Molecule 1) expression in vascular endothelial cells, suggesting that its mechanism of action involved blocking the adhesion of tumor cells to lung tissue, thereby inhibiting metastasis. Their findings suggested that G. frondosa extract was effective for cancer prevention and the inhibition of tumor metastasis when consumed regularly. Masuda et al. (2009) explored the ability of an extract from G. frondosa to enhance the immune system, exerting antitumor and antimetastatic activities together with cisplatin, a well-known anticancer drug. Based on their findings, the increased antitumor and antimetastatic effectiveness observed when cisplatin was combined with G. frondosa extract was attributed to a synergistic interaction. This synergy stemmed from the dual action of cisplatin's cytotoxic impact on tumor cells and the simultaneous activation of the immune response in antigen-presenting cells (APC) and natural killer (NK) cells by G. frondosa extract. Moreover, this combination not only exhibited antitumor and antimetastatic activity but also decreased cisplatin-induced myelotoxicity and nephrotoxicity. Consequently, the joint administration of G. frondosa extract with cisplatin holds promise as a beneficial approach to cancer treatment.
The immunological effects of a hot water and alcohol extract from the fruit body of G. frondosa at different oral dosage levels were first evaluated clinically in a group of 34 eligible study subjects in the work performed by Deng et al. (2009). Based on their results, the administration of G. frondosa extract was linked to notable alterations in specific immunologic parameters within the peripheral blood. According to their findings, this extract acts as an immunomodulator rather than simply an immune enhancer. Moreover, cancer patients should be aware that G. frondosa extracts may have complex effects on immune function, and while the clinical impact on cancer prevention or treatment remains uncertain, it is crucial to conduct experimental investigations to clarify their potential anticancer effects. Su et al. (2020) prepared a G. frondosa extract through a process involving hot water extraction from the fruiting body, followed by enzymatic digestion and dialysis, resulting in high and low molecular weight fractions. They examined the water-soluble polysaccharides of G. frondosa to understand their impact on inflammation and receptor interactions using parental RAW264.7 macrophages and Dectin-1-expressing RAW264.7 macrophages. The results of cell-based assays indicated that the high molecular weight fraction (1,260 kDa), the major bioactive fraction, demonstrated inhibitory effects on tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6) production, while also reducing NF-κB (an important transcription factor regulating inflammatory responses in eukaryotes) activation in lipopolysaccharide-induced macrophages. That research suggested that the non-digestible β-(1→6)-branched (1→4)-β-D-glucan found in the high molecular weight fraction might be responsible for its anti-inflammatory properties by interacting with TLR2 receptors rather than Dectin-1 or CR3 receptors. The discovered polysaccharide was identified as a non-digestible glucan with a β-linked core and side groups. Moreover, the receptor-binding properties and anti-inflammatory activity of G. frondosa polysaccharides may be influenced by their molecular weight and arrangement of linkages. Eckhardt et al. (2000) reported the impact of a bioactive fungal compound on pancreatic cancer in humans. This involved a phase I trial and pharmacokinetic examination of irofulven (a fungal cytotoxin; doses ranging from 1.0 to 17.69 mg/m2), a novel cytotoxin derived from the white-rot fungus Omphalotus olearius (O.olearius), conducted on 46 patients with advanced solid malignancies (dosing daily for 5 consecutive days every 4 weeks). In that trial, evidence of antitumor activity was observed in an individual with advanced pancreatic cancer, consistent with the remarkable preclinical antitumor effects demonstrated by irofulven.
Chandra et al. (2019) assessed the antioxidant activity of the white-rot fungi Phanerochaete chrysosporium (P.chrysosporium), Phlebia brevispora (P.brevispora), and Phlebia floridensis (P.floridensis) against various free radicals, including DPPH, nitric oxide, ferrous ion, and ferric ion, in addition to their total phenolic content. All the studied fungal strains produced phenolics ranging from 5.2 to 16.7 mg/mL and exhibited diverse free radical and metal ion scavenging activities. The growth medium significantly influenced these activities: all the studied fungi presented similar antioxidant activity (approximately 72.0% DPPH scavenging) in yeast extract glucose medium, whereas it was lower in Czapek Dox medium (ranging from 45.0% to 60.0%). The fungal extracts showed no mutagenic or cytotoxic effects, highlighting the fungi's potential as a new source for the rapid production of extracellular antioxidants. These white rot fungi displayed strong antioxidant potential and could serve as a valuable source of natural antioxidant compounds. Further studies are recommended to isolate and characterize the bioactive compounds for potential use in new therapeutic approaches.
H. erinaceus methanolic extracts inhibited the inflammatory activity induced by lipopolysaccharide/interferon-γ in murine RAW264.7 cells, with a maximum decrease in nitric oxide production of 39.6% (Lee et al., 2016). The bioactive metabolites in these methanol extracts were identified as three different hericenones, C, D, and F. Consequently, the authors suggested that the anti-inflammatory effect of the H. erinaceus extract was most likely attributable to hericenone F. Also, ethanolic extracts of H. erinaceus mycelia effectively protected PC12 cells against apoptosis induced by 20 mM glutamate (Chang et al., 2016). The following biochemical parameters were affected by the glutamate insult: glutathione (2.5 ± 0.7 nmol/mg protein), glutathione peroxidase (28.2 ± 3.2 mU/mg protein), glutathione reductase (2.3 ± 0.4 mU/mg protein), calcium influx (360 ± 23 nmol/L), reactive oxygen species (140.0% ± 7.0%), superoxide dismutase (23.2 ± 4.2 U/mg protein), H2O2 (20.4 ± 3.5 nmol/mg protein), and thiobarbituric acid reactive substances (malondialdehyde; 13.3 ± 2.5 mmol/mg protein). Overall, these findings underlined the potential neuroprotective effect of erinacine A from H. erinaceus ethanolic extracts.
Despite the elucidation of their chemical synthesis, the biosynthetic pathways and gene regulation of these compounds remain unknown. A comparative genome analysis of 42 basidiomycota fungal species, including H. erinaceus, revealed abundant gene clusters related to terpenoid and polyketide biosynthesis (Chen et al., 2017). The genome analysis of H. erinaceus will provide important insights into the biosynthetic pathways of bioactive secondary compounds, which is crucial for improving the production of these compounds. Zhang et al. (2017) explored the neuroprotective and neuritogenic properties of several secondary metabolites, including 4-chloro-3,5-dimethoxybenzoic methyl ester, 3-(hydroxymethyl)-2-furaldehyde, erinacine A, erinacerin G, herierin III, and herierin IV, from the methanol extract of H. erinaceus mycelium. Among them, 4-chloro-3,5-dimethoxybenzoic methyl ester and erinacine A not only enhanced nerve growth factor-induced neurite outgrowth but also protected neuronally differentiated PC12 pheochromocytoma cells against nerve growth factor deprivation. Erinacine A additionally stimulated neuritogenesis in primary rat cortical neurons. These findings suggest that H. erinaceus holds promise as a potential therapeutic agent for reducing the risk of various neurodegenerative diseases.
Erinacines A and S, isolated from H. erinaceus mycelia, displayed anti-neurodegenerative and neuroprotective effects in the cerebrum of transgenic mice (Tzeng et al., 2018). Thus, a 30-day application of erinacines A and S attenuated cerebral plaque loading by inhibiting plaque growth, diminishing glial cell activation, and promoting hippocampal neurogenesis in transgenic mice used as an Alzheimer's disease model. Additionally, it was shown that erinacine A rescued behavioral deficits in the transgenic mice. These findings suggest that erinacine A may have therapeutic potential for treating Alzheimer's disease. Ratto et al. (2019) reported that an ethanol extract of H. erinaceus, containing the bioactive metabolites erinacine A, hericenone C, and hericenone D, was able to partially revert the cognitive and locomotor frailty index during physiological aging in a mouse model. They observed an increase in proliferating cell nuclear antigen (PCNA) and doublecortin (DCX) levels in the hippocampus and cerebellum of mice orally supplemented with H. erinaceus extract for 2 months, indicating the occurrence of neurogenesis in elderly frail mice. It was thereby demonstrated that supplementation with H. erinaceus extract reversed the age-related decline in recognition memory. Roda et al. (2021) demonstrated that a two-month oral supplementation with an ethanol extract from H. erinaceus, which contained erinacine A, hericenone C, hericenone D, and ergothioneine, could reverse age-induced cerebellar alterations in C57BL-6J wild-type male mice. These alterations included volume reduction, a decrease in molecular layer thickness, and a dwindling number of neurons. Additionally, the supplementation led to a decrease in inflammation, oxidative stress, and reactive gliosis. In another study, the same group investigated the preventive effects of an H. erinaceus ethanol extract containing a high amount of ergothioneine on cognitive and locomotor decline during physiological aging in C57BL-6J mice. The ergothioneine-rich extract exhibited neuroprotective and preventive actions, mitigating age-dependent deficiencies (Roda et al., 2022). Moreover, the same extract was shown to reduce oxidative stress and inflammation in the hippocampus, prevent recognition memory decline, and increase the expression of specific receptors crucially involved in glutamatergic neurotransmission in the same mice (Roda et al., 2023).
A tremulane sesquiterpene named irpexlacte A (yellowish needle crystals), along with three novel furan derivatives, identified as irpexlacte B (yellowish oil), irpexlacte C (yellowish powder), and irpexlacte D (brown flaky solid), were obtained by Duan et al. (2019) from the fungus Irpex lacteus (I.lacteus) isolated from the waterlogging-tolerant plant Distylium chinense. Furthermore, they also isolated two known metabolites, irlactin E and 3β-hydroxycinnamolide. Irpexlacte A and D demonstrated robust antioxidant activity, with EC 50 values of 2.50 and 5.75 μM, respectively. Moreover, in contrast to gentamicin (0.18 μM) as the positive control, the four new compounds, irpexlacte A, B, C, and D, demonstrated moderate activity against P. aeruginosa, displaying MIC values of 24.1, 32.3, 35.5, and 23.8 μM, respectively. On the other hand, the isolated compounds showed no activity against the tested cancer cell lines. Nevertheless, irpexlacte A-D displayed significant antioxidant activity, underscoring the need for further investigations to evaluate their importance and clarify the underlying mechanisms.
Porodaedalea pini (P.pini) is an esteemed traditional mushroom known for its therapeutic properties against various diseases. In this context, Devi et al. (2022) determined the antioxidant potential of hexane, chloroform, ethyl acetate, and methanol extracts of P. pini using the DPPH assay (EC 50 of 253.98 μg/mL, maximum with hexane extraction), total antioxidant capacity (231.04 ± 1.75 μg ascorbic acid equivalents/g of dried extract, maximum with methanol extraction), total phenolic content (277.67 ± 9.46 μg GAE/g of sample, maximum with methanol extraction), and total flavonoid content (4.95 ± 0.013 μg rutin equivalents/g of dried extract, maximum with methanol extraction). The presence of 12 polyphenolic metabolites, including gallic acid, catechin, chlorogenic acid, epicatechin, caffeic acid, umbelliferone, coumaric acid, tert-butyl-hydroquinone, and quercetin, was revealed, whereas rutin, ellagic acid, and kaempferol were not detected. The identified polyphenols of P. pini could potentially contribute to its antioxidant activity. Moreover, further exploration of P. pini extracts is necessary to unveil its nutraceutical and pharmacological potential.
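Total phenolic contents such as those quoted above (in gallic acid equivalents, GAE) are conventionally read off a gallic acid calibration curve from the Folin-Ciocalteu assay and normalized to the extract mass. The sketch below fits a hypothetical standard curve by least squares and converts a sample absorbance into mg GAE per g of extract; all numbers are illustrative and not taken from Devi et al. (2022):

```python
import numpy as np

# Hypothetical gallic acid standards (mg/L) and their absorbances at 765 nm.
std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
std_abs = np.array([0.02, 0.15, 0.28, 0.55, 1.08])

# Least-squares calibration line: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def total_phenolics_mg_gae_per_g(sample_abs: float,
                                 assay_volume_ml: float,
                                 extract_mass_g: float) -> float:
    """Convert a sample absorbance into mg gallic acid equivalents (GAE)
    per gram of dried extract, via the standard curve and the assay volume."""
    conc_mg_per_l = (sample_abs - intercept) / slope
    mg_gae = conc_mg_per_l * assay_volume_ml / 1000.0
    return mg_gae / extract_mass_g

# Illustrative reading: absorbance 0.42, 10 mL assay volume, 0.05 g of extract.
print(round(total_phenolics_mg_gae_per_g(0.42, 10.0, 0.05), 1))
```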
Ildız et al. (2022) employed molecular techniques and analyzed the total phenolic compound content, antioxidant activity using the DPPH scavenging method, and antimicrobial activity of Bjerkandera adusta (B.adusta). Ethanol and methanol were used for extraction, and the methanolic extract of B. adusta exhibited a total phenolic content of 772.28 μg GAE/mL. The ethanol extract demonstrated a substantial 79.66% scavenging activity against a 0.1 mM DPPH solution. Regarding antimicrobial activity, the ethanolic extract was the most effective, showing the largest DIZ of 28 ± 1 mm against P. aeruginosa. In contrast, the methanol extract displayed the lowest antimicrobial efficacy, with a DIZ of 8.7 ± 1.2 mm against Salmonella typhimurium (S. typhimurium). These findings suggested that both the ethanolic and methanolic extracts of B. adusta possess antioxidant and antibacterial properties. More comprehensive investigations into wild-collected fungal strains are needed for an extensive exploration of the bioactive constituents present in fungi, drawing attention to their potential applications in the development of functional foods and other uses.
The antioxidant and oxidant potentials of the ethanolic extracts of Hohenbuehelia myxotricha (H.myxotricha) were determined for the first time by Krupodorova et al. (2022). The highest recorded TAS, TOS, and OSI values for H. myxotricha were 5.416 ± 0.150 mmol/L, 1.320 ± 0.156 μmol/L, and 0.024 ± 0.003, respectively. The ethanolic extracts of H. myxotricha exhibited antimicrobial activities at concentrations ranging from 25 to 200 μg/mL against various bacteria and yeasts, demonstrating better antifungal than antibacterial activity. The antioxidant, oxidant, and antimicrobial potentials of H. myxotricha mycelia varied with the culture medium employed: glucose peptone yeast (GPY) medium was found more suitable for the synthesis of antibacterial bioactive metabolites against E. coli, while Sabouraud dextrose broth (SDB) medium was more appropriate for the production of antioxidant and antifungal bioactive metabolites. Thus, these findings underscored the importance of identifying an optimal cultivation medium to maximize antimicrobial and antioxidant activities. Overall, the ethanolic extract of H. myxotricha mycelia presented significant pharmacological potential, serving as a natural source of antioxidants and antimicrobials with potential health benefits. As in various other studies in the literature, further research is required to isolate and identify the bioactive secondary metabolites responsible for the observed antioxidant and antimicrobial effects, offering potential leads for pharmacological drug design. Jaszek et al. (2013) isolated bioactive fractions (crude endopolysaccharides, c-EPL, and low molecular weight secondary metabolites, ex-LMS) from Cerrena unicolor (C.unicolor) submerged cultures that exhibited antioxidant and antibacterial properties. Ex-LMS demonstrated the highest antioxidant capability (39.0%-90.0% for chemiluminometric measurement, 20.0%-90.0% for ABTS, and 10.0%-59.0% for DPPH reduction at 6.25-800 μg/mL). Moreover, c-EPL scavenging abilities ranged from 36.0% to 70.0% for chemiluminometric measurement, 2.0%-60.0% for ABTS, and 28.0%-32.0% for DPPH reduction at 6.25-800 μg/mL. Preliminary data for the toxic effect against Vibrio fischeri (V.fischeri) were 85.37% for c-EPL and 99.8% for ex-LMS. In addition, c-EPL showed antibacterial activity against S. aureus with an 18.96 ± 0.4 mm DIZ, while ex-LMS displayed activity against E. coli and S. aureus with DIZs of 11.83 ± 0.2 and 25.86 ± 0.2 mm, respectively. These fractions have the potential to serve as a novel and easily producible source of effective antioxidants under laboratory-scale conditions. Additionally, further investigation of the aforementioned bioactive secondary metabolites is crucial for applications, as they may play a critical role in new therapies and serve as a natural source of antioxidative molecules.
Mizerska-Dudka et al. (2015) explored the antiviral, immunostimulatory, cytotoxic, and antitumor effects of bioactive compounds from C. unicolor, specifically endopolysaccharides (c-EPL) and an extracellular low molecular weight fraction (ex-LMS) obtained from the culture filtrate below 10 kDa. The study employed THP-1-derived macrophages to assess immunomodulatory activity, revealing that the fungal c-EPL stimulated the production and secretion of TNF-α and IL-6. Antitumor activity was evaluated using the cervical carcinoma cell lines SiHa and CaSki, with cytotoxic EC 50 values for ex-LMS of 1.2 µg/mL against SiHa and 2.3 µg/mL against CaSki. The research highlighted the promising immunomodulatory effect of the c-EPL samples and the need for further investigations into these multifaceted bioactive compounds.
The antioxidant and antimicrobial properties of ethanol, methanol, and dichloromethane extracts of C. unicolor were studied by Sevindik (2018b). Regarding antioxidant effects, the TAS, TOS, and OSI values were measured as 6.706 ± 0.059 mmol/L, 19.308 ± 0.114 μmol/L, and 0.288 ± 0.003, respectively. Additionally, all the extracts presented antimicrobial efficacy within the concentration range of 25-400 μg/mL, with MIC values from 50 to 400 μg/mL against S. aureus, E. faecalis, E. coli, P. aeruginosa, Acinetobacter baumannii, C. albicans, C. glabrata, and C. krusei, the anticandidal activity being the strongest. The primary limitation of these extracts is that the bioactive secondary metabolites responsible for these activities remain unidentified. Matuszewska et al. (2019) explored the anticancer and antioxidant properties of low molecular weight secondary metabolites produced by C. unicolor. These secondary metabolites consisted of protein, sugars, and phenolic compounds. The findings revealed that the low molecular weight compounds displayed inhibitory effects on human colon cancer HT-29 cells within the concentration range of 25-200 μg/mL and demonstrated dose-dependent inhibition of cell proliferation, ranging from 47.5% to 9.2% at the highest concentrations. Microscopic observations indicated that all compounds induced programmed cell death, specifically apoptosis (up to 44.4% for one compound in HT-29 cells and less than 20.0% for most compounds in CCD 841 CoTr cells), with minimal or significantly low levels of necrosis observed in both cell lines. Romorosa et al. (2017) found that distilled water, aqueous, and acetonitrile extracts of Auricularia fuscosuccinea (A.fuscosuccinea) fruiting bodies contained alkaloids, tannins, and glycosides, while saponins and flavonoids were absent. The antibacterial properties of the extracts were evaluated against S. aureus and E. coli. The results indicated low antibacterial activity against S. aureus for all the fungal extracts, with DIZs of 5.0 mm, 13.5 mm, and 5.0 mm, respectively, compared to cotrimoxazole (control) with a 33.54 mm DIZ. For E. coli, the corresponding DIZs were 5.0 mm, 22.98 mm, and 22.41 mm, which were lower than the control with a 32.00 mm zone. Kalaw and Albinto (2014) evaluated the antibacterial properties, phytochemical composition, and antioxidant activity of ethanol and acetone extracts of Coprinus comatus (C.comatus) and Pleurotus cystidiosus (P.cystidiosus). Both ethanol and acetone extracts exhibited antibacterial activity against S. aureus. The ethanol extract of C. comatus displayed a slightly larger DIZ (14.09 ± 4.65 mm) compared to the acetone extract (13.16 ± 3.39 mm). Conversely, in P. cystidiosus, the acetone extract exhibited a larger DIZ (15.25 ± 2.76 mm) than the ethanol extract (13.43 ± 0.15 mm). Neither fungal extract inhibited E. coli. Moreover, phytochemical screening of the extracts revealed the presence of alkaloids, flavonoids, saponins, and terpenoids in both fungal species. Steroids and cardiac glycosides were absent in P. cystidiosus, while tannins were not detected in either of the studied species. P. cystidiosus registered higher DPPH radical scavenging activity (72.97% ± 0.68% to 66.59% ± 0.83%), indicating its potential antioxidant capacity, but a lower total phenolic content (3.41 ± 0.12 mg GAE/g) than C. comatus (17.82 ± 0.51 mg GAE/g).
In research conducted by Stilinović et al. (2020), it was reported that the C. comatus methanol extract contained significant amounts of proteins (23.07 ± 0.28 g/100 g dry matter), carbohydrates (40.42 ± 0.48 g/100 g dry matter), dietary fibers (21.13 ± 0.34 g/100 g dry matter), and fats (2.04 ± 0.03 g/100 g dry matter). Furthermore, the methanol extract of C. comatus was a valuable source of flavonoids (0.39 ± 0.08 mg quercetin equivalents (QE)/g dry weight extract) and total phenolics (107.02 ± 2.42 mg GAE/g dry weight extract), including 4-hydroxybenzoic acid, protocatechuic acid, cinnamic acid, p-coumaric acid, caffeic acid, and quinic acid at concentrations of 11.41 ± 1.17, 0.13 ± 0.03, 4.34 ± 0.27, 10.48 ± 0.94, 0.15 ± 0.02, and 9.10 ± 1.39 μg/g, respectively, based on spectrometric analysis. In experiments involving rats with liver damage induced by carbon tetrachloride, oral administration of C. comatus for 42 days exhibited hepatoprotective effects against oxidative stress-induced liver damage by triggering repair mechanisms. Based on these results, C. comatus has the potential to be utilized as a readily available food source with high levels of natural antioxidants. It could also be used as an additive or component for producing nutraceuticals and functional foods. Considering the reported work, an important issue arises regarding the complex composition of C. comatus extracts: additional investigation is needed to ascertain whether the positive effects result from a single active compound or from the synergistic activities of various metabolites present in the extract.
Phylloporia ribis (P.ribis), traditionally used in China as a natural medicine, is recognized for its functional ingredients, which are beneficial in treating conditions such as pharyngitis, laryngitis, tonsillitis, and hyperglycemia. Ribka et al. (2021) reported the bioactive compounds and antifungal activity of methanolic extracts (from 5% to 25%) of P. ribis. These extracts contained a diverse array of bioactive compounds, including carbohydrates, proteins, amino acids, lipids, alkaloids, glycosides, cardiac glycosides, flavonoids, phenols, terpenoids, steroids, sterols, saponins, tannins, and phosphate. The methanolic extract of P. ribis presented superior antifungal activity, particularly against Aspergillus niger (A.niger), which causes soft rot in carrots, with 100% inhibition observed for all methanolic extracts except at the 5% concentration. The diverse components present in P. ribis hold promise for applications as immunity boosters, food supplements, and in the field of drug discovery. Further investigations are also required to isolate, in pure form, the bioactive compounds responsible for the immunity-boosting, antioxidant, anti-inflammatory, antibiotic, and antimicrobial activities relevant to drug development.
Polyporus grammocephalus (P.grammocephalus) ethanol extract was studied for its nutraceutical potential in view of its bioactive metabolites. Aquino et al. (2018) identified sugars, alkaloids, flavonoids, triterpenes, essential oils, phenols, fatty acids, anthraquinones, coumarins, anthrones, tannins, and steroids, whereas terpenoids, cardiac glycosides, and saponins were not present in the P. grammocephalus ethanol extract. The ethanol extract of P. grammocephalus displayed DPPH radical scavenging activity (26.37%) and a total phenolic content of 38.58 mg GAE/g. A brine shrimp toxicity assay indicated high toxicity, with an LC 50 value of 73.78 μg/mL. These findings suggested that the P. grammocephalus extract is rich in bioactive compounds with significant pharmacological activities, including antioxidant properties and cytotoxic effects.
The antioxidant and antimicrobial properties of ethyl acetate extracts from Alternaria alternata (A.alternata) were investigated by Chatterjee et al. (2019). The ethyl acetate extracts showed MIC values ranging from 300 to 400 μg/mL against both Gram-positive and Gram-negative bacteria. Moreover, the ethyl acetate extract of A. alternata displayed antibacterial inhibition of B. subtilis, Listeria monocytogenes, S. aureus, E. coli, and S. typhimurium with DIZs of up to 14 ± 1.5 mm. Furthermore, a reduction in the activity of key metabolic pathways, including the EMP pathway, TCA cycle, and gluconeogenic enzymes, suggested interference with central carbohydrate metabolism. Additionally, the A. alternata extract demonstrated strong antioxidant potential in DPPH and superoxide radical scavenging assays, with EC 50 values of 38.0 ± 1.7 μg/mL and 11.38 ± 1.2 μg/mL, respectively; ascorbic acid, used as a positive control, had an EC 50 value of 20.23 ± 2.3 μg/mL in this analysis. These results suggest that A. alternata has potential as a source of bioactive compounds with medicinal importance, demonstrating strong antibacterial effects.
Phenylpropanoid (PPPN) compounds are widely utilized in various industries due to their diverse bioactivities, including applications in agriculture, medicine, food, and cosmetics. In this sense, Alternaria sp., a novel natural source of PPPNs, was isolated from grapes by Lu J. et al. (2020). Although starvation is known to stimulate the PPPN pathway in plants, its impact on fungi remains underexplored. In that study, metabolomics analysis revealed that starvation treatment significantly increased the accumulation of shikimate and PPPN compounds in Alternaria sp. Notably, the study also identified additional PPPNs, such as sinapate, 4-hydroxystyrene, piceatannol, and taxifolin, under starvation conditions. These findings indicated that starvation treatment offers an effective strategy to enhance PPPN production and to unveil compounds undetectable under non-starvation conditions. Overall, subjecting Alternaria sp. to starvation treatment during cultivation resulted in the robust activation of both the shikimate and PPPN pathways. These findings shed light on the potential for optimizing the production of PPPN compounds by fungi, offering insights into the genetic resources and secondary metabolite pathways of Alternaria sp. for future functional studies.
Inonotus obliquus (I.obliquus) grows naturally on the trunks of birch trees in colder northern climates and is a medicinal fungus that has been used for therapeutic purposes since the 16th century (Ern et al., 2023). To investigate the antihyperglycemic and anti-lipid peroxidative effects of the dry matter of the culture broth (DMCB) of I. obliquus, Sun et al. (2008) utilized normal, glucose-induced hyperglycemic, and alloxan-induced diabetic mice. The DMCB exhibited a mild hypoglycemic effect in normal mice and achieved euglycemia in glucose-loaded mice after 2 h at the higher dose (1,000 mg/kg compared to 500 mg/kg). In alloxan-induced diabetic mice, the DMCB significantly reduced blood glucose levels, with a notable reduction observed over 21 days. The treatment also decreased serum levels of free fatty acids, total cholesterol, triglycerides, and LDL-cholesterol, while increasing HDL-cholesterol, insulin levels, and hepatic glycogen contents. Additionally, the DMCB enhanced antioxidant enzyme activities and histologically restored pancreas tissues in diabetic mice. Overall, the DMCB of I. obliquus demonstrated significant antihyperglycemic, anti-lipid peroxidative, and antioxidant effects in alloxan-induced diabetic mice. Ma et al. (2013) identified the anti-inflammatory and anticancer compounds present in ethanol, petroleum ether, ethyl acetate, n-butyl alcohol, and water extracts of I. obliquus. Among all extracts, the petroleum ether extract was the most active against human prostatic carcinoma and breast carcinoma cell lines, with inhibitory percentages of 64.66% and 63.26%, respectively. They also isolated lanosterol, 3β-hydroxy-8,24-dien-21-al, ergosterol, inotodiol, ergosterol peroxide, and trametenolic acid from both the petroleum ether and ethyl acetate extracts. Among these metabolites, ergosterol, ergosterol peroxide, and trametenolic acid exhibited anti-inflammatory properties, while ergosterol peroxide and trametenolic acid demonstrated cytotoxic effects on human prostatic carcinoma and breast carcinoma cell lines. Additionally, these metabolites significantly inhibited nitric oxide production and nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) luciferase activity in murine macrophage RAW 264.7 cells. Xu et al. (2016) discovered flavonoids in ethanol, chloroform, ethyl acetate, and n-butanol extracts of I. obliquus. In these extracts, epicatechin-3-gallate, epigallocatechin-3-gallate, and naringin, as well as phenolic acids such as ferulic acid and gallic acid, were identified. DPPH radical-scavenging abilities were significantly higher in Tween 20 medium (minimum EC 50 of 40.63 ± 0.89 mg/L) and linoleic acid medium (minimum EC 50 of 43.54 ± 0.92 mg/L) than in the control medium (minimum EC 50 of 95.80 ± 1.99 mg/L). Baek et al. (2018) isolated triterpenoids from the methanol extract of I. obliquus to assess their cytotoxic effects on four human lung adenocarcinoma cell lines, each with a different p53 tumor protein status (A549, H1264, H1299, and Calu-6). They identified several metabolites, including 3β-hydroxylanosta-8,24-dien-21, (+)-fuscoporianol C, inonotsutriol E, inotodiol, inonotsutriol A, trametenolic acid, saponaceoic acid I, and a novel lanostane-type triterpenoid, chagabusone A, in the methanol extracts of I. obliquus.
Among these, 3β-hydroxylanosta-8,24-dien-21, trametenolic acid, and chagabusone A exhibited the most potent cytotoxicity against all human lung cancer cell lines tested, with EC 50 values ranging from 75.1 to 227.4 µM. Notably, these compounds reduced the viability of the human adenocarcinoma cell lines regardless of p53 mutation or null phenotype. This suggests that the cytotoxic effects observed against human lung cancer cells were independent of p53-related pathways and were instead mediated by apoptosis with caspase-3 activation.
Betulin, betulinic acid, inotodiol, and trametenolic acid, extracted with methanol from the inner and outer parts (the sclerotium) of I. obliquus fruiting bodies, were tested against cancer cell lines (HT-29, AGS, MCF-7, and PC3) by Kim et al. (2020). The MTT assay was used to test the effect of the triterpenoids on the cancer cell lines. The triterpenoids from the outer part extract showed significantly higher anti-proliferative activity against AGS, MCF-7, and PC3 cells compared to the inner part extract.
The hypouricemic properties of triterpenoid acids isolated from I. obliquus in mice with hyperuricemia were investigated by Luo et al. (2021). They identified various triterpenoid acids, including 3β,22, oleanolic acid, 3β,24 diene, betulin, inotodiol, and lanosterol. Their research demonstrated that triterpenoid acids extracted from ethanol extracts of I. obliquus effectively inhibited xanthine oxidase activity (with an EC 50 of 0.065 ± 0.01 mg/mL), displaying a mixed and reversible inhibition pattern. These triterpenoid acids also significantly reduced uric acid levels, hepatic xanthine oxidase activity, and serum blood urea nitrogen in mice with hyperuricemia. This suggests that triterpenoid acids from I. obliquus may help suppress kidney damage, lower inflammation in hyperuricemic mice, and inhibit xanthine oxidase activity. These findings underscore the potential of triterpenoids derived from I. obliquus as a promising dietary or medicinal supplement for managing hyperuricemia. Zhao et al. (2021) identified phenolic compounds, including gallic acid and ferulic acid, the flavonoids epicatechin-3-gallate, epigallocatechin-3-gallate, naringin, rutin, and naringenin, as well as phelligridin G, inoscavin B, and davallialectone, in the ethanol extract of I. obliquus. Moreover, they showed that cultivating I. obliquus on wheat straw led to increased levels of inoscavin B and davallialectone. In that work, the degradation of lignocellulose boosted the synthesis of flavonoids such as epicatechin-3-gallate, epigallocatechin-3-gallate, rutin, and naringin, thereby enhancing the antioxidative capabilities of I. obliquus. The highest antioxidant potential against DPPH radicals was observed in the extract obtained on day 9 (EC 50 of 30.96 mg/L).
In a study by Kou et al. (2021), seven newly discovered lanostane-type triterpenoids, named inonotusols H to N, were identified in the ethanol extract of I. obliquus. These metabolites exhibited significant inhibition of nitric oxide production in lipopolysaccharide-stimulated BV-2 microglial cells, with EC 50 values ranging from 2.32 to 23.83 µM. At a concentration of 25.0 µM, these metabolites showed no cytotoxicity towards the lipopolysaccharide-stimulated BV-2 cells. Molecular docking and Western blotting studies showed that two of the inonotusols exerted the most potent inhibitory effects on iNOS and nitric oxide production. These findings suggest that these bioactive metabolites hold promise for development into therapeutic agents for neurodegenerative disorders, including Alzheimer's disease.
Abu-Reidah et al. (2021) identified phenolics and flavonoids, including gallic acid, protocatechuic acid, salicylic acid, vanillic acid, 2,3-dihydroxybenzaldehyde, 2,5-dihydroxyterephthalic acid, coumaric acid, caffeic acid, 4-methoxycinnamic acid, hispidin, ferulic acid, isorhamnetin, myricetin, quercetin, syringic acid, ellagic acid, hispolon, 3,4-dihydroxybenzalacetone, and 3-O-methylellagic acid, in I. obliquus extract obtained using the modified Swiss water method. These metabolites showed hydrophilic, lipophilic, and total antioxidant activities. Li et al. (2021) reported that phelligridin D, extracted from I. obliquus using both petroleum ether and ethyl acetate, presented good antioxidant properties. This metabolite reduced reactive oxygen species and malondialdehyde levels while increasing the activity of superoxide dismutase and catalase in human glomerular mesangial cells under a high glucose concentration (30 mM). Additionally, it enhanced the capacity of the nuclear factor erythroid 2-related factor 2 (Nrf2), a master transcription factor that upregulates antioxidant response elements (ARE) (Zhao et al., 2017), to promote ARE-driven transcription. It was also shown that phelligridin D activated Nrf2 in mesangial cells exposed to a high glucose concentration, contributing to its protective effects. These findings point towards the potential discovery of novel therapies targeting diabetic nephropathy and the application of I. obliquus metabolites in clinical practice. Wang et al. (2021) identified polyphenol compounds in ethanol extracts of I. obliquus, such as procyanidin, caffeic acid, p-coumaric acid, isorhamnetin-3-O-glucoside, astilbin, tangeretin, gallic acid, kaempferol, quercetin, and catechin. Regarding the antioxidant activity of these polyphenols, their DPPH radical scavenging activity increased from 45.12% to 85.64% as the concentration increased from 1.0 to 5.0 mg/mL, while their hydroxyl radical scavenging activity was 38.76% at 1.0 mg/mL. Over the same concentration range (1.0 to 5.0 mg/mL), their ferric-reducing antioxidant power increased from 0.11 to 0.39 mmol/mL. These findings suggested that polyphenols from I. obliquus possess promising potential as natural antioxidants.
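EC50 values like those quoted throughout this section are typically obtained by fitting scavenging-versus-concentration data and reading off the concentration giving a half-maximal response. A minimal sketch, assuming a simple Hill-type dose-response model and using hypothetical data points rather than any of the cited authors' raw measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, h):
    """Percent scavenging at a given concentration, assuming a simple
    Hill-type dose-response bounded between 0 and 100%."""
    return 100.0 * conc**h / (ec50**h + conc**h)

# Hypothetical DPPH scavenging (%) at increasing extract concentrations (mg/mL).
conc = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
scav = np.array([28.0, 45.0, 63.0, 74.0, 86.0])

# Fit EC50 and Hill slope by nonlinear least squares.
(ec50, h), _ = curve_fit(hill, conc, scav, p0=[1.0, 1.0])
print(f"estimated EC50 ~ {ec50:.2f} mg/mL (Hill slope {h:.2f})")
```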
Chen et al. (2021) extracted forty-six triterpenoids, twelve of which were newly discovered, from I. obliquus using ethanol followed by ethyl acetate. Among these 46 triterpenoids, thirteen showed strong α-glucosidase inhibition, with EC 50 values from 11.5 to 81.8 µM. This study highlighted the significance of triterpenoids in explaining the hypoglycemic effects associated with I. obliquus. Peng et al. (2022) showed that ethanol extracts from I. obliquus, containing inotodiol, lanosterol, and trametenolic acid, significantly ameliorated lipid accumulation in mouse livers induced by a methionine-choline-deficient diet and in the human LO2 hepatocyte cell line induced by oleic acid. These metabolites exhibited protective properties against non-alcoholic fatty liver disease (NAFLD) by mitigating lipid deposition, reversing liver weight loss, and reducing liver triglyceride content, together with restoring lower levels of alanine transaminase (ALT) and aspartate aminotransferase (AST). Inotodiol specifically demonstrated its anti-NAFLD properties by regulating the farnesoid X receptor (FXR)/small heterodimer partner (SHP)/sterol regulatory element-binding protein-1c (SREBP-1c) lipid metabolism pathway (Liu et al., 2016). These findings suggested that these bioactive compounds hold promise as potential drugs for NAFLD treatment. Ogidi et al. (2018) analyzed raw and fermented ethyl acetate and ethanol extracts of Lenzites quercina (L.quercina) for their total phenol and flavonoid contents, alongside assessments of their antioxidant properties. The fungal extracts scavenged various free radicals, including DPPH, OH−, nitric oxide, and Fe2+, with EC 50 values ranging from 0.12 to 1.80 mg/mL. Furthermore, the petroleum ether, ethyl acetate, and ethanol extracts exhibited EC 50 values lower than those of the positive controls butylated hydroxytoluene (BHT) and ethylenediaminetetraacetic acid (EDTA). The ethyl acetate extract from fermented L. quercina exhibited a higher phenolic content of 67.6 mg GAE/g extract, while the ethyl acetate extract from raw L. quercina displayed the highest flavonoid content of 51.4 mg QE/g extract. The antioxidant property, measured by FeCl 3 reducing power, ranged from 18.1 (fermented L. quercina extracted with petroleum ether) to 127.6 mg Ascorbic Acid Equivalents (AAE)/g extract (raw L. quercina extracted with petroleum ether) for extracts obtained from both raw and fermented L. quercina. Fermented L. quercina demonstrated pronounced scavenging properties against nitric oxide and ferrous ion radicals, and it also exhibited superior inhibition of thiobarbituric acid reactive species (TBARS), with the highest inhibitory effect of 109.3%. The study suggested that the high total phenol and flavonoid contents of L. quercina extracts position them as effective antioxidant agents, potentially serving as an alternative therapy in healthcare.
A study by Prasher and Manju (2019) analyzed the active constituents present in ethyl acetate, methanol, and hexane extracts of Peniophora nuda (P.nuda) isolated from mango twigs. GC-MS chromatograms revealed 60, 9, and 60 major peaks in the ethyl acetate, methanol, and hexane crude extracts, respectively. The ethyl acetate extract exhibited 29 peaks with area percentages greater than 1%, with 13-docosenamide, (Z)- occupying the highest share at 12.88%. In the methanolic extract, all 9 peaks had area percentages exceeding 1%, with tricaproin being the highest at 49.82%. The hexane extract displayed 28 peaks with area percentages greater than 1%, and 13-docosenamide, (Z)- was again the highest at 14.46%. According to these GC-MS findings, P. nuda contains significant bioactive compounds with known antioxidant, anti-tumor, antibacterial, immunostimulant, lipoxygenase-inhibitor, antiaging, analgesic, antidiabetic, anti-inflammatory, antidermatitic, antileukemic, anticancer, hepatoprotective, hypocholesterolemic, antiulcerogenic, vasodilator, antispasmodic, and antibronchitic properties (Prasher and Manju, 2019). These results could serve to identify and understand the nature of various bioactive components, with potential applications in biotechnological processes. An extensive study of their pharmacological importance, diversity, and chemical composition can provide valuable insights and advance knowledge in this area, and further isolation of individual phytochemicals has the potential to reveal novel drugs.
The antioxidant properties of terrestrial Flavodon flavus (F.flavus) and Xylaria feejeensis (X.feejeensis), harvested from a dry zone forest in Sri Lanka, were investigated by Fernando et al. (2016). The study also aimed to determine the contribution of phenolic and flavonoid substances to the antioxidant capabilities of these white rot fungi. Both species exhibited strong antioxidant capacity, indicating the presence of an effective antioxidative system. F. flavus demonstrated potent antioxidant activity with an EC 50 of 77.00 ± 0.18 μg/mL based on DPPH radical scavenging capacity, while X. feejeensis exhibited promising antioxidant capacity with an EC 50 value of 98.4 ± 0.28 μg/mL. Additionally, both species contained high levels of phenolic and flavonoid substances, suggesting their contribution to the prominent antioxidant activity. F. flavus and X. feejeensis showed total phenol contents of 55.7 ± 10.89 μg gallic acid/mg and 31.33 ± 8.87 μg gallic acid/mg, respectively, and elevated levels of total flavonoids, with values of 82.4 ± 4.0 μg epicatechin/mg and 23.35 ± 7.0 μg epicatechin/mg, respectively. Notably, F. flavus exhibited higher amounts of total phenolics and flavonoids than X. feejeensis.
Fuscoporia torulosa (F.torulosa) is a fungus that develops woody fruiting bodies on both living and dead trees. From the methanol extract of F. torulosa fruiting bodies, Noji et al. (2021) isolated two distinctive pentacyclic triterpenoids, namely fuscotorunones A and B, using ethyl acetate and subsequent purification. In vitro antimicrobial testing against B. subtilis, S. aureus, and C. albicans was conducted for fuscotorunones A and B. Although the ethyl acetate extract of F. torulosa demonstrated antimicrobial activity, with a MIC of 25 μg/mL against S. aureus and a MIC of 100 μg/mL against B. subtilis, fuscotorunones A and B exhibited no activity against any of the tested microorganisms.
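MIC values such as the 25 and 100 μg/mL quoted here typically come from a two-fold broth microdilution series, with the MIC read as the lowest concentration at which no visible growth occurs. A minimal sketch of that read-out logic (the growth flags below are hypothetical, not data from Noji et al.):

```python
def mic_from_dilution_series(top_conc: float, growth: list[bool]) -> float | None:
    """Return the minimum inhibitory concentration (µg/mL) from a two-fold
    dilution series starting at `top_conc`. growth[i] is True if visible
    growth occurred in well i (concentrations descend by halving).
    The MIC is the lowest concentration with no growth; None means the
    extract did not inhibit growth at any tested concentration."""
    concs = [top_conc / 2**i for i in range(len(growth))]
    inhibited = [c for c, grew in zip(concs, growth) if not grew]
    return min(inhibited) if inhibited else None

# Hypothetical plate: 400 -> 200 -> 100 -> 50 -> 25 -> 12.5 µg/mL.
growth_flags = [False, False, False, False, False, True]
print(mic_from_dilution_series(400.0, growth_flags))  # -> 25.0 µg/mL
```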
The Ganoderma genus belongs to the basidiomycota division, agaricomycetes class, polyporales order, and ganodermataceae family. Among its species, G. lucidum (Ling-Zhi in Chinese, Reishi in Japanese, and Yeongji in Korean) is an outstanding medicinal mushroom with different therapeutic properties. Thus, this fungus is cultivated worldwide, especially in Southeast Asian countries, and many health products based on it are produced and sold (Bijalwan et al., 2020). Even in Europe there is a biotechnology company named Hifas da Terra (https://hifasdaterra.com/en/) that grows several white-rot fungi, G. lucidum among them, to extract active biomolecules that are commercialized in different products with diverse benefits for human health (e.g., for the immune system, oncology, and mental health).
Several research works have reported different secondary metabolites from the Ganoderma genus, particularly from the species G. lucidum (Zhou et al., 2012; Sharma et al., 2019; Lu Y. et al., 2020; Wu et al., 2024), with interesting bioactivities. In this context, Baby et al. (2015) reviewed the biologically active secondary metabolites produced by different species belonging to the Ganoderma genus. They stated that phytochemical studies had resulted in the isolation of 431 secondary metabolites, of which 240 were isolated from G. lucidum. Most of the isolated biologically active secondary metabolites were triterpenes, steroids, and polysaccharides (Seo et al., 2009; Cör et al., 2018). The latter were shown to diminish serum glucose levels in normal fasted mice 3 and 6 h after administration (Zhang and Lin, 2004). Likewise, Yang et al. (2007) observed that the administration of a Ganoderma applanatum (G.applanatum) exopolymer to induced diabetic rats reduced plasma glucose levels by 22%. Additionally, it decreased total plasma cholesterol and triglyceride levels by 20.3% and 22.5%, respectively, and the activity of alanine transaminase and aspartate transaminase was reduced by 23.2% and 20.7%, respectively. Therefore, these compounds could find application in treating diabetes in animals.
Other researchers found that Ganoderma extracts presented interesting anti-cancer activities. Thus, for example, lucidenic acids, isolated from the triterpenoid fraction of ethanolic extracts of a new G. lucidum strain, exhibited anti-invasive activity on human hepatoma carcinoma cells (Weng et al., 2007). Similarly, Li et al. (2013) identified a new triterpenoid, named ethyl lucidenate (ethyl 7β-hydroxy-4,4,14α-trimethyl-3,11,15-trioxo-5α-chol-8-en-24-oate), from ethyl acetate extracts of G. lucidum with cytotoxicity against the cancer cell lines HL-60 and CA46. Also, exopolysaccharides obtained from G. applanatum presented antitumor activity against carcinoma cell lines (Osińska-Jaroszuk et al., 2014). In addition, G. lucidum ganodermic acid was able to inhibit the proliferation of HeLa and U87 human glioma cells, indicating its potential utilization as an anticancer drug (Upadhyay et al., 2014). Li et al. (2018) found that the polysaccharides from G. lucidum and Ganoderma sinense (G.sinense) presented similar chemical characteristics and tumor-suppressive activity in mice, which indicated that polysaccharides from Ganoderma are promising therapeutic agents. Also, Wang K. et al. (2019) isolated one new lanostane triterpene and two known aromatic meroterpenoids from ethanolic extracts of G. lucidum fruiting bodies, which showed high antioxidant and neuroprotective activities. Zhang et al. (2019) showed that ganoderic acids from chloroform extracts of three Ganoderma species presented high antiproliferative activity (inhibition percentages from 70.8% to 80.7%) against three cancer cell lines (i.e., gastric carcinoma, liver carcinoma, and colon carcinoma). Bhat et al. (2021) reviewed the bioactivities of polysaccharides produced by different Ganoderma species, mainly by G. lucidum, and stated that they presented antitumor, antioxidant, immunomodulatory, antibacterial, neuroprotective, hypoglycemic, and hepatoprotective activities. Therefore, they hold promise for further research to formulate efficient natural drugs to prevent and treat several diseases. More recently, Milhorini et al. (2022) isolated, by alkaline extraction, a fucoxylomannan from G. lucidum fruiting bodies with important antimelanomic properties.
On the other hand, many research papers report antimicrobial activities of Ganoderma strains. Thus, Hassan et al. (2019) reported the high antibiotic activity of water extracts from G. applanatum against P. aeruginosa, Pseudomonas fluorescens (P.fluorescens), B. subtilis, Staphylococcus epidermidis (S. epidermidis), and Micrococcus luteus (M.luteus) strains. In addition, chloroform extracts of G. lucidum basidiocarp displayed high antibacterial activity against Salmonella typhi (S. typhi) (18 ± 2.1 mm DIZ for 100 µL of extract) and B. subtilis (17 ± 1.9 mm DIZ for 100 µL of extract) and high antifungal activity against the yeast C. albicans (17 ± 1.7 mm DIZ for 100 µL of extract), which was related to their content of polysaccharides and triterpenoids; these G. lucidum chloroform extracts also presented high antioxidant activity (Uma Gowrie et al., 2014). Also, methanolic extracts of G. lucidum exhibited strong antimicrobial activity against the yeast S. cerevisiae (MIC50 value of 3 μg/mL) but low antimicrobial activity against Gram-positive bacteria (S. epidermidis and Enterococcus raffinosus (E.raffinosus)) and no activity against Gram-negative bacteria (E. coli and P. aeruginosa) or the yeast C. albicans (Hleba et al., 2014). Ismail et al. (2014) studied the antimicrobial activities of methanol, chloroform, dichloromethane, and hexane extracts of Ganoderma boninense (G.boninense). The methanol and chloroform extracts showed significant antibacterial activities against different food-borne and skin-disease bacterial pathogens (i.e., E. coli, B. subtilis, Bacillus cereus (B.cereus), P. aeruginosa, S. pyogenes, Streptococcus pneumoniae, S. aureus, and Klebsiella spp.). Further, GC-MS results confirmed that G. boninense contained bioactive compounds such as dodecanoic acid, cyclododecane, octadecanoic acid, 9-octadecenoic acid, hexadecanoic acid, methyl tetradecanoate, 9,12-octadecadienoic acid, and dodecyl acrylate. In addition, exopolysaccharides from G. applanatum presented antibacterial activity against S. aureus (17.98 ± 0.4 mm DIZ and MIC value of 1 mg/mL) and toxicity against V. fischeri (82.8% cell damage) (Osińska-Jaroszuk et al., 2014). Moreover, ganodermic acid from G. lucidum presented antibacterial properties against the Gram-negative bacteria E. coli and P. aeruginosa (MIC 1 mg/mL) and the Gram-positive bacteria S. aureus and S. epidermidis (MIC 0.25 mg/mL), pointing out its potential use as a broad-spectrum antibiotic (Upadhyay et al., 2014). Hoque et al. (2015) investigated the antioxidant, antimicrobial, and cytotoxic potential of petroleum ether, chloroform, and methanol extracts of a G. lucidum strain collected in Bangladesh. Their results revealed that all the extracts presented high antioxidant activity, low to moderate antibacterial activity (DIZs ranging from 7 mm to 21 mm) against different strains of both Gram-positive (Sarcina lutea, Bacillus megaterium, B. subtilis, S. aureus, and B. cereus) and Gram-negative bacteria (P.aeruginosa, S. typhi, E. coli, Vibrio parahemolyticus, Vibrio mimicus, Shigella boydii, and Shigella disenteriae), and weak cytotoxic activity (brine shrimp nauplii bioassay). However, Romorosa et al. (2017) showed that aqueous extracts of G. lucidum, isolated from decaying logs at Isabela State University, Philippines, contained alkaloids, tannins, glycosides, and, to a lesser extent, saponins, but not flavonoids. Nevertheless, these extracts had low antimicrobial activity against E. coli and especially against S. aureus.
This was likely because they were devoid of flavonoids. According to the review by Ahmad et al. (2021), G. lucidum has a wide range of pharmacological activities, including antiviral activities, due to its content of triterpenoids and polysaccharides. Nonetheless, further studies on the clinical application of the biologically active compounds of this species are needed. More recently, Chan and Chong (2022) reported strong antibacterial activity against methicillin-resistant S. aureus (MRSA) (DIZ 41.08 ± 0.04 mm and MIC 0.078 mg/mL) for ethyl acetate extracts of a G. boninense strain collected in Malaysia. This strong antibacterial activity against MRSA was attributed to its content of aristolochic acid and tamoxifen, which are known to be effective against MRSA (Flores et al., 2016; Bartha et al., 2019), as well as to other metabolites with reported antimicrobial properties (i.e., aminoimidazole ribotide, lysine sulfonamide, carbocyclic puromycin, fenbendazole, acetylcaranine, and tigecycline) (Vince et al., 1986; Livermore, 2005; Stranix et al., 2006; Kim et al., 2015; Ločárek et al., 2015; Qadir et al., 2016; Miro-Canturri et al., 2019; de Oliveira et al., 2020). Hence, it could be a promising candidate for developing drugs able to fight multi-antibiotic-resistant bacteria. In another recent work, it was shown that hot water extracts of Ganoderma neo-japonicum (G.neo-japonicum) exhibited 2-fold higher antioxidant and antimicrobial activities (against S. typhimurium, Salmonella enteritidis, and E. coli) than those of G. lucidum, presumably related to the higher flavonoid content of the G. neo-japonicum extracts (Ayimbila et al., 2023).
Additionally, Wang et al. (2017) reported anti-aging activities of G. lucidum extracts, which were mainly exerted through antioxidation, immunomodulation, and anti-neurodegeneration. The bioactive compounds responsible for these anti-aging effects were polysaccharides, triterpenes, peptides, and polysaccharide peptides. More studies are needed to clarify the mechanisms underlying these anti-aging properties.
The Trametes genus belongs to the basidiomycota division, agaricomycetes class, polyporales order and family polyporaceae.Different research studies have reported the production of secondary metabolites with various biological activities by several strains of the Trametes genus.Among them, T. versicolor (also known as C. versicolor) is the most studied species.Thus, methanolic extracts of T. versicolor exhibited strong antimicrobial activity against the yeast S. cerevisiae (MIC50 value 24 μg/mL), low against Gram-positive bacteria (S. epidermidis and E. raffinosus) and no activity against Gram-negative bacteria (E. coli and P. aeruginosa) and the yeast C. albicans (Hleba et al., 2014).Also, acetonitrile and aqueous extracts of Trametes hirsuta (T.hirsuta), isolated from decaying logs in Isabela State University, Philippines, presented strong antimicrobial activity against E. coli (DIZ 26.36 mm) and S. aureus (DIZ 13.87 mm).This could be related to the flavonoid content in the extracts (Romorosa et al., 2017).Furthermore, isolated cerevisterol (ergosta-7, 22E-diene-3β5α, 6β -triol) from methanol extracts of Trametes gibbosa (T.gibbosa) and Trametes elegans (T.elegans), collected from farms and forests in Ghana, exhibited a broad-spectrum antibiotic activity.Thus, the isolated cerevisterol from T. gibbosa and T. elegans inhibited the growth of S. typhi (MIC25 and MIC 50, µg/mL, respectively), S. aureus (MIC25 and 100 μg/mL, respectively), A. niger (MIC25 and 100 μg/mL, respectively) and E. faecalis (MIC50 and 200 μg/mL, respectively) (Appiah et al., 2020).Nanglihan et al. (2018) showed that the ethanol extracts of T. elegans, collected from the Lingap Kalikasan Park of Central Luzon State University, Philippines, contained flavonoids, tannins, phenols, steroids, alkaloids, anthraquinones, anthrones, coumarins, essential oils, and fatty acids.Also, T. elegans extracts presented significant scavenging activity, antibacterial activities against S. aureus (DIZ 8.30 mm) and E. coli (DIZ 8.07 mm) and high cytotoxicity (brine shrimp nauplii bioassay).Gebreyohannes et al. (2019) found that chloroform, ethanol and hot extracts of two wild fungi, collected from National Reserve Forests, in Kenya, and further identified as Trametes spp.showed interesting antimicrobial activities against different test strains (E.coli, K. pneumoniae, P. aeruginosa, S. aureus, MRSA, C. albicans, and Candida parapsilosis), the highest one being obtained for S. aureus (MIC values 0.83 ± 0.29, 0.67 ± 0.29, and 0.67 ± 0.29 for chloroform, ethanol and hot water extracts, In addition, Hassan et al. (2019) reported the high antibiotic activity of water extracts from T. versicolor against P. aeruginosa, P. fluorescens, B. subtilis, S. epidermidis, and M. luteus strains.Bains and Chawla (2020) reported that the methanolic extracts from T. versicolor, collected from the forest of Chail in India, contained phenolics as the main compounds followed by flavonoids, ascorbic acid, β-carotene, and lycopene and presented significant antimicrobial activities against S. aureus, P. aeruginosa, K. pneumonia, and E. coli (DIZs ranging from 24.14 to 30.18 mm).It also showed anti-inflammatory activities presumably due to its content in glycopeptides.Furthermore, Oyetayo and Akingbesote (2022) tested the antimicrobial properties of acetone and methanolic extracts from raw and submerged and solid-state fermented Trametes polyzona (T.polyzona), collected from dead wood in Nigeria, against S. 
aureus isolated from blood, soil, water, and urine. The methanolic extract from submerged-fermented T. polyzona showed the highest antimicrobial activity against the blood-isolated S. aureus (DIZ 28 mm), probably because of the ability of methanol to dissolve the endogenous compounds of the fungus, whereas the acetone extracts presented low antimicrobial activity. GC-MS analysis of the T. polyzona methanolic extracts revealed the following 14 bioactive compounds: caprylic acid methyl ester, tridecanoic acid methyl ester, myristoleic acid methyl ester, cis-10-pentadecanoic acid methyl ester, palmitoleic acid methyl ester, heptadecanoic acid methyl ester, stearic acid methyl ester, elaidic acid methyl ester, oleic acid methyl ester, linolelaidic acid methyl ester, g-linoleic acid methyl ester, x-linolenic acid methyl ester, heneicosanoic acid methyl ester, and cis-11,14-eicosadienoic acid methyl ester. Recently, Begum et al. (2023) reported that aqueous extracts of T. hirsuta exhibited antimicrobial activity against S. aureus (DIZ 16.00 ± 0.66 mm for a 20 mg/mL extract), K. pneumoniae (DIZ 14.66 ± 0.88 mm for a 20 mg/mL extract), and Salmonella enterica (DIZ 13.00 ± 0.88 mm for a 20 mg/mL extract). They also reported that ethanolic extracts of T. hirsuta had significant analgesic, anti-inflammatory, and antispasmodic activities. Therefore, T. hirsuta could be a valuable source of bioactive compounds for developing new drugs to treat pain, fever, inflammatory disorders, bacterial infections, and gastrointestinal problems. Moreover, Wei et al. (2023) characterized four new sesquiterpenes (three bisabolane sesquiterpenes and one drimane sesquiterpene) from T. versicolor, the drimane sesquiterpene showing antimicrobial activity against S. aureus (MIC50 value 22.2 µM).
On the other hand, Leliebre-Lara et al. (2015) found that n-hexane, dichloromethane, ethyl acetate, and ethanol extracts of T. versicolor, collected from a dead, dry trunk in Cuba, presented anti-leishmanial activity against the parasite Leishmania amazonensis, the activity being highest in the ethyl acetate and ethanol extracts. Also, a partially purified exoproteome of T. versicolor culture filtrates strongly inhibited the growth and T2 toxin production of the cereal pathogen Fusarium langsethiae (Parroni et al., 2019). Wang K. et al. (2019) reported that the bioactive macromolecule polysaccharopeptide from T. versicolor (TPSP), purchased from Fujian Fuzhou Green Valley Biopharmaceutical Technology Research, inhibited the development of morphine addiction in rats, and they pointed out that TPSP could be used as an adjunctive therapy for alleviating morphine resistance in the clinic.
Additionally, a polysaccharide from Trametes orientalis (T. orientalis) presented chemoprotective effects against cyclophosphamide-induced immunosuppression and oxidative stress in mice (Zheng et al., 2017). Furthermore, Roca-Lema et al. (2019) assessed the anticancer effects of polysaccharide-rich extracts from T. versicolor on LoVo and HT-29 human colon cancer cells and showed that the extracts inhibited colon cancer cell proliferation and caused cytotoxicity; moreover, combining the extracts with the known anticancer drug 5-fluorouracil enhanced the cytotoxic effect. More recently, He et al. (2021) purified a protein named musarin from a T. versicolor extract that strongly inhibited the growth of human colorectal cancer cell lines in vitro. Therefore, musarin holds promise for the development of drugs against colorectal cancers, especially chemo-resistant ones.
Concluding remarks
In the evolving landscape of natural bioactive metabolite discovery, white-rot fungi have emerged as prolific sources of novel metabolites with versatile applications in agriculture, healthcare, and pharmaceuticals. These compounds constitute a rich reservoir of bioactive substances synthesized during secondary metabolism from intermediate compounds or by-products of primary metabolic pathways. While secondary metabolites are non-essential for an organism's growth, they display diverse biological activities, underscoring their potential significance. White-rot fungi have a unique ability to decompose all wood components, contributing to the carbon and nitrogen cycles while producing bioactive substances with antioxidant, antimicrobial, and anticancer properties, among others. In light of these considerations, this article has reviewed the potential application of biologically active secondary metabolites from white-rot fungi in fields such as nutrition, medicine, and degradation. The diversity of these compounds highlights their importance for forthcoming research, development, and practical applications across various industries, and it underscores the crucial contribution that white-rot fungi can make to biotechnology and sustainable development. Nevertheless, scale-up of production is still needed to assess the feasibility of commercial applications.
TABLE 1 Bioactive secondary metabolites from white-rot fungi and their benefits.
TABLE 2 Illustration of chemical structures of some bioactive secondary metabolites from white-rot fungi.
Manipulating Indigenous Vegetable-Tanned Leather for Use in Crocheting Art
Abstract: The study explores techniques and methods used to convert indigenous vegetable-tanned leather into yarns that can serve as an alternative material, and to turn the locally made yarns into crocheted ladies' containers and footwear using different stitches. It embraces a qualitative research design implementing two approaches (descriptive and studio-based research). The novelty of the study came from the sampling, which consisted of Crochet Artisans, Leatherworks Teachers
Introduction
Throughout the ages, from prehistoric times to the industrial revolution, wild animals were hunted for food and their skins were used for clothing and shelter. In ancient times, clothes, footwear, bags, shelter (makeshift tents), and other objects were made from the skins of wild animals and of large domestic animals. Leather was used for making garments, footwear, saddles and chaps, holsters and harnesses, sword sheaths, ropes, rugs, water skins, bottles, bags, and containers [1]. The ancient Egyptians used leather for making tents, quivers, saddles, footwear, harnesses, body armour, and weapon carriers [2]. Leather made from goatskin was used to make maps, while leather obtained from pigskin was used to make water skins [3]. Leather was also used to produce items such as boots, automotive and aircraft upholstery, fabrics, handbags, book covers, and accessories for fashion and furnishings [4]. In transportation, leather has been used for seating because of its longevity and comfort [5], and sailing ships used leather jackets and patches to protect the rope rigging from mechanical wear [6]. Correspondingly, in modern times, leather's excellent overall properties, such as breathability, comfort, wear resistance, and puncture resistance, mean that it is widely used in civilian and military footwear, automobile interiors, and office furniture. However, as people's living standards rise, so do their expectations of the functionality and performance of leather-based materials, and the processes of treatment and preservation have changed drastically [7]. Generally, within Africa and most third-world countries, when an animal is killed for the extraction of its skin, the skin is removed from the flesh and dried in the sun; it is then hardened by pounding in animal fats and brains and preserved by salting and smoking. Skins go through various processes to make them suitable for leather production. Leather's hydrothermal stability, mechanical properties, and resistance to chemical and biological degradation allow it to be used in a wide range of applications [8], and its high flexibility and durability make it easy to manipulate into various items [9]. Leather is an alternative to other materials used in art techniques such as macramé, marquetry, painting, and weaving. Vegetable-tanned leather produced in Ghana's northern and southern parts has unique characteristics that make it well suited for a variety of leather products [10]. The common occurrence of animal skin products over time, whether tanned leather, parchment, vellum, oil- or fat-cured skin, or rawhide, attests to the enduring usefulness and desirability of animal skin as a material; these properties account for the enduring presence of leather as a robust and flexible sheet material and for its ready availability in cultures where meat is got from slaughtered animals [11]. A series of research studies has examined the use of manipulated vegetable-tanned leather in various art techniques, and the findings indicate that the characteristics of the materials used in macramé knotting have some similarities with Ghanaian indigenous vegetable-tanned leather [10]. Also, macramé artefacts produced with leather are attractive, durable, and easy to use. The use of vegetable-tanned leather in marquetry has been examined, and it showed that leather has properties such as durability, pliability, and affinity to dyes that make it suitable for use in marquetry
[12].However, it emerged that marquetry artisans do not use leather for their works because of its awful impactful smell.A study on the use of vegetable-tanned leather in the production of ladies' fashion accessories revealed that consumers embraced items made of leather albeit they preferred imported ones which come with excellent finishing [9].Thus, it appears that the use of leather in crocheting art has rarely been explored in art literature.Crocheting is one of the earliest needlework arts which involves the use of thread and hook to produce a fabric.Thus, its origin comes from the French word "crochet", meaning "hook".Crocheting appeals to many people because of its unique uses to make an extensive range of artefacts such as decorative items, scarves, caps, vests, sweaters, purses, belts, lace, doilies, tablecloths, pillow covers, and bedspreads.Crocheting art has gained much prominence in the Ghanaian art industry over a couple of decades.However, it does appear that crochet artisans have not explored the use of leather yarns in making artefacts.They tend to rely on threads or yarns made of cotton, wool, or nylon for the artefacts, notwithstanding the unique characteristics of leather as a suitable material for making crochets.Over time, artisans have manipulated leather in various ways to produce different artefacts.For instance, leather has been worked out and used in weaving, macramé, carving, and pyrography, among others.Moreover, several studies have examined the use of vegetable-tanned leather in art techniques such as macramé, weaving, and painting, among others [10,9,12].For instance, goat leather is very durable and flexible when used in making strips for macramé knots [10].This is due to the fact that goat leather has better water vapor permeability, greater porosity, wear-comfort property, hygienic property, and cooling mechanism.Furthermore, goat leather has slightly more acceptable stretchiness and elongation properties than cow leather.Because goat skin is primarily composed of collagen, the appearance, thickness, and length of these fiber bundles differ in various organs of the body [13].However, the use of locally made vegetable-tanned leather yarn in crocheting art has rarely been explored in the art literature.Most writers have evidently made it clear in their literature that the use of leather in making a variety of artefacts is for human consumption.The use of leather in crocheting art has rarely been explored.For example, the Ghanaian indigenous vegetable-tanned leather which is readily available on the market has unique features such as flexibility, durability, high-temperature resistance, elasticity, high-pressure resistance, smooth dyeing, and colouring.However, locally made leather yarn has been not used in the production of crochet products.To date, there appears to be a less empirical study on the topic despite the evidence that leather is more durable than other foreign materials used.
Leather Manipulation
Since this study is aiming at finding techniques and methods to manipulate indigenous vegetable-tanned leather into yarns for use in crochet art was found as a purposeful approach to extend the practical importance of locally produced leather to reach contemporary requirements and aesthetic appeal.Manipulation is defined as (a) to be used or modified (numbers, facts, etc.) in a skillful manner or for a specific purpose; (b) to be handled or performed with or as if by hand or by mechanical means, by skillful means; (c) to be altered by artful means to serve one purpose according to [14].In addition, manipulation has been described as the realistic method of turning the leather into a condition that will make it possible for the expected work to be carried out [10].These concepts are in direct line with what the study seeks to accomplish.The manipulation in this study is, therefore, the physical method of converting the condition of indigenous vegetable-tanned leather into a new shape to make it appropriate for use in crocheting craft.
Concept and Properties of the Leather Yarns
The only property required for the manufacture of yarns that is lacking in rawhide is a long, thin length; thus, the rawhide undergoes a complicated cutting procedure that provides this function after being processed with the regular tannage of fine clothing leather [15]. The design of the sector, the features of fine clothing leather, the relatively limited size and reasonably high precision at which leather yarns must be cut, and the need to remove tanning deposits, natural oils, and fibrous particles from cut yarns prior to knitting are some of the problems faced in the manufacture of commercially acceptable leather yarn. Leather has such varied properties that earlier writers found it impossible to define them exhaustively [16]. Some properties of leather include strength, durability, and elasticity (and sometimes flexibility and stiffness) [17]. It has also been noted that the characteristics of leather originate partly from the natural structure of the skin from which it is tanned and partly from the choice of manufacturing process; hence, the properties of leather vary considerably with the type and quality of both the raw material and the tanning process employed [16]. Thus, there is no consensus on the general properties of leather, and the properties may vary with the type of leather in question. The literature on leather works and their suitability for specific artefacts appears scant and almost non-existent. However, the few studies that exist show that the kind of leather that is suitable differs with the intended use. For example, one patent points out that macramé leather lace slows down crocheting considerably, attributing this to its heaviness and thickness, which make bending difficult in needlework and knitting machinery [15]. The functionality of the intended product is a prerequisite for determining the specific applicable properties of the leather [16,18]. Five criteria have been identified for finished leather yarn intended for needlework such as crocheting and machine knitting of fine garments: softness, thinness of depth, narrowness of breadth, sufficient strength, and consistency [15]. Softness, which tends to be the most distinguishing factor between leather yarn and leather lace, gives drapability to the knitted fabric made of leather yarn and the "soft side" that attracts the fashion industry; this attribute enables the yarn to be formed into various shapes throughout the knitting process. Animals whose skins possess the softness required for knit fashion clothing and the foldability required for knitting machines include cattle, sheep, pigs, buffalo, deer, antelopes, horses, and goats. Thinness of depth depends on the efficiency of the splitting machines, as leather in its natural state is not of standardized thickness [15]. Similarly, the quality and narrowness of the width are decided by the machine and the device that enable the leather to be cut precisely; the cutting system therefore has to be designed and optimized to preserve the flexibility needed in this material. To achieve precision, since the leather is soft, a temporary stiffening compound should be added to make the leather rigid enough for accurate cutting. To withstand the pressure and stresses exerted by machine knitting and needlework, leather yarn must have sufficient tensile strength; this property prevents wear and tear on the knitted fabric. Although leather for clothing garments does not require the tensile strength of lace leather, adequate tensile strength is still needed. Leather that satisfies this requirement has a dense grain structure, which can be traced to young animals such as calves. However, younger animals have small, thin skins, which result in a smooth, fine grain structure, and, comparatively, female skins usually have a finer grain structure than male skins and consequently give softer and more elastic leather [16,18]. Consistency refers both to the length of yarn required for the economical operation of the knitting machine and to the consistent presence of the other four properties along that length [15]. This study, therefore, seeks to explore different techniques and methods of manipulating indigenous vegetable-tanned leather to produce yarns that can serve as an alternative material for crocheting artefacts.
Methods and Materials
To project the novelty proposed by the study, studio-based research and descriptive research were adopted. The studio-based research was used to explore techniques and methods for manipulating the locally tanned leather into yarns for use in crocheting art, while the descriptive research was employed to provide a comprehensive description of the processes involved in manipulating the locally tanned leather into yarns.
Preparation
A secondary preparation technique was carried out on the leather [11]. The goatskin treatment was aimed at preventing mould infestation of the leather surface and reducing the offensive odour associated with local leather. The secondary treatment involved the following steps. Sanding: the flesh side of the leather was scraped with rough-textured 60-grade sandpaper to remove the excess flesh left after tanning. Liming: the flesh side was scraped with a mixture of lime and wood ash to remove the smell, blood, and other impurities from the leather. Soaking: the leather was immersed in clean water to wash away the applied mixture and to soften it. Stretching: the leather was stretched on a clean board and the edges were pinned, so that the leather regained the full size of the skin.
Before the final work (the crocheted artefacts) was executed, the locally made leather yarns went through a series of tests to find out whether the properties of the leather matched those of the yarns normally used in producing crocheted artefacts, since crocheting yarns are noted for their flexibility, durability, construction, texture, and colour.
Considering the Suitability of the Ghanaian Indigenous Vegetable Tanned Leather for Use in Crocheting Art
The study was based on the features of the yarns used for crocheting. Two types of strips (flat and rounded), both from goatskin, were concentrated on. To find out which one was more appropriate as a crocheting yarn, the following qualities of existing crocheting yarns were looked out for: flexibility, strength, composition, construction, texture, and colour. The following practical measures were applied to manipulate the selected leather for use in the study: 1) Cutting method a) Straight Cutting Technique: the leather was marked out with a pencil, the marked points were joined with straight lines using a ruler, and the straight lines were cut out with sharp scissors, as shown in Figure 1 below.
b) Spiral Cutting Technique
A pencil in a compass was used to draw several circles of the same line spaces on the leather surface.The circles connected with a diagonal line were cut with a pair of scissors along the marked lines to the last circle to secure a long strip.Figure 2, demonstrate cutting of leather in spiral form.
Source: Studio Activity
Examining the Suitability of Goat's Strips (Flat and Rounded) for Crocheting Yarn Activity 1: Testing for Flexibility of the Leather Strips
The strips obtained from project one (wet pounding and dry pounding) were pulled, bent, and crumpled continuously with the hands to determine the malleability of each strip. Figure 5 is an example of crumpled leather.
Source: Studio Activity The eyes, fingers and the skin were tools used to examine the texture.The strips were pressed with fingers then after passing on the skin to feel the texture better as shown in figure 7 below.
Source: Studio Activity
The Application of Dyes on Leather Strips
The aim of testing the strips in various dyes was to find out how each strip responds to dye. The study identified some of the dyes available on the market and tested them on the leather. Liquid dyes were poured into small clean containers, while the powdered ones were mixed and poured into similar clean containers, and each container was labelled according to the dye it held. Pieces of strip A (dry pounding) and strip B (wet pounding) were cut out and immersed in the dyes for 5 minutes. The strips were removed from the dye and placed on a clean surface under shade for 3 minutes. The strips were then soaked in clean water to remove the excess dye and placed on a clean surface under shade to dry gradually.
Glitters Dye
Source: Studio Activity
Insoluble Powder Dye
Source: Studio Activity
The Production of Leather Strips for the Leather Yarn
After a series of experiments with the locally tanned strips for crocheting yarns, the following manipulated strips were found suitable for this study.
1) Pounded leather strips soaked in water
2) Strips dyed with a mixture of suede and insoluble dye
3) Strips dyed with vat dye
The production of the leather yarns was divided into five projects. These projects are presented below:
Project 1: Production of Bluish Green Leather Yarn
The working process involved in the production of bluish-green leather yarn is as follows: Step 1-Circles drawn on sheets of leathers with the use of a compass and a pen were 450mm in diameter.Spaces between circles were 0.25mm from the centre to the edge of the leather with a ruler and pen as indicated in Figure 17.Step 2 -The measured strips were cut out with a pair of scissors using a spiral cutting technique by connecting each circle with a diagonal line.The strips were soaked in water overnight and pounded for 10 minutes, soaked again for 20 minutes and pounded again.This process was done five times respectively in a mortar with a wooden pestle to loosen up the fibres, making them softer and obtaining a strand-like effect.
Source: Studio Activity
Step 3 -Three tablespoonsful (15 grams) of green suede dye, two tablespoonsful (10 grams) of insoluble dye and twelve tablespoonful (60 grams) of salts mixed with warm water in a bucket.The strips were soaked in the mixture for 10 minutes for the dyes to penetrate well into the fibres.The strips were taken out and left in the shade for 10 minutes and then rinsed in clean water to clear off the excess dyes.The dyed strips were stretched on a metal mesh and left in the shade for it to dry slowly as seen in Figure 18 below.Step 4 -After drying, the strips were trimmed with a scissors at the knotting point of each meeting strip and rolled onto a plastic rod. Figure 19 is an example of leather being trimmed and rolled.
Project 2: Production of Orange Leather Strips
The working process involved in the production of orange leather yarn is as follows: Step 1 -A khaki leather was marked on the flesh side with circles of diameter (450mm) using a compass and pen as seen in Figure 20.Steps 2 -Equal spacing of 0.25mm were measured from the centre of the circles on the sheets to the edges with a metal ruler and pen.
Step 3 -The drawn circles were cut out with a scissors using the spiral technique by creating a slanted line between each spaces.The strips were soaked overnight after it was pounded and repeatedly soaked five times, respectively.
Step 4 -Six tablespoonful (30 grams) of orange vat dye, six tablespoonsful (30 grams) of hydro, 750ml of clean water, and half a teaspoon of caustic were mixed in a bucket following the above arrangement respectively.The strips were soaked in the mixture for 10 minutes.The strips were removed and left in the shade for 10 minutes and rinsed with water to clear the excess dye solution.The strips were stretched on a metal mesh and allowed to dry as illustrated in Figure 21 below.Steps 5 -The dry strips were trimmed at the knotting areas, where two strips met with scissors and rolled onto a plastic rod. Figure 22
Project 3: The Production of Tie and Dye Leather Yarn
The working process involved in the production of tie and dye and golden yellow yarn is as follows: Step 1 -Circles of different diameters were drawn to cover the whole sheets of leathers with the aid of a compass and a pen.Equally spaced lines of 3mm were marked from the centre on the circles on the sheets of leathers to the edge.After marking, the sheets of leathers were cut in a spiral form to achieve a long strip with a pair of scissors.
Step 2 -The cut-out strips were soaked in clean water overnight after which they went through a series of pounding and soaking in a mortar with a wooden pestle to loosen up the fibres, to make them soft and have a strand-like look.
Step 3 -Half of the pounded strips were tied with a leather strip to give it a tie-dye effect.The remaining strips of leather were dyed without tying.Three tablespoonsful (15 grams) of orange vat dye, three tablespoonsful (15 grams) of hydrous, and 750ml of water and a half teaspoon (1 gram) of caustic were mixed in a container.The twisted leather strips were immersed in the dye solution for three minutes to enable it to penetrate the fibres.Three tablespoonful (15 grams) of violet vat dye, three tablespoonful (15 grams) of hydrous, 750ml of water and a half teaspoon (1 gram) of caustic were mixed, and the leather strips were immersed in it for another three minutes.Another vat dye mixture was made (orange, brown and blue-black), and the leather strips were soaked in each for three minutes.Finally, all the vat dye solutions were mixed in one container, and the leather was soaked in it for 10 minutes before left under the shade to dry for five minutes.The strips were then rinsed in clean water to remove excess dye and stretched on a metal mesh.Figure 23 Step 4 -After leather strips were dried.It was trimmed at the knot sections with scissors and rolled onto a plastic rod. Figure 24 bellow is an example of a tie and dye leather and a golden yellow yarn.
Project 4: The Production of Blue-Black Leather Yarn
The working process involved in the production of blueblack leather yarn is as follows: Step 1 -Circles with a diameter of 450mm were drawn on sheets of leathers with the use of a compass and a pen.Each space between the circles was 0.25mm from the centre to the edge of the cut-out circular leather with the help of a metal ruler and pen.
Step 2 -The measured strips were cut out with scissors using a spiral cutting technique by connecting each circle with a slanted line.The strips were soaked in water for a day after which it was pounded for 10 minutes and soaked again overnight and a series of soaking and pounding.This process was done five times respectively in a mortar with a wooden pestle to loosen up the fibres, make it softer and have a tinnier strand look.Figure 25 Step 3 -Six tablespoonful (30 grams) of blue-black vat dye, six tablespoonful (30grams) of hydro, 750ml of clean water and half teaspoon (1 gram) of caustic was mixed in a bucket following the arrangement respectively.The strips were soaked in the mixture for 10 minutes.The strips were removed and left in the shade for 10 minutes and rinsed with water to clear the excess dye solution.The strips were stretched on a metal mesh and allowed to dry as shown in Figure 26 below.Steps 4 -The dried strips were trimmed at the knotting areas, where two strips met with scissors and rolled onto a plastic rod. Figure 27 below is an example of a blue-black yarn rolled.
Project 5: The Production of Violet Leather Yarn
The working process involved in the production of violet leather yarn is as follows: Step 1: The remaining circular leather was cut using 5mm spacing with scissors.They were soaked in water overnight and pounded in a mortar and pestle for 20 minutes to loosen the fibres, make it small and give it a bit of roundness as shown in Figure 28 Step 2 -Three tablespoonful (15 grams) of violet vat dye, three tablespoonful (15 grams) of hydrous, 750ml of water and a half teaspoon (1 gram) of caustic.The leather strips were immersed in the dye solution, stirred and left for twenty minutes to enable it to penetrate the fibres.The strips were then rinsed in clean water to wash off excess dyes and stretched on metal mesh for it to dry gradually under the shade as illustrated in Figure 29 below.Step 3 -The knotted sections were trimmed with a scissor and rolled onto a plastic rod. Figure 30
Crocheting Product Outcome with the Manipulated Leather Yarns
The artefacts were crocheted with combinations of crocheting stitches to create varieties in the products.
Findings and Discussion
The main objective of this study was to explore techniques for manipulating indigenous vegetable-tanned leather into yarns and to convert the locally made yarns into ladies' containers and footwear using combinations of crocheting stitches. Two cutting techniques were adopted for this study, namely spiral and straight cutting. When the sheets of leather were marked in straight lines and cut out, the strips obtained were short and could not be used for crocheting. The spiral technique gave a lengthy strip, whose length was determined by the size of the leather sheet. After the pounding techniques, strip A (dry pounding) became flat, elongated, and flexible, while strip B (wet pounding) became rounded and very flexible owing to the series of soaking and pounding. When pressure was exerted on the two strips by crumpling and pulling them at different angles, strip A showed a few wrinkles on the grain side but no evidence of tearing, whereas strip B showed neither wrinkles nor tearing. When an extreme pulling force was exerted, the small sizes of strip A broke but the bigger ones showed no evidence of tearing, while strip B showed no evidence of tearing regardless of size. On feeling both strips with the fingers, strip A felt a bit rough because of its grain texture, while strip B felt smoother compared with strip A. Since crocheting is needlework, both strips were tested with the hook by constructing a piece to assess ease of working and appearance. When strip A was worked, the sample showed both sides, that is, the grain and flesh sides, and the strip mostly slipped off the hook, making it a bit difficult to work with.
On the other hand, strip B was more comfortable to work with. The dye tests showed that strip A did not dye well, especially on the grain side, compared with strip B. The vat dye, glitter dye, and suede dye could not dye both sides of strip A as well as they dyed strip B. The insoluble pigment on its own left the leather moist, but when it was added to the suede-plus-salt solution it dyed perfectly, with no residual moisture. At the end of the dye testing, the mixture of suede and insoluble dye proved the most suitable dye solution for the project because of its availability, cost, and effect. Burnishing and dyeing of leather strips (cords) make the end products attractive and appealing to the eye [2]. The fact that the local vegetable-tanned leather yarn could be dyed in different colours made the artefacts very beautiful, and the tie-dye effect added to the colouring since it is rare in the existing yarns on the market. The flexibility of the goat strips allowed the yarns to be crocheted into different stitches, giving a detailed structure that made the artefacts very attractive and appealing to the eye. The continuous pounding of the leather strips gave a distinctive texture to the final work. Since the leather strips had been tested and proved strong enough, with no evidence of wear or tear, the artefacts produced were also strong. The leather yarns crocheted into ladies' products were very comfortable to use. The lady's office bag, school bag, purses, wedges, slippers, sandals, and heels came in different styles, sizes, and shapes. The bags and purses served as containers to hold different items, while the footwear was made to protect the feet against sharp objects on the ground and also to complement the bags. The various designs and the nature of the work made it possible for the artefacts to fit their purpose.
Conclusion
The purpose of this study was to examine the viability of vegetable-tanned leather as an alternative material and to convert the locally made yarns into ladies' containers and footwear using combinations of crocheting stitches, thereby reducing over-dependence on foreign and other local yarns, which are limited on the market. To achieve this objective, the workplaces of crocheting artisans who make their products with yarns, and the places where they buy yarns, were visited to gain knowledge of their crocheting yarns and of the products they normally make with them, and the characteristics and physical properties of their yarns were examined. The study also reviewed the available literature related to the topic. The indigenous vegetable-tanned leather was manipulated to produce suitable yarns that can serve as an alternative material, and the yarns were converted into artefacts after the leathers had first been passed through a secondary preparation process. Practical measures served as criteria for selecting strips (flat and rounded) for the study, which were examined physically with a view to finding the best alternative yarn for crocheting art; the physical examination included testing for durability, flexibility, and the yarn's ability to be picked up by the hook. Finally, the strips were tested with different dyes on the market to establish which was appropriate for dyeing the leather yarns in different colours. It was observed that the manipulation of indigenous vegetable-tanned leather into yarns for crocheting art was successfully carried out by cutting the goat leather into yarns using the spiral cutting method, applying a series of soaking and pounding steps, and using vat dyes and suede dye mixed with insoluble dye to change the colour of the leather. The study revealed that indigenous vegetable-tanned leather can be manipulated into yarns for use in crocheting art using the spiral cutting and wet pounding techniques. Strips from wet pounding were found to be the best for use as crocheting yarn because of their strength, their ability to twist and fold easily, the ease with which they could be worked with the hook, and their ability to show the details of stitch structures. Through the application of various dyes, the leather can be dyed in a variety of colours. Again, it was observed in the field that most crocheting artisans limit their work to the few yarns on the market, and that leather artisans and students do not use crocheting techniques in their work; once the potential of leather is tapped by both crocheting and leather artisans, it will help to create diversity in crocheting yarns and crocheted artefacts and expand the use of leather.
Recommendation
An in-depth study of the current state of knowledge regarding the manipulation of indigenous vegetable-tanned leather for use in crocheting art has been conducted. However, there appear to be some research gaps in the field, resulting in limited leather yarn production for crocheting. In light of this, additional research is still needed in the following areas:
i. Assessing and improving the colour fastness of various dyes (acid dyes, direct dyes, basic dyes, sulfur dyes) and their effect on leather yarns.
ii. Investigating the workability of leather yarns in various techniques such as plastic canvas stitch and knitting.
iii. Developing innovative traditional and foreign finishing techniques to enhance the texture, colour, appearance, versatility, and quality of leather yarns.
iv. Combining locally produced leather yarns with other materials (wood, fabrics, cords, bamboo, metals, and ceramics) to create a diverse range of artefacts.
v. Manipulating other types of skin (rabbit, pig, snake, etc.) and hide leathers (deer, buffalo, etc.) in the production of yarns for use in crocheting art.
Figure 1. Cutting the leather sheet in straight lines.
Figure 2. Cutting the leather sheet in spiral form.
2) Soften Techniques a) Dry Pounding: the marked and cut-out strip was pounded with a pestle and mortar to open up the fibres and attain flexibility, without any liquid added, as shown in Figure 3 below.
Figure 3. Pounding the cut-out leather strips.
b) Wet Pounding: in this type of pounding, the leather strip was soaked in water overnight, pounded, then soaked and pounded repeatedly to get the water to penetrate the leather fibres and cause softness, as demonstrated in Figure 4 below.
Figure 6. Pulling of leather strips.
Activity 3: Testing for the Texture (Roughness or Smoothness) of the Leather. The eyes, fingers, and skin were the tools used to examine the texture. The strips were pressed with the fingers and then passed over the skin to feel the texture better, as shown in Figure 7 below.
Figure 7. Feeling the leather strips with the fingers.
Activity 4: Testing for Easy Construction with the Hook. Each leather strip was tested to find out how well it was picked up by the hook and how well it corresponded to the stitches made with it, as illustrated in Figure 8 below.
Figure 15. Testing leather strips A and B with glitter dye.
Figure 16. Testing leather strips A and B with insoluble dye.
Figure 17. Drawing of circles of the same distance on the leather sheet.
Figure 18. Leather strips soaked in dye and placed under a shade to dry gradually.
Figure 20. Circles drawn on the leather sheet.
Figure 21. Strip stretched on a metal mesh and allowed to dry.
Figure 23. Leather strips tied with shorter strips and soaked in dye.
Figure 24. Rolled tie-and-dye and golden yellow leather yarns.
Figure 25. Leather cut into a strip.
Figure 28. Marked leather being cut and soaked in clean water.
Figure 29. Preparation of vat dye and leather yarn stretched on a metal sheet.
Figure 31. Lady's handbag and a pair of heels made with the violet leather yarn.
Figure 32. Mini travelling bag and a pair of wedges made with the blue-black leather yarn (double and half-double crochet stitches).
Figure 34. Clutch bag and a pair of sandal heels made with the orange leather yarn.
Figure 35. Clutch bag and a pair of slippers made with the bluish-green leather yarn.
miR-335-5p suppresses gastric cancer progression by targeting MAPK10
Background Recent studies have established the roles of microRNAs (miRNAs) in cancer progression. The aberrant expression of miR-335-5p has been reported in many cancers, including gastric cancer (GC). In this study, the precise roles of miR-335-5p in GC as well as the molecular mechanisms underlying its effects, including the role of its target MAPK10, were evaluated. Methods Quantitative real-time PCR was used to evaluate miR-335-5p levels in GC cell lines and tissues. MTT and colony formation assays were used to detect cell proliferation, and Transwell and wound-healing assays were used to evaluate the invasion and migration of GC cells. The correlation between levels of miR-335-5p and the cell cycle-related target gene mitogen-activated protein kinase 10 (MAPK10) in GC was analyzed. In addition, the candidate target was evaluated by a luciferase reporter assay, qRT-PCR, and western blotting. Results The levels of miR-335-5p were downregulated in GC tissues and cell lines. Furthermore, miR-335-5p inhibited the proliferation and migration of GC cells and induced apoptosis. Additionally, miR-335-5p arrested the cell cycle at the G1/S phase in GC cells in vitro. Levels of miR-335-5p and the cell cycle-related target gene MAPK10 in GC were correlated, and MAPK10 was directly targeted by miR-335-5p. Conclusions These data suggest that miR-335-5p is a tumor suppressor and acts via MAPK10 to inhibit GC progression.
Background
Gastric cancer (GC) is still a significant public health problem worldwide [1,2]. Over 1,000,000 new cases and an estimated 783,000 deaths were reported in 2018, making it the fifth most frequently diagnosed cancer and the third leading cause of cancer deaths [3]. A wide range of factors, such as lifestyle, Helicobacter pylori infection, polyps, gastric ulcers, genetic factors, and gastric residual tissue, may be involved in gastric tumorigenesis [4]. Although there are many methods for the diagnosis and treatment of GC, 30% of patients are diagnosed at an advanced stage [5]. Thus, useful biomarkers for early screening or detection are essential for improving survival rates [6]. miRNAs of about 22 nucleotides contribute to gastric carcinogenesis by altering the expression of oncogenes and tumor suppressors [10]. For example, miR-181d [11], miR-99a [12], miR-105 [13], and others have suppressive effects on GC development, whereas other miRNAs, including miR-188-5p [14] and miR-221 [15], promote GC growth.
MiR-335-5p is abnormally expressed in many cancers. For instance, miR-335-5p is significantly downregulated and has a vital role in the metastasis of non-small cell lung cancer [16]. miR-335-5p is downregulated in breast cancer cells and is a promising biomarker for breast cancer treatment [17]. Additionally, miR-335-5p is downregulated in renal cell carcinoma and is a candidate therapeutic target [18]. The downregulation of miR-335 in GC has been reported [19]; however, its precise roles in GC cells are not fully understood.
MiRNAs function by binding to the 3′UTR of target mRNAs in a complementary base-pairing manner, thereby contributing to cell apoptosis, proliferation, and differentiation [20][21][22][23]. They are thought to regulate more than 50% of protein-coding genes. Based on a literature review and gene target prediction algorithms, including TargetScan, miRanda, and miRBase, we hypothesized that mitogen-activated protein kinase 10 (MAPK10) is a potential target of miR-335-5p. MAPK10 is a member of the Jun N-terminal kinase subgroup of mitogen-activated protein kinases. MAP kinases act as integration points for multiple biochemical signals and are involved in a wide variety of cellular processes, such as proliferation, differentiation, transcription regulation, and development [24,25]. Expression patterns of MAP kinases differ depending on the tumor type. Zhang et al. showed that MAPK10 is expressed at low levels in cervical cancer tissues and cells [26]. However, the percentage of MAPK10 protein-positive cells is significantly higher in ovarian serous, mucinous, and clear cell carcinomas than in normal tissues [27].
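As a concrete illustration of the seed-based target matching described above, the sketch below shows how a putative miR-335-5p seed match in a candidate 3′UTR could be located in R with the Bioconductor Biostrings package. This is not the authors' pipeline: the UTR sequence is a made-up placeholder, and the mature miRNA sequence should be taken from the current miRBase entry.

```r
# Illustrative sketch only (not the authors' pipeline): locating a putative
# miR-335-5p seed match in a candidate 3'UTR with Bioconductor's Biostrings.
library(Biostrings)

# Mature miRNA sequence: replace with the current hsa-miR-335-5p entry from miRBase.
mir335_5p <- RNAString("UCAAGAGCAAUAACGAAAAAUGU")
# Candidate 3'UTR: placeholder sequence; use the real MAPK10 3'UTR from Ensembl/NCBI.
utr_3p <- DNAString("ACGTAGCTCTTGACCTAAGGCTCTTGATTC")

# Convert the miRNA to the DNA alphabet and take the canonical seed (positions 2-8).
mir_dna <- DNAString(gsub("U", "T", as.character(mir335_5p)))
seed    <- subseq(mir_dna, 2, 8)

# A seed-match site in the mRNA is the reverse complement of the seed.
site <- reverseComplement(seed)
matchPattern(site, utr_3p)   # positions of putative 7-mer matches, if any
```

Dedicated tools such as TargetScan and miRanda apply additional criteria (site conservation, pairing energetics), so a raw seed match like this is only a first filter before experimental validation.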
In this study, the effect of miR-335-5p on GC progression via MAPK10 was evaluated. In particular, we compared the expression levels of miR-335 in GC tissues and cells with those in matched normal tissues and in the MKN-28 and SGC-7901 cell lines. In addition, we used bioinformatics approaches, luciferase assays, qRT-PCR, and western blotting to evaluate its relationship with MAPK10 and further evaluated the functions of miR-335 and MAPK10 in GC cell proliferation, metastasis, and apoptosis. Overall, our results demonstrated that miR-335 suppresses the progression of GC by targeting MAPK10.

Cell lines and culture
Human GC cell lines (AGS, BGC-823, MKN-45, MKN-28, and SGC-7901), a normal gastric epithelial cell line (GES-1), and model cells (HEK-293) were provided by the Biomedical Experiment Center of Xi'an Jiaotong University (China). The use of these cell lines was approved by the Ethics Committee of Yan'an University College of Medicine (China). Human GC cells were cultured in DMEM (PAA Laboratories, Pasching, Austria) containing 10% fetal bovine serum and in RPMI 1640 medium (PAA Laboratories) at 37 °C in a 5% CO2 incubator. The culture medium was changed once every 2-3 days. MKN-28 and SGC-7901 cells in the logarithmic growth phase were collected and subjected to the following experiments.
Cell transfection
GC cells in the logarithmic growth phase were digested and inoculated onto a 6-well culture plate. After cells reached 60-80% confluence, the miR-335-5p-mimics and miR-335-5p-inhibitor (GenePharma, Shanghai, China) were added to the corresponding wells for further culture for 24-48 h.
MTT assay
Cell proliferation was assessed using the MTT Kit (Sigma, St Louis, MO, USA). Cells in the logarithmic growth phase were harvested and seeded on a 96-well plate. At 24, 48, and 72 h after seeding, 10 μL of MTT was added to each well and the cells were incubated for 4 h. Each well was supplemented with 150 μL of DMSO, and the optical density (OD) was recorded at 490 nm.
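To make the readout concrete, the following minimal sketch (with made-up OD values, not data from this study) shows how raw OD490 readings from such an MTT assay can be background-corrected and expressed as viability relative to the control group at each time point.

```r
# Minimal sketch with hypothetical OD490 readings; not data from this study.
od <- data.frame(
  time_h = rep(c(24, 48, 72), each = 2),
  group  = rep(c("miR-ctrl", "miR-335-5p mimics"), times = 3),
  od490  = c(0.42, 0.40, 0.78, 0.61, 1.15, 0.80)
)
blank <- 0.05                      # medium-only background well

od$corrected <- od$od490 - blank
# Within each time point, normalize to the control well (listed first per time point).
od$relative_viability <- ave(od$corrected, od$time_h, FUN = function(x) x / x[1])
od
```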
Colony formation detection
Transfected cells in the logarithmic growth phase were seeded onto a 6-well plate. After 2 weeks of culture, the cells were fixed with 4% paraformaldehyde and stained with crystal violet. Images were obtained and cells were counted.
Flow cytometry
Transfected cells in the logarithmic growth phase were inoculated onto a 6-well plate and cultured for 1 day. Cells were fixed in 70% ethanol for 24 h and treated with propidium iodide and RNase provided with the kit. The cell cycle distribution was detected by flow cytometry.
Cell invasion assay
Transwell chambers (8-μm pore size; Millipore, Billerica, MA, USA) were coated with Matrigel (15 μg/filter; BD Biosciences, Franklin Lakes, NJ, USA). Cells (2.0 × 10 4 ) in serum-free medium were added to the upper chamber, and the bottom wells were filled with complete medium. The cells were allowed to cross the Matrigel-coated membrane for 48 h.
Wound-healing assay
A wound-healing assay was performed to examine metastasis. Briefly, after cells reached 90% confluence in 12-well plates, a single scratch wound was generated with a 200-μL disposable pipette tip. The extent of wound closure was measured after 48 h.
Statistical analysis
Results are shown as means ± SEM of at least three independent experiments. SPSS 22.0 was used for statistical analyses. Bioinformatics analyses were performed using the ggstatsplot package in R. Experimental data were processed using GraphPad Prism 7.0. Comparisons were conducted with the independent t-test, and P < 0.05 was considered statistically significant.
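As an illustration of this analysis pipeline, the sketch below shows an independent-samples t-test and a ggstatsplot comparison figure in R; the data frame and its values are hypothetical stand-ins for the relative expression measurements, not the study data.

```r
# Hedged sketch: hypothetical relative-expression values, not the study data.
library(ggstatsplot)

set.seed(1)
expr <- data.frame(
  tissue = rep(c("Normal", "Tumor"), each = 22),
  mir335 = c(rnorm(22, mean = 1.0, sd = 0.3),   # adjacent non-cancerous tissue
             rnorm(22, mean = 0.4, sd = 0.2))   # GC tissue
)

# Independent-samples t-test, as described in the statistical analysis.
t.test(mir335 ~ tissue, data = expr)

# Violin/box comparison with the test statistics annotated on the plot.
ggbetweenstats(data = expr, x = tissue, y = mir335,
               type = "parametric",
               title = "Relative miR-335-5p expression (qRT-PCR)")
```

For the matched tumour/adjacent-tissue design a paired t-test could equally be used; the sketch simply follows the independent t-test stated above.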
miR-335-5p inhibits GC cell proliferation in vitro
To investigate the role and function of miR-335-5p in GC cells, we analyzed its expression in 22 pairs of GC tissues and matched adjacent non-cancerous tissue samples by qRT-PCR. miR-335-5p levels were significantly lower in GC samples than in non-cancerous tissue samples (Fig. 1a). These results were validated in five GC cell lines. miR-335-5p levels was lower in the BGC-823, SGC-7901, MKN-45, MKN-28, and AGS cell lines than in the GES-1 cell line (Fig. 1b). To clarify the function of miR-335-5p in GCs, MKN-28 and SGC-7901 cells were selected for further analyses. As determined by qRT-PCR, miR-335-5p mimics successfully elevated miR-335-5p expression in two cell lines; the effect of the inhibitor was moderate due to the low expression of endogenous miR-335-5p in MKN-28 and SGC-7901 cells (Fig. 1c). Thus, miR-335-5p may act as a tumor suppressor in GC.
miR-335-5p induces cell cycle arrest and apoptosis in GC
Gain- and loss-of-function analyses were conducted by transfecting MKN-28 and SGC-7901 cells with miR-335-5p inhibitor-ctrl, inhibitor, miR-ctrl, and mimics. MTT and colony formation assays showed that the upregulation of miR-335-5p in MKN-28 and SGC-7901 cells inhibited cell growth and colony formation, while the inhibition of miR-335-5p exerted only moderate effects on GC cells, which may be explained by the low endogenous levels of miR-335-5p in MKN-28 and SGC-7901 cells (Fig. 2a, b). Consistent with these results, a flow cytometry analysis revealed that the upregulation of miR-335-5p arrested cells in the G0/G1 phase and inhibited the transition to the G2/M phase; similar effects were not observed in the miR-ctrl-transfected cells (Fig. 2c). Furthermore, flow cytometry confirmed that the upregulation of miR-335-5p induces apoptosis in GC cells. However, the miR-335-5p inhibitor resulted in a slight but non-significant difference in apoptosis compared to that in cells transfected with the negative control, which may be explained by the low expression level and low inhibitory efficiency in MKN-28 and SGC-7901 cells (Fig. 2d). Overall, the inhibition of miR-335-5p promoted proliferation and inhibited apoptosis in GC cells, while the inverse results were obtained in the miR-335-5p mimic group.
Inhibition of miR-335-5p induces the migration and invasion of gastric cancer cells
To further confirm that miR-335-5p acts as a tumor suppressor, its effects on the invasion of MKN-28 and SGC-7901 cells were evaluated by Transwell invasion and wound-healing assays. In the wound-healing assay, migration was slower in the miR-335-5p-transfected cells than in un-transfected cells, and the difference in migration rate between the two groups increased over time (Fig. 3a). In the Transwell invasion assay, the transfection of MKN-28 and SGC-7901 cells with miR-335-5p mimics significantly impaired invasion compared to that in the miR-335-5p-ctrl group. In contrast, the knockdown of miR-335-5p enhanced GC cell invasion; when transfected with the miR-335-5p inhibitor, the MKN-28 and SGC-7901 cell invasion rates increased significantly (Fig. 3b). These results support the hypothesis that miR-335-5p contributes to the suppression of invasion and metastasis. To investigate the mechanisms underlying the roles of miR-335-5p in apoptosis and cell cycle progression, we measured the expression levels of apoptosis- and cell cycle-related proteins in GC cells. The transfection of MKN-28/SGC-7901 cells with miR-335-5p mimics downregulated CDK6, CDK4, Cyclin D1, and BCL-2 and upregulated the expression of BAX. The overexpression of miR-335-5p also reduced the expression levels of vimentin and β-catenin and significantly increased E-cadherin levels in MKN-28 and SGC-7901 cells, whereas the silencing of miR-335-5p significantly increased the relative expression levels of vimentin and β-catenin and decreased E-cadherin expression, the inverse of the effects of miR-335-5p overexpression (Fig. 3c). These results suggest that miR-335-5p is involved in the progression, migration, and invasion of GC.
Bioinformatics analysis of MAPK10 in gastric cancer
The TCGA database was used to elucidate the effect of MAPK10 in GC tissues by a bioinformatics approach. The expression of MAPK10 was higher in GC tissues than in healthy counterparts, and its expression was associated with the histologic and pathologic stages of GC (Fig. 5a-c). The expression of MAPK10 was associated with the DFI (disease-free interval, P = 0.033), PFI (progression-free interval, P = 0.013), DSS (disease-specific survival, P = 0.0068), and OS (overall survival, P = 0.017) in GC (Fig. 5d-g), suggesting that MAPK10 plays a key role as an oncogene in GC.
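For orientation, expression-stratified survival comparisons of this kind (splitting TCGA stomach-adenocarcinoma cases by MAPK10 expression and testing differences in OS, DSS, PFI, or DFI) are typically computed with Kaplan-Meier estimates and a log-rank test. The Python sketch below is our own illustration of that workflow, not the authors' pipeline; the input file, column names, and median-split cutoff are assumptions.

```python
# Hedged sketch: Kaplan-Meier curves and a log-rank test for overall survival,
# stratified by MAPK10 expression (median split). File and column names are
# hypothetical placeholders for a TCGA-STAD clinical + expression export.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("tcga_stad_clinical_expression.csv")   # hypothetical export
df["group"] = (df["MAPK10_expr"] > df["MAPK10_expr"].median()).map(
    {True: "MAPK10 high", False: "MAPK10 low"})

km = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    km.fit(sub["os_time_days"], event_observed=sub["os_event"], label=name)
    print(name, "median survival:", km.median_survival_time_)

high = df[df["group"] == "MAPK10 high"]
low = df[df["group"] == "MAPK10 low"]
res = logrank_test(high["os_time_days"], low["os_time_days"],
                   event_observed_A=high["os_event"],
                   event_observed_B=low["os_event"])
print(f"log-rank p = {res.p_value:.4f}")   # compare with the reported P = 0.017
```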
Knockdown of MAPK10 reduces GC progression
We knocked down MAPK10 expression by RNA interference [small interfering RNA (siRNA)] to confirm that MAPK10 mediates the antitumor effects of miR-335-5p. MAPK10 expression levels were higher in GC cells than in GES-1 cells (Fig. 6a) and were highest in MKN-28 and SGC-7901 cells. Western blotting indicated that MAPK10 was also markedly upregulated at the protein level in GC tissues compared with their non-tumor counterparts (Fig. 6b). MAPK10 was successfully knocked down by siRNA in both cell lines, as verified at the mRNA level (Fig. 6c). Similar to miR-335-5p-overexpressing cells, the downregulation of MAPK10 significantly inhibited proliferation and slightly inhibited colony formation in MKN-28 and SGC-7901 cells (Fig. 6d, e). Moreover, the influence of MAPK10 siRNA on the cell cycle was similar to the effect of miR-335-5p upregulation (Fig. 6f). Consistent with the effect of miR-335-5p on GC cell apoptosis, MAPK10 knockdown induced apoptosis in MKN-28/SGC-7901 cells (Fig. 6g), suggesting that MAPK10 is involved in the progression of GC.
Knockdown of MAPK10 reduces the migration and invasion of GC cells
We silenced MAPK10 expression by RNA interference (RNAi) to evaluate whether it contributes to the effects of miR-335-5p on invasion and metastasis, using MKN-28 and SGC-7901 cells. Based on a wound-healing assay, the group with low MAPK10 expression showed reduced rates of migration (Fig. 7a). Transwell assays demonstrated that MAPK10 silencing inhibited the invasion and migration ability of GC cells (Fig. 7b). Based on a western blot analysis, silencing MAPK10 significantly increased the relative expression level of E-cadherin and decreased vimentin and β-catenin expression. These results were consistent with the effects of miR-335-5p overexpression in MKN-28 and SGC-7901 cells (Fig. 7c), suggesting that MAPK10 functions as an oncogene in GC. We conclude that miR-335-5p suppresses GC progression by targeting MAPK10 (Fig. 7d).
Discussion
The occurrence of GC is a complex process involving multiple factors, including genetic and epigenetic events. Many miRNAs have been demonstrated to contribute to GC tumorigenesis and development, and these serve as valid diagnostic and therapeutic targets [28]. In addition, some specific microRNAs able to regulate the expression of genes in GC cells at the post-transcriptional level have been identified as diagnostic biomarkers for GC [29][30][31]. Therefore, studies of potential miRNAs associated with the development of GC may provide opportunities for improvements in diagnosis, treatment, and prognosis. In this study, the roles of miR-335-5p in GC were evaluated. Recent studies have shown that miR-335-5p acts as a tumor suppressor or promoter in several cancer types [32][33][34][35]. For example, miR-335-5p is downregulated in thyroid cancer cells and inhibits proliferation [36]. miR-335-5p functions via lactate dehydrogenase B to exert tumor-inhibitory effects in colorectal cancer [37]. In the present study, based on expression analyses of miR-335-5p in tissues, we found that miR-335-5p acts as a tumor suppressor in GC, and the overexpression of miR-335-5p inhibits proliferation, invasion, and metastasis and induces apoptosis in vitro. Additionally, miR-335-5p induced cell cycle arrest at the G1 phase and downregulated the G0/G1 phase cell cycle-associated proteins Cyclin D1, CDK6, and CDK4. The abnormal activation of CDKs and their modulators has been reported in many tumors [38,39]. Furthermore, miRNAs participate in the regulation of the cell cycle [40,41]. For example, the miR-15a/16 family regulates G0/G1 cell cycle progression by targeting cyclin D1 (CCND1) [42]. In addition, miR-16 regulates various mRNA targets, including CDK6, CDC27, and G1-related cyclins, which jointly control cell cycle progression [43]. These studies strongly support our observation that miR-335-5p plays a key role in cell proliferation in GC. Migration and invasion are closely related to the occurrence and development of tumors, and many migration- and invasion-related proteins, including E-cadherin, vimentin, and β-catenin, are involved. For instance, p0071 interacts with E-cadherin in the cytoplasm and promotes invasion and metastasis in non-small cell lung cancer [44]. Furthermore, ubiquitin specific peptidase 20 (USP20) regulates the deubiquitination of β-catenin to control the invasion and migration of cancer cells [45]. Our results showed that the overexpression of miR-335-5p decreases the expression of vimentin and β-catenin and increases E-cadherin in MKN-28 and SGC-7901 cells. Our data suggest that miR-335-5p is involved in the migration and invasion of GC.
In addition, our results showed that the upregulation of miR-335-5p inhibits the expression of MAPK10 in MKN-28/SGC-7901 cancer cells at both the RNA and protein levels. Using bioinformatic analyses and a dual-luciferase reporter assay, we demonstrated that miR-335-5p directly targets MAPK10 by binding to its 3′-UTR and inhibiting translation. To further clarify the tumor-suppressive effect of miR-335-5p via MAPK10, siRNA was used. MAPK10 silencing inhibited cell proliferation and migration and induced cell apoptosis, similar to the observed effects of miR-335-5p overexpression in GC cells in vitro. Accordingly, the expression levels of related proteins, including CDK6, CDK4, Cyclin D1, BCL-2, BAX, E-cadherin, vimentin, and β-catenin, were also altered by siMAPK10. MAPK10 is a member of the Jun N-terminal kinase subgroup of the mitogen-activated protein kinases, which are implicated in important physiological processes [46]. MAPK10 regulates the occurrence and development of several types of cancer. The downregulation of MAPK10 contributes to the suppression of ovarian cancer [47]. miR-27a-3p promotes the growth and invasion of NPC cells by targeting MAPK10 [48]. These results robustly suggest that the downregulation of MAPK10 induced by miR-335-5p could inhibit GC progression. Our findings highlight that miR-335-5p or MAPK10 may be considered as potential targets for GC therapy in the near future.
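For readers unfamiliar with how such a bioinformatic target prediction works in practice, the sketch below illustrates the underlying seed-match logic in Python. It is our own illustration, not the authors' pipeline: the 3′-UTR string is a made-up placeholder, and the mature miR-335-5p sequence shown is an assumption that should be checked against miRBase before any real use.

```python
# Hedged sketch of miRNA seed matching: search a 3'-UTR (RNA, 5'->3') for the
# reverse complement of the miRNA seed (positions 2-8 of the mature sequence).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna, utr):
    seed = mirna[1:8]                                      # nucleotides 2-8
    site = "".join(COMPLEMENT[n] for n in reversed(seed))  # reverse complement
    hits, start = [], utr.find(site)
    while start != -1:
        hits.append(start)
        start = utr.find(site, start + 1)
    return site, hits

mir_335_5p = "UCAAGAGCAAUAACGAAAAAUGU"     # assumed mature sequence; verify
fake_utr = "AAGCUCUUGCCAUUGCUCUUGAAACGGA"  # hypothetical 3'-UTR fragment
print(seed_sites(mir_335_5p, fake_utr))    # seed-match site and its positions
```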
Conclusion
We obtained the following new findings.
Modification of triangular member’s function (TMF) based on firefly-chen fuzzy time series (FTS) method
Applications of fuzzy forecasting methods usually use triangular fuzzy numbers to calculate the value of the membership function. In many previous forecasting studies, fuzzy time series (FTS) methods were implemented using symmetric triangles in the fuzzification step. In the present paper, we study how the slope of the triangle affects forecasting accuracy. We modify the triangular membership function (TMF) into a non-symmetric triangle with a different orientation (left- and right-leaning triangles) to find out the effect on FTS forecasting results. This modification is applied to the Firefly-Chen FTS method to forecast the Indonesian Composite Stock Price Index (IHSG). Its performance is verified through simulation in Matlab.
Introduction
Fuzzy time series (FTS) is a method based on fuzzy linguistics that can be regarded as statistical time series analysis applied to fuzzy sets [1]. In general, FTS methods consist of three steps: dividing the universe of discourse into several clusters, fuzzification, and defuzzification or forecasting. In our previous research, we modified the universe of discourse to use intervals of dynamic length by means of the Firefly Algorithm (FA) [2]. The error values showed that the modification outperforms the original Chen method in forecasting the IHSG. The combination of the Chen method and FA is named the Firefly-Chen method. Further, in this study we modify the fuzzification process to use different shapes of membership functions.
Fuzzification is the process of converting a crisp input value into a fuzzy linguistic value, usually presented in the form of fuzzy sets with their respective membership functions. In principle, a membership function can have different shapes [3], but in practice, triangular membership functions (TMFs) are most frequently used [2][3][4][5][6][7][8][9][10][11]. The TMF is one of the major components and is more natural than a precise value for simulating uncertainty [4]. To quantify the vagueness, the expert expresses the lower point of the preference intensity, the upper point, and the most probable point, respectively [5].
In this paper, we modify the symmetric TMF into different triangle shapes, i.e., left- and right-leaning triangles. We use IHSG data for demonstration. The article is structured as follows: Section 1 gives the introduction, Section 2 discusses the basic concept of FTS, Section 3 covers the modification of the TMF, Section 4 presents the TMF modification results, and Section 5 gives the conclusion.
FTS
The definition of a fuzzy set is given in Definition 1.
Definition 1 [2]
Let U be the universe of discourse. A fuzzy set A of U with membership function μ_A is expressed in the following form:

A = μ_A(u_1)/u_1 + μ_A(u_2)/u_2 + ... + μ_A(u_n)/u_n, (1)

where the membership function μ_A assigns to every u in U a real number in the interval [0, 1]. The value μ_A(u) shows the degree of membership of u in the fuzzy set A, and the notation μ_A(u)/u represents the element u together with its membership degree μ_A(u).

Definition 2 [8] Let A_1, A_2, ..., A_k be the fuzzy sets of a linguistic variable, where μ_{A_i} is the membership function of cluster u_i and i = 1, 2, ..., k. A fuzzy set A_i can also be expressed by the following equation:

A_i = μ_{A_i}(u_1)/u_1 + μ_{A_i}(u_2)/u_2 + ... + μ_{A_i}(u_n)/u_n, (2)

where u_j indicates an interval with membership degree μ_{A_i}(u_j) and is a subset of the universe of discourse U.
For example:
Definition 3 [8]
A fuzzy membership function is called a TMF if it has 3 parameters a ≤ b ≤ c, denoted by Triangular(a, b, c), with the following rules: the membership degree is 0 for x ≤ a, rises linearly from 0 to 1 on [a, b], falls linearly from 1 to 0 on [b, c], and is 0 for x ≥ c. Figure 1 shows an example of a symmetric TMF. Figure 1.
Representation of symmetric TMF
The TMF can also be expressed by the following formula:

Triangular(x; a, b, c) = max(min((x − a)/(b − a), (c − x)/(c − b)), 0). (3)

If the symmetric TMF is used, parameter b is the midpoint of the interval [a, c], so the definition of the symmetric TMF needs only 2 parameters, a and c, and can be expressed by the triangular formula Triangular(a, (a + c)/2, c). This can certainly be advantageous in the computing process.
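To make Eq. (3) concrete, the short Python sketch below (our own translation; the paper's simulations were done in Matlab) evaluates a general Triangular(a, b, c) membership degree and the two-parameter symmetric special case.

```python
# Minimal sketch of Eq. (3): triangular membership function Triangular(a, b, c);
# the symmetric case fixes the peak b at the midpoint of [a, c].
def triangular(x, a, b, c):
    """Membership degree of x for the triangle with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def symmetric_triangular(x, a, c):
    """Symmetric TMF: only two parameters are needed."""
    return triangular(x, a, (a + c) / 2.0, c)

print(symmetric_triangular(4.0, 2.0, 8.0))   # 0.666..., peak at the midpoint 5
print(triangular(4.0, 2.0, 3.0, 8.0))        # 0.8, left-leaning triangle (peak 3)
```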
Modification of the Triangular Fuzzy Membership Function (TMF)
In this paper, we modify the values of the linguistic variables in the fuzzification step. An example of a fuzzy set is given as follows.
The fuzzy sets can be written in a fuzzification matrix whose entries are the membership degree values, as in matrix (6). From matrix (6), we conclude that the fuzzification used symmetric TMFs. Then, we determine the maximum membership degree for every data point; if the maximum membership degree of a data point lies in cluster u_i, then its fuzzification is A_i. Figure 1 shows that the maximum degree of membership occurs at the midpoint of an interval. This affects the forecasting calculations: the forecasting value becomes the middle value of the interval. In this section we compare simulations of the membership functions, i.e., membership functions based on symmetric triangles and on non-symmetric triangles. A non-symmetric triangle is a triangle whose height line is not an axis of symmetry, as represented in Figure 2 and Figure 3.
Triangular Membership Function (TMF) Modifications
The aim of the TMF modification is to find out the difference between using membership functions based on symmetric triangles and on non-symmetric triangles. Forecasting algorithms with TMFs usually use symmetric triangular shapes because this makes the forecasting calculation easier. Forecasting based on FTS with a TMF predicts a data point from the maximum membership degree; in other words, the forecasting value is the value whose degree of membership equals one. If the triangular membership function used is a symmetric triangle, the forecasting value is the midpoint of the cluster. However, this is not always appropriate, depending on the characteristics of the data. The use of symmetric TMFs generally only simplifies the forecasting calculation, because it uses only the midpoint value of the cluster specified in the fuzzification process. Therefore, this paper modifies the fuzzification process so that the TMF used is a non-symmetric triangle, for the case of forecasting the composite stock price index (IHSG).
Modification of the membership functions is done in step 3, namely the fuzzification step. First, the values of the linguistic variables with the left-leaning non-symmetric triangular membership function can be written in a fuzzification matrix whose entries are the membership degree values.
Second, the values of the linguistic variables with the right-leaning non-symmetric triangular membership function can likewise be written in a fuzzification matrix whose entries are the membership degree values.
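A minimal sketch of this fuzzification step is given below (again a Python illustration of ours rather than the paper's Matlab code, reusing the triangular() helper from the previous sketch). The skew factor that shifts the peak toward the left or right boundary is an illustrative choice, since the paper notes that no criterion for the degree of asymmetry is fixed.

```python
# Hedged sketch: membership-degree (fuzzification) matrix for symmetric,
# left-leaning, or right-leaning TMFs over fixed-width clusters.
import numpy as np

def fuzzification_matrix(data, boundaries, skew="symmetric", factor=0.25):
    """Rows = observations, columns = clusters u_1..u_k; entries = membership degrees."""
    degrees = np.zeros((len(data), len(boundaries) - 1))
    for j in range(len(boundaries) - 1):
        a, c = boundaries[j], boundaries[j + 1]
        if skew == "left":        # peak shifted toward the lower bound
            b = a + factor * (c - a)
        elif skew == "right":     # peak shifted toward the upper bound
            b = c - factor * (c - a)
        else:                     # symmetric: peak at the midpoint
            b = (a + c) / 2.0
        for i, x in enumerate(data):
            degrees[i, j] = triangular(x, a, b, c)   # helper from the sketch above
    return degrees

# Each observation is then fuzzified to the cluster with the largest degree:
# fuzzified = fuzzification_matrix(data, boundaries, skew="left").argmax(axis=1)
```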
The simulations use the Chen method and the Firefly-Chen method [2,5]. The graphs in Figure 5 and Figure 6 compare the RMSE values obtained with the different membership-function modifications. The results show that the modification does little to reduce the error value: the error value for the symmetric triangular membership function is still better than that for the non-symmetric triangular membership function, for both left- and right-leaning triangles. This is due to the lack of criteria for determining how skewed the non-symmetric triangle should be. The membership-function modification in this paper is limited to observing its effect on numerical forecasting results. The use of symmetric TMFs in most FTS-based forecasting methods is reasonable because it is advantageous in the computing process, and it turns out that forecasting with non-symmetric TMFs is not better than forecasting with symmetric TMFs.
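The error measure behind this comparison is the usual root-mean-square error; for completeness, a one-line Python helper:

```python
# RMSE between actual values and FTS forecasts (used to compare the TMF variants).
import numpy as np

def rmse(actual, forecast):
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))
```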
Conclusion
Based on Figure 4 and Figure 5, the modification to a non-symmetric TMF does little to reduce the RMSE. The error value for the symmetric TMF is still better than that for the non-symmetric triangular membership function, for both left- and right-leaning triangles. This is due to the lack of criteria for determining how skewed the non-symmetric triangle should be. The membership-function modification in this paper is limited to observing its effect on numerical forecasting results. The use of TMFs in most FTS-based forecasting algorithms is reasonable because it is advantageous in the computing process. In the case of forecasting the IHSG, symmetric triangular membership functions perform better than non-symmetric triangular membership functions. Because the maximum degree of membership occurs at the midpoint of an interval, we can reduce the parameters to just two. This is very advantageous in the computing process, so the time needed for calculation is not too long.
Oleanolic Acid, a Compound Present in Grapes and Olives, Protects against Genotoxicity in Human Mammary Epithelial Cells
Oleanolic acid (OA) and maslinic acid (MA) are constituents of the skins of different fruits, including olives and white or red grapes. Although both compounds are known to have beneficial properties against different types of cancers, thus far, there are no studies about their chemopreventive effects in human breast cancer. Thus, we sought to elucidate whether both compounds possess chemopreventive activity. Two human breast cancer cell lines and one noncancerous human mammary epithelial cell line were used to determine the effects of OA and MA. The results showed that OA inhibited the proliferation and increased the oxidative stress of highly invasive cells. Additionally, OA decreased oxidative stress and oxidative damage to the DNA in human mammary epithelial cells. These results suggest that OA could act as a chemopreventive agent in human breast cancer and could inhibit the proliferation of highly invasive breast cancer cells.
Introduction
The triterpenoids are natural compounds that are widely distributed in the skin and seeds of different edible fruits, such as olives and grapes from Vitis vinifera. Oleanolic acid (OA) and maslinic acid (MA) are two of the main triterpenes found in these fruits; in addition, they are also present in both virgin olive oils and wine, especially red wine [1][2][3][4][5][6][7].
The traditional Mediterranean diet, characterized by the consumption of foods such as grapes, wine, must, raisins, olives and virgin olive oil, has been associated with a low incidence of breast cancer [8]. Current knowledge highlights the role of triterpenes in the prevention of certain cancers, including breast cancer [9][10][11][12][13]. Previously, it has been described that oleanolic acid and maslinic acid possess cardioprotective effects [14,15], anti-inflammatory effects [16,17], and antitumor properties in human prostate cancer cells [18], hepatocellular carcinoma cells [19], human pancreatic cells [20], and colon cancer cells, among others [21,22]. However, there are no studies about the potential chemopreventive effects of oleanolic and maslinic acids in human breast cells. We hypothesized that the chemopreventive effects of Mediterranean diet consumption against breast cancer may be due, at least in part, to the biological actions exerted by these compounds. To demonstrate this hypothesis, we have used the following well-characterized human breast cell lines: MCF10A human mammary epithelial cells, highly invasive MDA-MB-231 human breast cancer cells, and finally, minimally invasive MCF7 human breast cancer cells.
Cytotoxicity Effects
The results are expressed as the percentage of cell survival with respect to the untreated control, which was set as 100%. For MCF10A cells, both OA and MA at 10 and 100 µM promoted cell death (cell survival was 83% and 13% for OA and 9% and 13% for MA, respectively) ( Figure 1a). For MCF7 cells, MA induced a strong cytotoxic effect at 100 µM (8% survival) (Figure 1b). MDA-MB-231 cells treated with the two acids showed a marked cytotoxic effect for OA or MA at 100 µM (68% and 17% survival, respectively). MA concentrations between 0.01 µM and 10 µM appeared to promote cell survival ( Figure 1c). In the human mammary epithelial cells, both compounds were cytotoxic at the highest concentrations. However, for MCF7, which is a multi-drug-resistant cancer cell line, only MA was capable of promoting cell death. OA did not significantly promote cytotoxicity in MCF7 cells, according to our previous study [10]. Our results agree with Shan et al., who showed that OA did not strongly inhibit the growth of MCF7 cells [23]. In MDA-MB-231 cells, other studies of different plant extracts (which contain OA) have described antiproliferative effects [24,25]. Ponou et al. showed that isolated OA did not promote cytotoxicity at a maximum concentration of 200 µM [26], while we observed cytotoxicity at 100 µM.
Effects on Proliferation
The results are expressed as the percentage of cell survival with respect to the untreated control, which was set as 100%. MA at 10 and 100 µM had antiproliferative effects for MCF10A cells at 24, 48, and 72 h (10%, 38% and 11% cell survival for 10 µM and 9%, 10% and 11% for 100 µM, respectively) ( Figure 2).
Effects on the Cell Cycle
The results are expressed as the percentage of cells in the different phases of the cell cycle. For MCF10A cells, OA treatment resulted in an increase in cells in the G0/G1 phase at 10 µM with respect to the control and a decrease in the G2/M phase. MA treatment resulted in a dramatic increase in the sub-G0/G1 phase at 10 µM (65%) with respect to the control (0.4%), and consequently resulted in a decrease in the other phases. At 10 µM, both compounds affected the cell cycle of MCF10A cells (Table 1). We have discussed the importance of the different concentrations of treatments used in experiments [27], and our results show that high concentrations of these compounds could promote cell death in human mammary epithelial cells. For MDA-MB-231 cells, OA treatment resulted in a decrease in the number of cells in G0/G1 and an increase in G2/M at 0.1 µM with respect to the control. At 10 µM, OA increased the number of cells in the G2/M phase with respect to the control. MA treatment did not result in a significant difference in MDA-MB-231 (Table 1) or MCF7 cells (data not shown). These results suggest that MA affects the cell cycle of MCF10A cells, increasing the Sub-G0/G1 ratio. This increase could be due to pro-apoptotic effects. To assess this apoptotic effect, our group studied the apoptosis-promoting effects of these compounds in the three breast cell lines.
Analysis of Apoptosis
The percentages of living, apoptotic, and necrotic cells are represented with respect to the total, which was set as 100% (Table 2).
For MCF10A cells, 10 µM OA resulted in a high percentage of apoptotic cells with respect to the control. MA at 10 µM increased the rate of apoptotic cells. For MDA-MB-231 cells, statistically significant differences were not found, but 1 µM OA resulted in a slight increase in the apoptotic cell rate ( Table 2). MA treatment in MCF7 cells did not result in a significant difference with respect to the control (data not shown). MA and OA at the highest concentrations caused apoptosis in MCF10A cells, while concentrations lower than 10 µM did not appear to promote apoptosis. However, in both breast cancer cell lines, neither OA nor MA produced a dramatic increase in apoptosis; only 1 µM OA slightly increased the apoptotic ratio in MDA-MB-231 cells. This slight increase could correspond with the proliferation observed, where OA decreased the proliferation in a dose-dependent manner over time.
Effects on the Intracellular ROS Level
In MCF10A cells treated with OA and MA, the levels of ROS decreased from 1 µM to 100 µM OA and from 10 to 100 µM for MA ( Figure 5a). MA treatment in MCF7 cells increased the ROS levels in a dose-dependent manner ( Figure 5c). In MDA-MB-231, OA treatment resulted in an increase in the ROS levels at 0.001 µM and 100 µM, while MA did not alter the ROS levels at any concentration tested ( Figure 5d). To induce intracellular oxidative stress, H2O2 was added before the fluorescence measurement. Figure 5b shows a decrease in the ROS levels in MCF10A cells for OA; however, this difference was statistically significant only at 1 µM. MA treatment increased the ROS levels in MCF10A cells at almost all concentrations (Figure 5b). For MCF7 cells, MA appeared to increase the ROS levels at lower concentrations ( Figure 5d). The ROS levels in MDA-MB-231 cells increased with OA treatment from 0.01 µM to 100 µM, while MA treatment did not result in any statistically significant differences with respect to the control, except for 100 µM, which decreased the ROS level ( Figure 5f).
OA had a protective effect on MCF10A cells. It diminished ROS levels in the basal state, and when oxidative stress was induced, OA continued protecting the cells, reducing their sensitivity to oxidative stress. ROS can act as a trigger for carcinogenesis through permanent damage to DNA, causing mutations in the tumour suppressor gene p53, which is frequently mutated (in up to 50% of cases) [28]. In this way, OA could act as an antioxidant, protecting cells in an oxidative-stress microenvironment that could otherwise promote carcinogenesis [27,28]. To assess this theory, our group studied the effects of OA and MA on H2O2-induced DNA damage.
Although MA did not have this effect in MCF10A cells, it resulted in a strong increase in oxidative stress in MCF7 cells in a dose-dependent manner, which continued when oxidative stress was induced. In MDA-MB-231 cells, both compounds exerted this pro-oxidative effect. In the basal state, lower concentrations of OA appeared to increase the oxidative stress in MDA-MB-231 cells. In addition, when intracellular oxidative stress was induced by adding H2O2, OA dramatically increased the oxidative stress, approximately 30% more than the control. MA had the same effect but to a lesser extent. Therefore, OA had a protective role against oxidative stress in human mammary epithelial cells, while it had a pro-oxidant role in the highly invasive breast cancer cells. This pro-oxidant role in breast cancer cells could be important, considering that high enough levels of ROS may inhibit carcinogenesis by enhancing p53 expression and inducing apoptosis in tumour cells [28]. To corroborate these effects in ROS levels, antioxidant catalase (CAT) enzyme activity was evaluated.
Determination of CAT Activity
The activity of CAT measured in MCF10A cells after OA and MA treatment showed no statistically significant differences with respect to the control (Figure 6a). In MCF7 cells, 0.1 µM OA increased CAT production significantly but appeared to decrease its production at higher concentrations. While 0.1 µM OA was not assayed by Allouche et al. [10], 1 µM and 10 µM OA decreased the ROS levels, which could be related to the levels of CAT found in MCF7 cells in the present study (Figure 6b). MA did not alter the activity of CAT with respect to the control in MCF7 cells (Figure 6b).
Although there were no statistically significant differences in treated MDA-MB-231 cells, there was a slight decrease in the activity of CAT at 1 and 10 µM OA (Figure 6c).
Effects on H2O2-Induced DNA Damage
To study the protective effect of these triterpenes against induced DNA injury, H2O2 was used to promote single-strand DNA breaks. The results are expressed as the percentage of Olive_TM for each cell line. Olive_TM incorporates a measure of both the smallest detectable size of migrating DNA (reflected in the comet tail length) and the number of relaxed/broken pieces (represented by the intensity of DNA in the tail), so this measure gives us information about the injury induced to DNA and the capacity for self-repair [29].
Our results showed that for MCF10A cells, 1 µM OA protected against H2O2 injury to DNA, producing less DNA breaks than the control (Figure 7a). MA had the same effect at 10 µM, but it must be noted that at this concentration, MA was pro-apoptotic for the human mammary epithelial cells. Therefore, this result was likely due to cells that remained alive in the cytotoxicity and proliferation assay and were not affected by MA. Although 1 µM MA did not have pro-apoptotic effects, it appeared to promote damage to DNA, which supports the results obtained for the detection of ROS levels after H2O2 addition. MA could act like a pro-oxidant in these cells, increasing the ROS levels in the first moments of treatment and resulting in damage to the DNA after addition of H2O2, consistent with our results. However, the effect of MA at this concentration did not remain the same over time, as the proliferation results have shown.
In the MCF7 cells, MA did not show significant differences with respect to the control (data not shown). In the MDA-MB-231 cells, OA promoted an increase in Olive_TM at 10 µM, and although it was not statistically significant, this increase also occurred at 0.1 µM. MA induced more injury to the DNA, increasing the Olive_TM at all concentrations tested in MDA-MB-231 cells (Figure 7b). Consequently, for highly invasive breast cancer cells, with only 4 h of treatment, both compounds promoted a high extent of damage to the DNA. Therefore, the cytotoxic effects of oleanolic acid observed in MDA-MB-231 cells appeared to be connected with the increase observed in the ROS levels that in turn promoted damage to the DNA.
Discussion
OA and MA are two triterpenes present in several plants, including grapevines and olive trees, and consequently in their fruits. It is well known that the Mediterranean diet plays a role in preventing breast cancer [8], and these foods are typically found in this diet. Several studies have suggested antitumoral properties of OA and MA, but until now, there have been no scientific data about their chemopreventive activity in human breast cancer or in human mammary epithelial cells. The present study focuses on the effects of these two natural compounds on human breast cancer cells and on human mammary epithelial cells, which have not been studied before.
The results obtained show that MA inhibited the growth of minimally invasive MCF7 human breast cancer cells only at the highest concentration tested. Thus, MA treatment does not alter the cell cycle or induce apoptosis at the concentrations used previously by our group [10] or in the present study. However, Janicke, et al. [30] indicated that MCF7 cells have lost caspase-3 due to a 47-base-pair deletion within exon 3 of the CASP-3 gene, and this deletion is required for DNA fragmentation and phosphatidylserine expression on the cell surface. Accordingly, in the present study, MCF7 cells did not experience apoptosis, as indicated by flow cytometry, nor did they have changes in DNA fragmentation by the comet assay, but a decrease in cell proliferation was observed with OA treatment [10] and MA treatment. Thus, MA, which in turn promoted a dramatic increase in the ROS levels inside MCF7 cells, may promote their death but through a pathway distinct from apoptosis. In fact, an increase in ROS levels could contribute to cell death in cancer cells [31].
Indeed, MA can promote apoptosis in HT29 colon cancer cells through ROS generation [32,33]. Therefore, the connection between the ROS levels and cell death appears to be established. Our results demonstrate that OA and MA promote DNA damage in MDA-MB-231 cells. Further in-depth studies focusing on the molecular mechanism of the effects of OA and MA in breast cancer cells must be performed to confirm this. It must be noted that for several assays, the MCF7 cells were treated with high-purity MA (purity >98%) because the present study shows differences in MCF7 cells not reported in the previous study [10], where the purity of MA was lower (>80%).
OA has been recently described to be pro-apoptotic in oestrogen receptor-negative/progesterone receptor-negative/HER2-negative (ER−/PR−/Her2−) breast cancer cells [34], and patients with an ER− genotype are considered to have more aggressive, highly invasive breast cancer than patients with an ER+ genotype [35]. Chu, et al. described the action of BN107 (an extract with several terpenoidal saponins similar to OA), which promotes apoptosis in MCF10A (ER−) and in MDA-MB-231 (ER−) cells [34]. They concluded that BN107 and OA are strong inhibitors of the Akt/mammalian target of rapamycin (mTOR) pathway, which could avoid chemoresistance development in ER− breast cancer cells. Our results show that although MCF10A cells are ER−, OA was not able to cause cell death at concentrations lower than 10 µM; at these concentrations, OA had antiproliferative effects in the highly invasive MDA-MB-231 human breast cancer cells. Based on these results, the effects of OA appear to not be related to ER expression; depending on the concentration used, OA is able to promote cell death in ER− cells (MDA-MB-231 and MCF10A) and ER+ cells (MCF7) [10].
OA has been shown to decrease the expression of Bcl-2 and increase the expression of Bax in B16F10 melanoma cells [36]. Perhaps OA exerts its effect in MDA-MB-231 cells by this pathway, which is related to oxidative mechanisms in the cell [27]. It is known that an increase in the ROS levels promotes apoptosis in breast cancer cells [37]. OA could increase the ROS levels in highly invasive cancer cells and could support the action of chemotherapies that increase oxidative stress inside cancer cells, which are usually used in more aggressive, highly invasive breast cancers.
Concentrations of OA and MA higher than 10 µM inhibited human mammary epithelial cell proliferation and promoted apoptosis over time, but lower concentrations even improved the proliferation of these human mammary epithelial cells. Hence, the concentration of the treatment used is an important consideration. Very few articles describe the bioavailability of these triterpenes in humans after intake [38][39][40], but several studies confirm that OA can be absorbed by rats after intake (0.7% total oral bioavailability), as can MA, which was detected in rat plasma even 60 min after oral administration [27]. However, the concentration within the cells after the metabolism of these compounds has not yet been described. Nevertheless, the concentration at which they are present in virgin olive oil is less than in other types of olive oils [5].
Our results showed that OA acts like an antioxidant in human mammary epithelial cells (MCF10A) in vitro. OA may decrease the oxidative stress of cells by enzymatic CAT activation. Furthermore, when oxidative stress was induced, the cells treated with OA had decreased levels of oxidative stress compared to the untreated cells. The irreversible injuries to DNA and proteins caused by oxidative stress are usually prevented by antioxidants [28]; along these lines, OA acts as an antioxidant for MCF10A cells, protecting the cells against oxidative DNA damage. Moreover, OA inhibited proliferation in MDA-MB-231 cells (highly invasive human breast cancer cells).
For these reasons, we might consider that OA could have potential chemopreventive activity in human breast cancer: at low concentrations, OA is a natural compound that acts as an antioxidant and prevents oxidative DNA damage in human mammary epithelial cells. Additionally, it has antiproliferative effects in highly invasive cancer cells. This compound could be used as an adjuvant in breast cancer oxidative therapies, where it could maximize the effects of chemotherapy while protecting human mammary epithelial cells against the oxidative effects of cancer therapy. However, the pharmacological effects of OA have to be studied before this can be assured.
Nevertheless, extreme caution should be applied in the extrapolation of the present in vitro results to potential clinical effects in humans. Further studies are needed to confirm both the chemopreventive capacity of OA and the differential mechanism of action on human mammary epithelial vs breast cancer cells suggested by the present study.
Cytotoxicity Assay
Cell survival, measured as the cellular growth of the treated cells vs. the untreated controls, was carried out in MCF10A, MCF7 and MDA-MB-231 cells using an XTT-based assay according to Scudiero,et al. [41], with some modifications. Briefly, cells were seeded into 96-well culture plates in a total volume of 100 µL per well (5 × 10 3 cells/well for MDA-MB-231 and MCF7 cells and 2.5 × 10 3 cells/well for MCF10A cells). After an overnight incubation to allow for cell attachment, 100 µL of fresh medium was added containing increasing concentrations from 0.001 µM to 100 µM OA or MA. After 24 h, the cells were incubated with XTT in Phenol-Red-free RPMI medium for 3 h at 37 °C with 5% CO2, and the absorbance was measured at a 450 nm wavelength (620 nm as a reference) in a plate reader (TECAN GENios Plus). The cell viability was calculated using the formula:
% viable cells = [A(treated cells)/A(control)] × 100
(1) where A is the difference in absorbance between optical density units (A = OD450 − OD620). All measurements were performed in quadruplicate, and each experiment was repeated at least three times. As a vehicle control, the cells were treated with EtOH at the highest concentration of OA and MA used.
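As a sketch of how Eq. (1) is applied to the raw plate-reader output (our own Python illustration; the replicate handling and example readings are hypothetical):

```python
# Hedged sketch of Eq. (1): percent viability from XTT absorbances, with the
# background-corrected signal A = OD450 - OD620 averaged over replicates.
import numpy as np

def percent_viable(od450_treated, od620_treated, od450_control, od620_control):
    a_treated = np.mean(np.asarray(od450_treated) - np.asarray(od620_treated))
    a_control = np.mean(np.asarray(od450_control) - np.asarray(od620_control))
    return 100.0 * a_treated / a_control

# Example with hypothetical quadruplicate readings:
print(percent_viable([0.82, 0.79, 0.85, 0.80], [0.08, 0.07, 0.09, 0.08],
                     [1.10, 1.05, 1.12, 1.08], [0.08, 0.09, 0.08, 0.07]))
```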
Cell Proliferation Assay
Cell proliferation, measured as the cellular growth of the treated cells vs. the untreated controls, was carried out using a CellTiter-Blue Cell Viability Assay. Briefly, the cells were seeded into 96-well culture plates at 2 × 10 3 cells/well for MCF7 cells, 1 × 10 3 cells/well for MDA-MB-231 cells and 0.5 × 10 3 cells/well for MCF10A cells. After an overnight incubation to allow for cell attachment, the medium was removed and replaced with fresh medium containing OA or MA from 0.01 µM to 100 µM. The plates were incubated for 24, 48 or 72 h, followed by a 72 h, 48 h and 24 h proliferation period (incubation with fresh medium without OA or MA), respectively. At these three time points, the plates were incubated with CellTiter-Blue Cell Viability for 3 h at 37 °C with 5% CO2 and the relative fluorescence units were measured in a plate reader (TECAN GENios Plus) (Ex. λ485/Em. λ595, Gain 60). The cell viability was calculated using the formula:
% viable cells = [A(treated cells)/A(control)] × 100
(2) where A are the relative fluorescence units for each sample. All measurements were performed in triplicate, and each experiment was repeated at least three times. As a vehicle control, the cells were treated with EtOH at the highest concentration of OA or MA used.
Cell Cycle Assay
The cells were seeded in 12-well culture plates (1 × 10 5 cells/well for MDA-MB-231 and MCF7 cells and 0.5 × 10 5 cells/well for MCF10A cells) and incubated overnight to allow for cell attachment. Next, the cells were treated with 0.1 µM, 1 µM, or 10 µM OA or MA for 24 h; the cells were harvested with TrypLE Express and washed with 1× PBS (Ca 2+ /Mg 2+ free) (300× g, 10 min at 4 °C). Finally, the cells were fixed with cold 70% ethanol and stored at −20 °C for at least 24 h. Subsequent to propidium iodide labelling (PI/RNase Staining Buffer), the cells were analysed by flow cytometry (EPICS XL-MCL, Beckman Coulter, Spain). The FlowJo program (v5.7.2, FlowJo LLC data analysis software, Ashland, OR, USA) was used to calculate the percentage of cells in the G0/G1, S and G2/M phases. Each experiment was independently repeated at least three times.
Apoptosis Assay
The percentage of apoptotic cells was determined using a double staining assay with FITC-conjugated Annexin V and propidium iodide (PI). Briefly; the cells were seeded in 12-well culture plates (1 × 10 5 cells/well for MDA-MB-231 and MCF7 cells and 0.5 × 10 5 cells/well for MCF10A cells) and incubated overnight to allow for cell attachment. After cell exposure to OA or MA at 0.1 µM, 1 µM, or 10 µM for 24 h; the cells were harvested with TrypLE Express; washed twice in cold 1× PBS (Ca 2+ /Mg 2+ free) (300× g; 10 min at 4 °C) and resuspended in 100 µL of Annexin Binding Buffer. The cells were stained with 5 µL Annexin V-FITC and 2 µL PI solution; gently vortexed and incubated for 15 min at room temperature in the dark before flow cytometric analysis. As a positive control; the cells were treated with 1 µM camptothecin (CPT). Each experiment was independently repeated at least three times.
Detection of Intracellular Reactive Oxygen Species
Intracellular reactive oxygen species (ROS) levels were measured after OA or MA treatment using the cell-permeable fluorescent probe 2,7-dichlorofluorescin diacetate (DCFH-DA), as previously described by Warleta, et al. [11], with some modifications. Briefly, the cells were seeded on 96-well plates (5 × 10 3 cells/well for MDA-MB-231 and MCF7 cells and 2.5 × 10 3 cells/well for MCF10A cells), and after incubation with the treatments, DCFH-DA (100 µM) was added for 30 min at 37 °C with 5% CO2. The fluorescence was read in a plate reader for 30 min (Ex. λ485/Em. λ535, Gain 60). The intracellular ROS level percentage was calculated as follows: where F(t = 0) is the fluorescence at t = 0 min and F(t = 30) the fluorescence at t = 30 min. It has been described that the addition of H2O2 increases oxidative stress in cultured cells and directly damages DNA [42]. To evaluate the protective capacity of OA and MA against induced oxidative stress, 500 µM H2O2 was added 30 min before the fluorescence quantification. All tests were run in triplicate for each experimental condition, and each experiment was repeated at least three times. All experiments were conducted using iron-free media (MEM and HuMEC).
Determination of Catalase (CAT) Activity
The cells were seeded into a 6-well plate at 0.5 × 10 6 cells/mL for MCF10A, MDA-MB-231 and MCF7 cells. The cells were incubated overnight for cell attachment. Then, the medium was changed to fresh medium containing OA or MA. The assay was performed according to the manufacturer's protocol for the determination of catalase enzymatic activity.
Alkaline Single-Cell Gel Electrophoresis (Comet Assay)
The cells were seeded into 12-well plates (1 × 10 5 cells/well for MDA-MB-231 cells and MCF7 cells and 0.5 × 10 5 cells/well for MCF10A cells) and incubated overnight for cell attachment. Then, the cells were treated with OA and MA. Finally, the cells were scraped and washed twice (300× g, 10 min, 4 °C) with cold 1× PBS (Ca 2+ /Mg 2+ free) and suspended in 1 mL of cold 1× PBS. To evaluate the ability of OA and MA to protect against oxidative DNA damage, the cells were exposed for 10 min to 50 µM H2O2 at 4 °C. After that, the comet assay was performed according to Warleta, et al. [11].
Slide Scoring and Analysis
DNA strand breaks were examined using a fluorescence microscope (Zeiss Axiovert 200) equipped with a Luca EMCCD camera (Andor Technology, Belfast, UK) under 494 nm excitation and 521 nm emission wavelengths using the Komet 5.5 software package (Kinetic Imaging Ltd., Liverpool, UK). Twenty-five cell images were randomly characterized per sample using 20× magnification. The relative fluorescence between the head and tail through the olive tail moment (Olive_TM) was used to determine DNA damage. Olive_TM is defined as the product of the Tail Moment Length and the fraction of DNA in the tail: Olive_TM = [(tail (mean) − head (mean)) × tail (% DNA)]/100 (4)
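A minimal Python illustration of Eq. (4), using hypothetical per-comet values as they would come from the image-analysis software:

```python
# Sketch of Eq. (4): Olive tail moment per comet, then the mean over the 25
# scored cells of a sample. The values below are hypothetical.
def olive_tail_moment(tail_mean, head_mean, tail_percent_dna):
    return (tail_mean - head_mean) * tail_percent_dna / 100.0

comets = [  # (tail centre of gravity, head centre of gravity, % DNA in tail)
    (35.2, 20.1, 12.5),
    (41.0, 22.3, 18.0),
    (30.8, 19.7, 9.4),
]
scores = [olive_tail_moment(t, h, p) for t, h, p in comets]
print(sum(scores) / len(scores))   # mean Olive_TM for this sample
```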
Statistical Analysis
The results are displayed as the mean of at least three independent experiments (± SEM), and the results are expressed as a percentage relative to the untreated control, which was set as 100%. Statistical analysis was performed using a one-way analysis of variance (ANOVA) followed by Fisher's LSD test. Values of p < 0.05 were considered significant. STATGRAPHICS Plus 5.1 statistical software (Statpoint Technologies, Inc., Warrenton, VA, USA) was used for the statistical analysis.
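For completeness, the analysis pipeline (one-way ANOVA followed by Fisher's LSD pairwise comparisons) can be sketched in Python as follows; the original work used STATGRAPHICS Plus, and the group values below are hypothetical percent-of-control data.

```python
# Hedged sketch: one-way ANOVA, then Fisher's LSD pairwise t-tests that reuse
# the pooled within-group mean square (df = N - k). Data are hypothetical.
import itertools
import numpy as np
from scipy import stats

groups = {"control": [100, 98, 103], "OA 1 uM": [86, 90, 84], "OA 10 uM": [65, 70, 62]}

f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

k = len(groups)
n_total = sum(len(v) for v in groups.values())
mse = sum((len(v) - 1) * np.var(v, ddof=1) for v in groups.values()) / (n_total - k)

for (name_a, a), (name_b, b) in itertools.combinations(groups.items(), 2):
    t = (np.mean(a) - np.mean(b)) / np.sqrt(mse * (1 / len(a) + 1 / len(b)))
    p_pair = 2 * stats.t.sf(abs(t), df=n_total - k)
    print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p_pair:.4f}")
```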
A Renormalization-Group Study of Interacting Bose-Einstein Condensates: II. Anomalous Dimension $\eta$ for $d\lesssim 4$ at Finite Temperatures
We study the anomalous dimension $\eta$ of homogeneous interacting single-component Bose-Einstein condensates at finite temperatures for $d\lesssim 4$ dimensions. This $\eta$ is defined in terms of the one-particle density matrix $\rho({\bf r})\equiv \langle \hat\psi^\dagger({\bf r}_1)\hat\psi({\bf r}_1+{\bf r})\rangle$ through its asymptotic behavior $\rho({\bf r})\rightarrow N_{\bf 0}/V+C r^{-d+2-\eta}$ for $r\rightarrow \infty$, where $N_{\bf 0}/V$ is the condensate density and $C$ is a constant. It is shown that the anomalous dimension is given by $\eta=0.181\epsilon^2$ to the leading order in $\epsilon\equiv d-4$. The change of the prefactor $0.181$ from the value $0.02$ at the transition point of the ${\rm O}(2)$ symmetric $\phi^4$ model is attributed to the emergence of three-point vertices and the anomalous Green's function when $N_{\bf 0}$ acquires a finite value.
I. INTRODUCTION
In a previous paper, 1 which is referred to as I hereafter, exact renormalization-group equations have been derived for interacting single-component Bose-Einstein condensates based on the functional renormalization-group formalism [2][3][4][5][6][7][8] in such a way as to satisfy the Hugenholtz-Pines theorem 9 and Goldstone's theorem I. 10,11 Using them, it has been shown that the interaction vertex g_Λ vanishes below d_c = 4 dimensions at finite temperatures as the infrared cutoff Λ is reduced to 0, thereby causing the disappearance of the Bogoliubov mode 12 with a linear dispersion relation at long wavelengths. Specifically, g_Λ approaches zero as g_Λ ∝ Λ^ε with the exponent ε ≡ 4 − d for d ≲ 4 dimensions at finite temperatures. Moreover, it is predicted that this vanishing of g_Λ is accompanied by the development of the anomalous dimension η > 0 in the single-particle density matrix ρ(r) ≡ ⟨ψ†(r_1) ψ(r_1 + r)⟩, which behaves as

ρ(r) → N_0/V + C r^(−d+2−η) for r → ∞, (1)

where N_0 is the number of condensed particles, V is the volume, and C is a constant. The exponent η is predicted to be expressible as η ∝ ε² for d ≲ 4 dimensions, which has the importance of distinguishing interacting Bose-Einstein condensates from ideal ones with η = 0. Since phase rigidity and coherence are expected to emerge due to the interaction, 13,14 which is also responsible for a finite η > 0, we alternatively call η the coherence exponent here.
The purpose of the present paper is to confirm η ∝ ε² and also to derive the prefactor through careful calculations exhausting all the processes contributing to it. It will be shown that η for d ≲ 4 dimensions is given by

η = 0.181 ε², (2)

which is exact up to the order of ε² and valid at any finite temperature with N_0 > 0. The prefactor is distinct from 0.02 of the O(2) symmetric φ^4 model at the transition point; [15][16][17][18][19] the difference is caused by the emergence of three-point vertices and the anomalous Green's function upon Bose-Einstein condensation. The emergence of η > 0 is expected to cause nonanalytic behaviors in various thermodynamic quantities of Bose-Einstein condensates at low temperatures. 20 However, the methods of extracting the exact value of η experimentally are yet to be clarified theoretically. This paper is organized as follows. Section II presents basic formulas for obtaining η. Sections III-V consider the contributions of Fig. 1 (2a)-(2c) to η separately to obtain Eqs. (31), (95), and (122), respectively, which add up to Eq. (2) with Eq. (5). We set ℏ = k_B = 2m = 1 throughout, with m and k_B denoting the mass and Boltzmann constant, respectively.
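As a quick numerical illustration (ours, and a naive extrapolation of the leading-order result beyond its strict domain of validity), evaluating Eq. (2) at d = 3 gives:

```latex
% Naive extrapolation of the leading-order result, Eq. (2), to d = 3.
\[
  \epsilon = 4 - d = 1, \qquad \eta = 0.181\,\epsilon^{2} = 0.181 ,
\]
% roughly an order of magnitude larger than \eta \simeq 0.02 of the
% O(2)-symmetric \phi^{4} model at its transition point.
```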
II. KEY QUANTITIES FOR CALCULATING η
According to Eq. (81) of I, 1 the exponent η in Eq. (1) can be calculated by the formula of Eq. (3), in which the functions δW^(2α)_∞(k) represent the momentum-dependent processes of Fig. 1 (2a)-(2c). These diagrams and Eq. (3) indicate that a finite η originates mostly from the momentum dependences of the three- and four-point vertices, which in turn are caused by the loops in Fig. 1 (3c)-(4f). A complete analysis of the loops will turn out laborious even for d ≲ 4 owing to (i) the emergence of the three-point vertices and (ii) the internal degrees of freedom in the three- and four-point vertices. Our goal is to derive Eqs. (31), (95), and (122) for the three contributions in Eq. (3), which add up to Eq. (2) as seen by using Eq. (5).

FIG. 1: Diagrammatic expressions of W^(n)_{Λ, j_1···j_n} for n = 2, 3, 4. A line with a dash (dotted line with a dash) denotes Ġ_{Λ, j_1 j_2} (∂_Λ Ψ_Λ).
To start with, the δW^(2α)_∞(k) are given analytically by Eq. (85) of I, which for d ≲ 4 can be approximated by Eq. (4). Here Ψ ≡ lim_{Λ→0} Ψ_Λ is the condensate wave function, β ≡ T^(−1) with T denoting the temperature, and K_d and g* are given by Eq. (5), with S_d ≡ 2π^(d/2)/Γ(d/2) the area of the unit sphere in d dimensions; Θ(x) and δ(x) are the Heaviside step function and Dirac delta function, respectively. The integrals over λ in Eq. (4) have the effect of producing the n-point vertices Γ^(n) (n = 3, 4) from the source functions δW^(n), as seen from Eq. (84) of I.
The key quantities in Eq. ( 4) are δ W(3) x and δ W(4) x for x → ∞, which are obtained from Eq. (76) of I through the rescaling given by Eq. (79d) in I as δ W (3) x ( k1 , k2 ; k3 ) δ W(4) x ( k1 , k2 ; k3 , k4 ) where x and k denote and z Λ,− is a renormalization factor defined by Eq. (59) of I. Functions W (3) Λ, j 1 j 2 j 3 and W (4) Λ, j 1 j 2 j 3 j 4 on the right-hand sides of Eqs.(6a) and (6b) are expressible diagrammatically as Fig. 1 (3a)-(3d) and (4d)-(4f), respectively, where we have omitted: (i) vertices with more than five legs as irrelevant; and (ii) j = 1, 2 degrees of freedom corresponding to the annihilation and creation operators for simplicity.See Eqs.(47d) and (47e) of I for their analytic expressions.Correspondingly, we can divide each of δ W (3) x and δ W(4) x into the three contributions.Moreover, it follows from Eq. (77) of I that they both vanish when all the momenta are set equal to 0. Hence, we can express them as This prescription is useful for considering each contribution separately in Eq. ( 4), because its unphysical divergences, which cancel out eventually, are absent from the beginning.We will adopt it throughout.The expected behavior η ∝ ǫ 2 is one order of magnitude smaller in ǫ than the exponent of g Λ ∝ Λ ǫ .This fact enables us to calculate Eq. ( 6) by using the leading-order expression of g Λ obtained as Eq.(67) of I, i.e., The vertices in Fig. 1 (3c)-(4f) are given in terms of g Λ by The other vertices such as Γ (3) 111 , Γ (4) 1112 , and Γ (4) 1111 are omitted as irrelevant.See Eq. (53) of I on this point.
For calculating η at finite temperatures, the loops in Fig. 1 (3c)-(4f) are expressible in terms of the 2 × 2 matrix Green's functions ĜΛ (k) and ĜΛ (k) with zero Matsubara frequency.It follows from Eq. (30) of I that ĜΛ (k) can be written as ĜΛ Moreover, we can express the elements (G Λ , F Λ ) for Λ → 0 and k Λ as Eqs.( 50) and (62) of I, i.e., Hence, G Λ (k) ≈ F Λ (k) holds within the leading order for k → 0, and the difference emerges in the next-to-leading order.
We focus on the other two contributions below.
We thereby obtain the 2a-3d contribution to the coherence exponent as
C. Sum of various 2a contributions
The net 2a contribution is obtained by adding Eqs. (21) and (30), which yields Eq. (31). It is worth noting that this finite result could not have been obtained without exhausting all the processes, as has been done in Fig. 3 for Fig. 1 (3d), where a complete cancellation of the leading-order terms exists, as mentioned above Eq. (25); this cancellation removes the divergence in each of them. We will encounter this kind of cancellation two times below, in one of which it even extends up to the next-to-leading-order terms.
IV. CALCULATION OF η (2b)
We proceed to calculate the 2b contribution in Eq. (3), given by Eqs. (4b) and (6b). There are three kinds of topologically distinct diagrams for δW^(4)_x, i.e., those in the third row of Fig. 1. Among them, the 4d contribution has already been studied to yield Eq. (93) of I. Hence, we here focus on the other two diagrams.
Let us substitute Eq. (34) into Eq.(4b) and make a change of variables q → − q for the χ 1,−2 ABC contribution in Eq. ( 34) to combine it with that of χ 1,2 ABC .We then find that terms of O (δG) 0 cancel out, and only the contribution of Eq. ( 35) survives in the next-to-leading order.Subsequently, we substitute Eq. ( 9) and approximate d ≈ 4 in the integrand as justified for We consider each term in the curly brackets separately.First, we focus on the first term and substitute Eq. ( 27).Transforming the resulting expression in a way similar to Eqs. ( 18) and (29), we obtain which is identical with Eq. (29) except for the numerical factor.Hence, its contribution to Eq. ( 3) is immediately found to be Second, we focus on the second term in the curly brackets of Eq. (36).Let us substitute Eq. ( 27) into it, subsequently exchange the order of integrations between q and q1 , express the q integral in the four-dimensional spherical coordinates where q1 lies along the first axis, and transform the resulting expression in a way similar to Eq. ( 18).We thereby obtain δ W(2b4e1) ∞ ( k) in terms of the functions in Eqs.(19) and (20) as where θ 1 (θ q ) is the angle between k and q1 ( q and q1 ).Let us differentiate Eq. (39) twice with respect to k, set k = 0 subsequently, substitute the resulting expression into Eq.( 3), and perform the integrations.The procedure yields Third, we focus on the third term in the curly brackets of Eq. ( 36), which is given explicitly by with q′ ≡ q + k.The calculation of this term requires a new and lengthy treatment.However, we will eventually arrive at a simple analytic expression of Eq. ( 58) below for its contribution to η.
To start with, we note that the integral over q1 for q = 1 depends only on two variables, i.e., the magnitude k and angle θ q between ( q, k).This fact enables us to write the q1 integral in the coordinate system where q lies along the first axis and k lies in the 12 plane.The key vectors are expressible in the four dimensional spherical coordinates with and where θ q ′ q is the angle between ( q′ , q), and q′ is given in terms of ( k, θ q ) as q′ Using Eq. ( 42) and the corresponding Jacobian sin 2 θ 1 sin ϕ 1 for the q1 integral, we can transform Eq. (41) into Here f 0 and ξ λ are given by Eqs. ( 19) and ( 20), respectively, s 1 denotes s 1 ≡ cos ϕ 1 , and θ q ′ 1 is the angle between ( q′ , q1 ) that satisfies cos θ q ′ 1 = cos θ q ′ q cos θ 1 + s 1 sin θ q ′ q sin θ 1 , as seen from Eq. (42).We can draw Fig. 5 that divides the (θ q , θ 1 ) plane into two regions according to the range of integration over s 1 : region A with s 1 ∈ [−1, 1] and region C with where s c2 is defined as the solution of the equation (cos θ q ′ 1 − cos ξ λ q′ ) s 1 =s c2 = 0 given explicitly by The boundary of C is determined partly by s c2 = ±1, which can be solved as θ 1 = ξ λ q′ ± θ q ′ q .They yield the two curves in Fig. 5, which are expressed alternatively in terms of the function κ (±) λ = κ (±) λ (θ q , k) defined by for convenience.The quantities θ c1 = θ c1 (λ, k) and θ c3 = θ c3 (λ, k) in Fig. 5 are solutions to the equations κ (+) λ (θ c1 , k) = 0 and κ (−) λ (θ c3 , k) = 0, which can be solved analytically as The expression of θ c1 , for example, has been obtained by: (i) expressing κ (+) λ (θ c1 , k) = 0 as q′ cos(ξ λ + θ q ′ q ) = q′ cos ξ λ q′ ; (ii) writing q′ cos ξ λ q′ = q′2 cos ξ λ , q′ cos θ q ′ q = 1 + k cos θ q , q′ sin θ q ′ q = k sin θ q , and k cos ξ λ = cos ξ λ k based on Eqs. ( 20) and (42); and (iii) noting θ c1 ∈ [0, π 2 ].On the basis of these considerations, we can perform the integration over s 1 in Eq. (44) elementarily.To express the 5: Distinct regions of the double integral over (θ q , θ 1 ); we set (λ, k) = (0.9, 0.2) to see the basic features clearly.The ranges of integration over s 1 for regions A and C are s 1 ∈ [−1, 1] and s 1 ∈ [s c2 , 1], respectively.Region C disappears as k → 0. result concisely, it is convenient to introduce three local functions for considering 2b-4e contributions by where the second expression of Eq. (49a) has been obtained by substituting Eq. ( 46), performing the integration over s 1 , and using Eqs.( 46), ( 20), ( 42) and (43) successively.Now, we can write Eq. ( 44) as This expression needs a further improvement before differentiating it with respect to k.Specifically, we express regions A and C in Fig. 5 as A = (A+C+D)−(C+D) and C = (C+D)−D, subsequently combine the contributions of (C + D), and use Eq.(49b).We can thereby transform Eq. (50) into where J (±) k (λ, θ q ) is defined by The contribution of Eq. (51) to Eq. ( 3) is obtained by differentiating Eq. ( 51) twice with respect to k and setting k = 0 subsequently.Terms with derivatives of θ c j ( j = 1, 3) all vanish due to J (+) k (λ, θ c1 ) = J (−) k (λ, θ c3 ) = 0 for Eq. ( 52), which result from κ (+) λ (θ c1 , k) = κ (−) λ (θ c3 , k) = 0. Also using Eqs.(20) and (48), we obtain where k (λ, θ q )/∂ k2 k=0 , and f0 (θ) is defined in terms of Eq. ( 19) more generally by Since the integrand turns out to vanish in the limit, we have removed (λ → 0) from Eq. ( 53).The coefficient J (2) 0 in Eq. ( 53) can be calculated straightforwardly from Eq. 
Meanwhile, $J^{(\pm 2)}_0$ is obtained in Appendix A 1. Substituting them into Eq. (53), we find that the contribution of $J^{(2)}_0$ vanishes upon the integration over $\theta_q$. Moreover, the two integrals of $J^{(\pm 2)}_0$ in Eq. (53) can be combined by using a symmetry of the integrand. The double integral can be evaluated both numerically and analytically, and we obtain the result quoted in Eq. (58).

Fourth, we focus on the fourth term in the curly brackets of Eq. (36). The calculation of this term also requires a new and lengthy treatment, but we will eventually arrive at the simple analytic result of Eq. (70) below. We start by expressing the contribution in the coordinate system of Eq. (42) as Eq. (59), where $f_0$ and $\xi_\lambda$ are given by Eqs. (19) and (20), respectively, $s_1$ denotes $s_1 \equiv \cos\phi_1$, $\theta_{q'1}$ is given by Eq. (45), and $\theta_{k1}$ is the angle between $(\mathbf{k}, \mathbf{q}_1)$, which satisfies $\cos\theta_{k1} = \cos\theta_q\cos\theta_1 + s_1\sin\theta_q\sin\theta_1$, as seen from Eq. (42).

[FIG. 6: Distinct regions of the double integral over $(\theta_q, \theta_1)$ for $(\lambda, k) = (0.9, 0.25)$.]

We can draw Fig. 6, which divides the $(\theta_q, \theta_1)$ plane into three regions, A, B, and C, according to the range of integration over $s_1$ in Eq. (59); region C has $s_1 \in [s_{c2}, 1]$, where $s_{c2}$ is given by Eq. (46), and $s_{c1}$ is defined by $(\cos\theta_{k1} - \cos\xi_{\lambda k})\big|_{s_1 = s_{c1}} = 0$, i.e., Eq. (61). The boundary of B is determined partly by $s_{c1} = \pm 1$, which is solved as Eq. (62). The boundary of C is determined partly by $s_{c2} = \pm 1$ in terms of Eq. (46), which yielded $\theta_1 = \xi_{\lambda q'} \pm \theta_{q'q}$; these curves in Fig. 6 are expressed conveniently in terms of $\kappa^{(\pm)}_\lambda$ defined by Eq. (47), as in Fig. 5. There is another nontrivial boundary, i.e., the B-C boundary determined by $s_{c1} = s_{c2}$, which can be transformed by using Eq. (20) and the last equality of Eq. (42) into $\cos\theta_1 = q'\cos\xi_{\lambda q'} - k\cos\xi_{\lambda k} = -\tfrac{\lambda}{2}(1 + 2k\cos\theta_q)$; this relation defines the B-C boundary curve. On the other hand, $\theta_{c1}$ and $\theta_{c3}$ in Fig. 6 are solutions of the corresponding boundary equations. They can be solved analytically to yield Eq. (48) once again; for example, the equation for $\theta_{c3} \in [\pi/2, \pi]$ is expressible by using Eq. (62) as $(1 + 2k\cos\theta_{c3})\cos\xi_\lambda = \cos(2\pi - \theta_{c3} - \xi_{\lambda k})$, from which we easily obtain $\theta_{c3} = \pi + \xi_{\lambda k} - \xi_\lambda$. Another angle, $\theta_{c2}$ in Fig. 6, has a simple explicit expression.

On the basis of these considerations and using Eq. (45), we can perform the integration over $s_1$ in Eq. (59) elementarily. To express the result concisely, it is convenient to introduce additional local functions, Eq. (64), for the 2b-4e contributions, where $J^{(\pm)}_{\lambda k}$ are defined by Eq. (49a), and the second term originates from the lower bound of the $s_1$ integral, which we have transformed by using Eqs. (42), (43), (61), and (20) successively. Now, Eq. (59) can be written in terms of the functions in Eqs. (49) and (64). Let us next rewrite the integral over region C, subsequently write the integrand of the third term in terms of $J^{(-)}$ based on Eq. (49b), and combine its $J_{\lambda k}$ contribution with that of region A. We can thereby express the remaining contribution in a compact form.
B. The 2b-4f contribution
Next, we focus on the diagram of Fig. 1 (4f). It is shown in Appendix B that its contribution to Eq. (4b) is expressible as Eq. (72), where $\varphi_{n_1 n_2}(\mathbf{k}_1, \mathbf{k}_2)$ is given by Eq. (27) and $\varphi_{n_1 n_2 n_3}(\mathbf{k}_1, \mathbf{k}_2, \mathbf{k}_3)$ is defined similarly.

Let us consider each term in the square brackets of Eq. (72) separately. The first term is the same as the second term in the curly brackets of Eq. (36) except for the numerical factor, whose contribution to $\eta$ has already been studied to yield Eq. (40). The contribution of the second term in the square brackets of Eq. (72) can be calculated similarly. The third and fourth terms in the square brackets of Eq. (72) are also the same as the third and fourth terms in the curly brackets of Eq. (36), respectively, except for the numerical factor, whose contributions to $\eta$ are given by Eqs. (58) and (70). Hence, we can conclude immediately that their contributions to $\eta$ follow correspondingly.

Fifth, we focus on the fifth term in the square brackets of Eq. (72), which can be treated in the same way as the 2b-4e3 contribution described from Eq. (41) through Eq. (58). We need two modifications owing to the changed denominator of the integrand. The first is to use Eq. (76) instead of $f_0(\theta_1)$ in Eq. (51). The second is to replace Eqs. (49a) and (49b) by modified local functions. We thereby obtain $\delta W^{(2\mathrm{b}4\mathrm{f}5)}_\infty$ in place of Eq. (51) as Eq. (78), where $J^{(\pm)}_k(\lambda, \theta_q)$ are now given in terms of Eq. (76). Let us differentiate Eq. (78) twice with respect to $k$, set $k = 0$ subsequently, and substitute the resulting expression into Eq. (3).
C. Sum of various 2b contributions
The net 2b contribution is obtained by adding Eqs. (32), (71), and (94).

V. CALCULATION OF $\eta^{(2\mathrm{c})}$

We here calculate the 2c contribution to Eq. (3) given by Eq. (4c). The first term on the right-hand side of Eq. (4c) has already been studied to yield Eq. (95) of I. Hence, we here focus on the second term on the right-hand side of Eq. (4c), which is expressible diagrammatically as Fig. 1 (3a)-(3d). Among them, we can exclude Fig. 1 (3a) owing to Eq. (14). Hence, we consider the other two contributions.
A. 2c-3c contribution
First, we focus on $\delta W^{(3\mathrm{c})}_\infty$, given diagrammatically by Fig. 1 (3c). Its analytic expression has already been derived as Eq. (17). We regularize it as in Eq. (8a) and substitute the resulting $\delta W^{(3\mathrm{c})}_\infty$ into the second term of Eq. (4c). We thereby obtain the 3c contribution to $\delta W^{(2\mathrm{c})}_\infty$ as Eq. (97), where we have also made a transformation similar to Eq. (18). Let us substitute Eq. (97) into Eq. (3), calculate the second derivative at $k = 0$, and evaluate the integrals. The process is the same as Eqs. (91)-(93) of I except that we have to take care of the additional $k$ dependences of (i) the upper limit $\xi_{\tilde k}$ of the $\theta_q$ integral and (ii) $f_{\tilde k}(\theta_q)$. However, one can show that these extra dependences give a null contribution to $\eta$. Also noting that the upper limit $\xi_{\tilde k}$ approaches $\pi/2$ instead of $\pi$ as $k \to 0$, we obtain the corresponding contribution to $\eta$.
B. 2c-3d contribution
Next, we consider the contribution of $\delta W^{(3\mathrm{d})}_\infty$, given diagrammatically by Fig. 1 (3d). Its analytic expression has already been derived as Eq. (28). We regularize it as in Eq. (8a), substitute the resulting $\delta W^{(3\mathrm{d})}_\infty$ into the second term of Eq. (4c), and approximate $d \approx 4$ in the integrand, as justified for $\epsilon \ll 1$. The procedure yields the 3d contribution to $\delta W^{(2\mathrm{c})}_\infty$ as Eq. (99).

We consider each term in the square brackets of Eq. (99) separately. First, we focus on the first term, which can be transformed in the same way as the third term in the curly brackets of Eq. (36), i.e., from Eq. (41) through Eq. (51). The key difference lies in the additional factor $\Theta(|\mathbf{k} + \mathbf{q}| - 1)$, which introduces $\xi_{\tilde k}$, defined by Eq. (20), as the upper limit of the $\theta_q$ integral. Also noting $\theta_{c1} < \xi_{\tilde k} < \theta_{c3}$, as seen from Eqs. (20) and (48), we obtain an expression in which $J_{\lambda k}$ and $J^{(+)}_k$ are given by Eqs. (49b) and (52), respectively.
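As an illustration of how the step function fixes the upper limit of the $\theta_q$ integral, a short worked derivation follows; it assumes the relevant $\mathbf{q}$ lies on the unit shell ($q = 1$), which is consistent with the limit $\xi_{\tilde k} \to \pi/2$ quoted in the 2c-3c calculation above.

```latex
% With q = 1, the constraint Theta(|k + q| - 1) > 0 requires
|\mathbf{k}+\mathbf{q}|^2 = 1 + 2k\cos\theta_q + k^2 > 1
\;\Longleftrightarrow\;
\cos\theta_q > -\tfrac{k}{2}
\;\Longleftrightarrow\;
\theta_q < \arccos\!\bigl(-\tfrac{k}{2}\bigr) \equiv \xi_{\tilde k},
\qquad
\xi_{\tilde k} \;\xrightarrow{\;k \to 0\;}\; \tfrac{\pi}{2}.
```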
Chili cultivation on tin mined land at Bangka Island: prospects and constraints
Chili (Capsicum sp.) is one of the most valuable crops in parts of Indonesia because of its high price and profitability compared with other crops. However, growing chili requires substantial capital and specific expertise because of its growth requirements. All chili cultivars need well-drained land with good soil nutrient status for optimum growth. Tin mined land, by contrast, is severely degraded, and its tailings are dominated by quartz sand. Poor soil physical properties and poor nutrient status make this land unsuitable for most crops, including chili. Nevertheless, high demand and high prices make chili cultivation on such land potentially profitable. Advanced fertilizer and water management technologies were assessed at Bukit Kijang Village, Bangka Island, and showed very promising results for wider adoption. Amending the soil with a mixture of manure and biochar increased water holding capacity and cation exchange capacity and hence increased yield. Although the attained chili yield was 5.7 t ha−1, about 23.7% of the average agronomic potential yield, the net profit ranged from IDR 51.8 to 92.9 million per year after accounting for the investment in the fertigation installation. The main constraint identified was low water holding capacity due to the sandy texture; this constraint could be addressed with drip irrigation and a fertigation system. Pests and diseases were further constraints that also contributed significantly to reduced chili yield.
Introduction
Chilli (Capsicum sp.) is a common crop cultivated throughout Indonesia and is therefore designated a strategic commodity. Chilli is not only used as a spice that adds flavor to cuisine but also has health benefits. Studies have shown that consuming chili can reduce the risk of stroke and help maintain heart health. In addition, chili contains large amounts of vitamin C, vitamin A, beta-carotene, and minerals needed to keep the body healthy.
Chili is also considered a valuable crop because of its high price and profitability. [1] reported that chili prices increase substantially ahead of special occasions. The chili price in local markets on Bangka Island can reach Rp. 100,000 per kg, so many farmers cultivate chili as a source of family income. Three cultivars are most commonly grown in Indonesia: common red chili (Capsicum annuum), curly red chili (Capsicum annuum), and bird chili or cayenne (Capsicum frutescens). The cultivar grown depends on local preference: people in Java prefer common red chili, whereas people outside Java prefer curly red chili or cayenne pepper.
Chili can grow well across a wide range of altitudes, from 0 to 1400 m asl, depending on the cultivar. Bird chili or cayenne grows better in the lowlands (<500 m asl), curly red chili on medium plateaus, and common red chili on high plateaus. In general, however, chili requires fertile, well-drained soil for optimum growth. Soil fertility and good management practices are the determining factors for success in chili farming, so land constraints must be overcome by applying appropriate technologies.

Tin mined land is severely degraded as a consequence of open-pit tin mining and is therefore essentially unsuitable for most agricultural crops. Most mined land consists of sandy tailings dominated by quartz sand [1]. As a result, tin mined land has poor physical, chemical, and biological soil properties. [16] showed that the sand fraction of tin mined land at Bukit Kijang Village is about 84%, which gives the soil a low water holding capacity. Sand contents above 85% have also been reported by Santi [6,15] at several other sites. Soil dominated by the sand fraction cannot retain moisture, dries easily, and becomes very hot at midday. Rainwater, the natural water source, is lost quickly through infiltration and evaporation. As a result, plants experience drought stress and, in the worst case, die. Reduced land cover can also alter local microclimate conditions, and high local air temperatures place the plants under extreme environmental stress. [2] reported that the maximum land surface temperature of sandy tailings can reach 48.8°C, and [17] reported surface temperatures of sandy tailings of 40-50°C.
The sandy texture also causes low nutrient holding capacity. [16] reported a cation exchange capacity (CEC) of 2.19 cmol kg−1 for the soil at the research site. Similarly low values have been reported for sandy tailings at other sites on Bangka, about 4.35 cmol kg−1 [15] and 2.27 cmol kg−1 [6]. Furthermore, [16] showed that the nutrient status of the sandy soil is very low: total N was only 0.03% and total organic C only 0.21%, while P2O5 and K2O were only 14 mg 100 g−1 and 3 mg 100 g−1, respectively. Macronutrient contents such as N, P, and K in sandy and humic tailings range from low to very low, with total N of 0.03-0.17%, P-Bray of 4.20-10.65 μg g−1, and exchangeable K of 0.00-0.32 cmol kg−1. [14] concluded that the very low nutrient and base cation contents are due to a soil texture dominated by the sand fraction. Similar conditions were also reported by [3] in Peninsular Malaysia and by [17] in Thailand.
Although tin mined land is generally considered unsuitable for plants, it can still be utilized through extra efforts to improve land conditions. The main efforts are to improve the ability of the soil to hold water and nutrients and to supply the nutrients needed by plants. Biochar can improve the physical properties of both acid and non-acid mineral soils, namely by increasing total pore space (TPS), fast drainage pores (FDP), and available water [8]. Various studies have shown that biochar is effective in retaining water [11,12,13], and [8,10] showed that biochar as a soil ameliorant can also increase pH, nutrient availability, and CEC after two planting seasons. [6] showed that biochar, from both husks and Acacia mangium, had a better effect when mixed with manure than manure alone. This paper assesses the prospects and constraints of chili cultivation on tin mined land based on experience and research results from reclamation and rehabilitation work carried out during 2016-2019 at Bukit Kijang Village, Namang, Central Bangka District.
Land availability
Tin mined land is widely distributed in Bangka Belitung Province (Babel), the largest tin producer in Indonesia. On Bangka Island alone, 321,577 ha of the mainland belongs to the tin mining concession of PT. Timah [4]. Most of the mined land is left abandoned even though the company is obliged to carry out land rehabilitation. Local people can request permission to use the land to cultivate food crops or vegetables; however, farmers have to make extra efforts because the land is infertile.
Tin mined land has different characteristics from land that has not been mined, reflecting a decline in land quality (Table 1). The most significant change is that the soil texture has turned to sand, and soil organic matter and nutrient contents have also declined substantially. Sandy soil dries out quickly because it cannot hold water after rain. To avoid drought, farmers must dig a well or use a pond near the farmland as a water source for irrigation.
2.2. Chilli Supply and Demand
The characteristics of supply and demand for chili commodities in the Bangka region are also indicative of the prospects for using tin mined land for chili cultivation. A survey of local markets for chilli commodities in Bangka showed that most of the chillies were imported from outside the area, so the price is high. The national average chili price in 2018 reached 44 thousand rupiah, and at certain moments the price can surge much higher. From April to May 2019, the average chilli price at the national level was about Rp. 55,000/kg (Figure 1), while in the local market on Bangka it reached Rp. 90,000/kg. This situation is a big opportunity for local farmers to grow chili to meet the needs of the local market. Chili cultivation around Bangka is very competitive for growers because competitors are very limited, so farmers can obtain high profits. The market survey also suggested that local supply remains very limited.
Chilli Farming Technologies
Proven technologies supporting good agricultural practices in chilli farming have been released by the Ministry of Agriculture. These cover seeds, land preparation, soil amendment, fertilization, irrigation, pest and disease control, and harvest and post-harvest management. There are 37 high-yielding cayenne varieties available, from which farmers can choose according to local market preference; good seed selection is the initial step that largely determines plant growth and yield. Farming technology on tin mined land focuses on soil amendment, fertilization, and irrigation, which differ from chilli cultivation on ordinary farmland because the problems faced are very specific: the soil cannot retain moisture, cannot hold nutrients, has a very poor nutrient content, and loses applied nutrients to leaching. Application of manure, and of a manure-biochar mixture, on tin mined land increased soil moisture content at given soil tensions (Table 2), indicating that the water holding capacity of the soil increased under both treatments. Applying organic matter in the form of manure and biochar also increases CEC and nutrient availability for plants (Table 3). Similar results were reported by Nurida et al. (2014) and Gerard et al. (2018), who showed that biochar as a soil ameliorant can also increase pH, nutrient availability, and CEC after two planting seasons. [6] likewise showed that biochar, from both husks and Acacia mangium, had a better effect when mixed with manure than manure alone. Organic matter should be applied continuously because it decomposes very quickly under the high temperatures. Fertilizer was applied through a fertigation system using a special AB-mix fertilizer formula. The AB-mix fertilizer, in both the commercial and the AARD formulations, had a significant and consistent effect on the number of harvested chili fruits and on total yield compared with conventional NPK fertilization. [5] also reported that a fertigation system can increase the growth and yield of fresh chili. This is because the AB-mix fertilizer contains a complete set of macro- and micronutrients, whereas conventional NPK supplies only the macronutrients N, P, and K. As mentioned before, tin mined land has low macro- and micronutrient status, so all nutrients needed for optimum growth must be supplied through fertilization. Micronutrients such as Cu, Zn, and B play an important role in generative growth, including the formation of flowers and fruits. By applying chilli cultivation technology suited to local land conditions, the productivity of cayenne pepper on tin mined land can reach 5-8 tons per hectare. This is still relatively low compared with the potential yield of cayenne of around 15 tons per hectare, but economically the yield achieved is still profitable and feasible because it is compensated by the high chili selling price.
Constraints on the Utilization of Tin Mined Land
The constraints faced in using tin mined land for chili farming include the physical condition of the land, labor availability, the cost of fertigation installations, and pest and disease control. These constraints must be addressed for chili cultivation on tin mined land to be feasible and profitable.
Physical Constraints
Tin mined land consists of about 84% sand fraction and only 3% clay. This gives the soil poor physical, chemical, and biological properties. Physically, the soil is low in slow drainage pores (SDP) and, conversely, high in fast drainage pores (FDP), meaning that it has a low water holding capacity (WHC) and loses soil moisture very quickly. Incorporating manure and biochar tended to increase water holding capacity, making irrigation more efficient and decreasing the risk of plant water stress. Among the parameters observed, only water holding capacity differed significantly under the manure and biochar treatments. Chili plants are very susceptible to water stress, so watering must be managed so that the plants do not experience water stress. Drip irrigation is one way to supply water continuously and increase water use efficiency. [5] also stated that a fertigation system can increase the efficiency of water and fertilizer use.
Chemically, tin mined land has a low nutrient holding capacity and low nutrient availability. The low CEC reduces fertilizer use efficiency because nutrients supplied through fertilization are lost very quickly to leaching. Because it consists largely of quartz sand, tin mined land is inherently poor in both macro- and micronutrients, so all nutrients needed by the plants for optimum growth must be supplied. Conventional NPK fertilizer cannot meet the plants' requirements, so chili cultivation on such land requires a special fertilizer formula containing most of the needed nutrients. The AB-mix formula is designed to be applied through a drip irrigation network, a combination referred to as fertigation. A fertigation system increases the efficiency of both water and fertilizer use.
Lack of Labour
Conventional chili cultivation is labor intensive, while local labor availability is very limited. Most people on Bangka prefer to work in tin mining because the wages are much higher than for farm work, and worker wages are also higher than in other regions. It is therefore worth using farming tools that help workers operate faster and more efficiently. Water management is one of the most labor-intensive activities because soil moisture on tin mined soil is lost much faster than on soil in general, so a drip irrigation system should be considered to minimize labor use. In addition, the drip irrigation network can also be used to apply fertilizers, in what is then called a fertigation system. The cost of purchasing equipment and installing a fertigation network is indeed quite high, but the equipment can be used for several years, so when calculated per planting season the cost is still affordable and economical; a rough sketch of this per-season arithmetic is given below.
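To illustrate the per-season arithmetic implied here, the sketch below amortizes a fertigation installation over several seasons and estimates a net margin. The yield and farm-gate price figures are those quoted in this paper; the installation cost, equipment lifetime, and operating cost are hypothetical placeholders, not values reported by the study.

```python
# Rough per-season profitability sketch for chili on tin-mined land.
# Yield and price ranges are taken from the text; all cost figures are
# hypothetical placeholders, not values reported by the study.

def per_season_profit(yield_t_per_ha, price_idr_per_kg,
                      install_cost_idr, install_lifetime_seasons,
                      operating_cost_idr):
    revenue = yield_t_per_ha * 1000 * price_idr_per_kg          # kg/ha * IDR/kg
    amortized_install = install_cost_idr / install_lifetime_seasons
    return revenue - amortized_install - operating_cost_idr

if __name__ == "__main__":
    # Reported figures: ~5.7 t/ha attained yield, local prices ~IDR 55,000-90,000/kg.
    for price in (55_000, 90_000):
        profit = per_season_profit(
            yield_t_per_ha=5.7,
            price_idr_per_kg=price,
            install_cost_idr=50_000_000,       # hypothetical fertigation installation
            install_lifetime_seasons=6,        # hypothetical equipment lifetime
            operating_cost_idr=200_000_000,    # hypothetical seed/fertilizer/labor cost
        )
        print(f"price {price:,} IDR/kg -> net ~IDR {profit:,.0f} per ha per season")
```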
Pests and Diseases
A major constraint in chilli cultivation is the attack of pests and diseases, which can substantially decrease crop yields. The common pests attacking chili plants are fruit flies, thrips, and aphids. Fruit flies are controlled by spraying and by setting traps containing the pheromone methyl eugenol. Aphids and thrips can be controlled by installing plastic mulch and spraying insecticides. The common diseases attacking chili plants on tin mined land are leaf curl virus, anthracnose, and wilt disease. Viral diseases can be controlled by planting disease-free seeds and controlling the vectors, while anthracnose can be controlled by using resistant varieties and spraying fungicides regularly.
Conclusion
1. Chili cultivation on tin mined land has good prospects because plenty of land is available, the market is promising, and suitable farming technologies exist.
2. The main constraints faced are poor soil properties, pest and disease infestation, and a lack of labor.
3. Poor soil properties can be addressed by applying manure mixed with biochar to improve the water holding capacity, cation exchange capacity, and nutrient availability.
4. The application of fertigation technology can increase chili yield and improve the efficiency of watering and fertilizer use.
5. The productivity of chilies planted on tin mined land is still low, but this is compensated by high selling prices, so cultivation remains profitable and feasible.
Does Laughter Predict Onset of Functional Disability and Mortality Among Older Japanese Adults? The JAGES Prospective Cohort Study
Background While laughter is broadly recognized as a good medicine, a potential preventive effect of laughter on disability and death is still being debated. Accordingly, we investigated the association between the frequency of laughter and onset of functional disability and all-cause mortality among the older adults in Japan. Methods The data for a 3-year follow-up cohort including 14,233 individuals (50.3% men) aged ≥65 years who could independently perform the activities of daily living and participated in the Japan Gerontological Evaluation Study were analyzed. The participants were classified into four categories according to their frequency of laughter (almost every day, 1–5 days/week, 1–3 days/month, and never or almost never). We estimated the risks of functional disability and all-cause mortality in each category using a Cox proportional hazards model. Results During follow-up, 605 (4.3%) individuals developed functional disability, identified by new certification for the requirement of Long-Term Care Insurance, and 659 (4.6%) deaths were noted. After adjusting for the potential confounders, the multivariate-adjusted hazard ratio of functional disability increased with a decrease in the frequency of laughter (P for trend = 0.04). The risk of functional disability was 1.42 times higher for individuals who laughed never or almost never than for those who laughed almost every day. No such association was observed with the risk of all-cause mortality (P for trend = 0.39). Conclusions Low frequency of laughter is associated with increased risks of functional disability. Laughter may be an early predictor of functional disability later on in life.
INTRODUCTION
Increasing functional disability, defined as difficulty in performing the activities of daily living, is a significantly important public health concern in rapidly aging societies worldwide. 1 Particularly in Japan, one-fourth of its population of 127 million people is now aged ≥65 years. 2 Furthermore, the number of people certified with functional disability has increased by nearly 1.4 times in the past decade, accounting for 17.3% of the Japanese population aged ≥65 years. 3 Identifying the factors for preventing incident functional disability is a critical goal for super-aged societies, including Japan, because age-related functional disability negatively affects an individual's health status, predicts mortality, 4 and increases the healthcare costs associated with long-term care and hospital services. 5,6 Laughter could potentially be regarded as medicine. Recently, an increasing number of studies have reported the beneficial effects of laughter on several health outcomes among older adults, such as on the cardiovascular functions and diseases and mental health. [7][8][9][10][11] However, studies assessing the association between laughter and functional disability and mortality, while considering the individuals' socioeconomic background, have not been reported. The frequency of laughter can vary according to an individual's socioeconomic status, 12 which is associated with the late-life health trajectories. 13 The socioeconomic status can be considered a common cause of the association between laughter and health outcomes. Therefore, by targeting a large general population of community-dwelling older adults, this prospective cohort study aimed to test the hypothesis that low frequency of laughter is associated with a higher risk of onset of functional disability and all-cause mortality when the socioeconomic status is taken into consideration.
Study sample
This study was based on the cohort data from the Japan Gerontological Evaluation Study (JAGES), 14 which is an ongoing longitudinal study investigating the factors associated with health and well-being in the community-dwelling adults aged ≥65 years who could independently perform the physical and cognitive activities of daily living. Functional independency was defined as not being certified for Japan's national Long-term Care Insurance system. We used the data of the 2013 wave (from October to December). In the 2013 wave, self-reported questionnaires were mailed to 193,694 community-dwelling elderly adults aged ≥65 years in 30 municipalities; of these, 137,736 individuals responded to the survey (response rate = 71.1%). The questionnaire comprised basic questions and five modules that covered different topics, as follows: module A, nursing care and medical care and lifestyles; module B, oral hygiene, optimism, and subjective health; module C, social capital and history of abuse; module D, subjective quality of life, sleep, and cognitive function; and module E, physical activity. Of the respondents, 21,377 individuals in 23 municipalities in 9 (out of 47) prefectures responded to the basic questions and module B, including questions about laughter, in the questionnaire of the JAGES. Of the eligible sample of 21,377 individuals, 20,714 were successfully associated with the administrative records in 2016, corresponding to a follow-up rate of 96.9%. After excluding 6,481 participants with missing information regarding the frequency of laughter (n = 958), annual household income (n = 3,191), medical history (n = 878), and survey questions on other covariates used in the analysis (n = 1,454), we finally analyzed the data of 14,233 participants (men, 7,162; women, 7,071).
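As a quick arithmetic check of the participant flow described above (all counts are those reported in this paragraph):

```python
# Participant-flow arithmetic for the analytic sample, using the counts above.
eligible = 21_377                    # responded to basic questions and module B
linked = 20_714                      # linked to administrative records in 2016
excluded = {"laughter": 958, "income": 3_191,
            "medical history": 878, "other covariates": 1_454}

assert sum(excluded.values()) == 6_481
assert linked - sum(excluded.values()) == 14_233   # final analytic sample
print(f"follow-up rate: {linked / eligible:.1%}")  # -> 96.9%
```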
Outcomes
The outcomes of the present study were the onset of functional disability and all-cause mortality, obtained from municipal and national databases. The onset of functional disability was determined when an individual was newly certified for Long-term Care Insurance level 2-5, 15,16 which is based on a multistep assessment of functional and cognitive impairments by a qualified investigator and on comments from the family physician. 17 Information regarding mortality was obtained from the administrative databases of the national Long-term Care Insurance registers. These definitions were used in previous epidemiological studies. 18,19

Exposure

The daily frequency of laughter was measured based on the response to the following standard single-item question: "How often do you laugh out loud?" The possible answers were as follows: almost every day, 1-5 days per week, 1-3 days per month, or never or almost never. The 1-year test-retest reliability of the question was reported in a previous study, 20 and regional and seasonal differences in the daily frequency of laughter among Japanese men and women were not observed. This item had been used in several previous studies. 8,9,12,21

Covariates

We included a wide range of covariates in the analyses as potential confounders based on prior literature. 8,9,12,18,21 Information on sex, age, hypertension, diabetes mellitus, smoking habit, alcohol intake, family structure, social participation, depressive symptoms, cognitive function, instrumental activities of daily living (IADL), educational attainment, and equivalent income was obtained from a self-administered questionnaire. Smoking habit and alcohol intake were classified into three categories: current, ever, and never. We considered respondents who answered "Yes" to the question "Have you ever been diagnosed with hypertension or diabetes mellitus?" as participants with hypertension or diabetes mellitus, respectively. Family structure was assessed through two questions, one related to marital status and the other to the number of people living together. The marital status question provided five answer categories (married, bereaved, divorced, never married, and other). According to the responses to these questions, family structure was classified into four groups: alone, ≥2 without partner, ≥2 with partner, or ≥2 with no information about marital status. Social participation was defined as the person's involvement in social activities (e.g., volunteer group, sports group or club, leisure activity group, senior citizen club, neighborhood association or residents' association, study or cultural group, nursing care prevention or health building, teaching skills or passing on experiences to others, local events). We defined participants who engaged in one or more of these social activities more than once per week as socially active. To assess depressive symptoms, we used the 15-item Geriatric Depression Scale; the participants were categorized into two groups based on the scores: not depressed (0-4 points) and depressed (≥5 points). 22,23 Cognitive function was assessed through three questions (part of the Kihon Check-list, 24 a basic function checklist in Japanese): first, do your family or your friends point out your memory loss? Second, do you make a call by looking up phone numbers? Third, do you find yourself not knowing today's date? Participants are asked to respond either "negative" (score: 1) or "positive" (score: 0).
We divided the participants into the following two groups based on the scores: Decline (1-3 points) and Normal (0 points). Our assessment of IADL was based on a five-item subscale of the Tokyo Metropolitan Institute of Gerontology Higher Competence Scale. 25 We categorized those who had difficulty with at least one item as 'dependent'; others were categorized as 'independent.' Educational attainment and annual equivalent income served as indicators of socioeconomic status. Educational attainment was evaluated based on the self-reported history of education and was classified into two categories (≤9 years and ≥10 years). The equivalent income was divided into nine categories (≤$14,900, $15,000-19,900, $20,000-24,900, $25,000-29,900, $30,000-34,900, $35,000-39,900, $40,000-44,900, $45,000-49,900, and ≥$50,000).
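As a minimal illustration of the dichotomizations described above, a sketch in Python/pandas might look as follows; the column names are hypothetical, while the cut-points are those stated in the text.

```python
import pandas as pd

def code_covariates(df: pd.DataFrame) -> pd.DataFrame:
    """Derive the dichotomized covariates from hypothetical raw columns."""
    out = df.copy()
    out["depressed"] = (out["gds15_score"] >= 5).astype(int)                   # GDS-15: 0-4 vs >=5
    out["cognitive_decline"] = (out["kihon_cognition_score"] >= 1).astype(int)  # 1-3 vs 0 points
    out["iadl_dependent"] = (out["iadl_items_with_difficulty"] >= 1).astype(int)
    out["low_education"] = (out["education_years"] <= 9).astype(int)            # <=9 vs >=10 years
    return out
```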
Statistical analysis
For the demographic characteristics, summary statistics were constructed using frequencies for categorical variables. Linear trends regarding the frequencies of risk factors according to the frequency of laughter categories were tested using logistic regression analysis. A Cox proportional hazards model was used to estimate the crude and adjusted hazard ratios (HRs) and their 95% confidence intervals (CIs) for the onset of functional disability and all-cause mortality according to the frequency of laughter. In multivariate adjustment, all covariates (sex, age, hypertension, diabetes mellitus, smoking habit, alcohol intake, marital status, social participation, depressive symptoms, educational attainment, and equivalent income) were included. All statistical analyses were performed using the International Business Machines Corporation Statistical Package for the Social Sciences (SPSS) version 25 statistical software (SPSS, Inc.; Chicago, IL, USA), and two-sided P-values <0.05 were considered statistically significant in all cases.
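The analyses above were run in SPSS; purely as an illustration of the modelling step, a comparable Cox regression could be set up in Python with the lifelines package as in the minimal sketch below (the file name and column names are hypothetical).

```python
import pandas as pd
from lifelines import CoxPHFitter

# df: one row per participant, with hypothetical columns:
#   time_years  - follow-up time to event or censoring
#   disability  - 1 if newly certified for long-term care level 2-5, else 0
#   laugh_cat   - frequency of laughter coded 0 (almost every day) .. 3 (never/almost never)
#   plus the adjustment covariates (sex, age, comorbidities, income category, ...)
df = pd.read_csv("jages_analytic_sample.csv")  # hypothetical file

cph = CoxPHFitter()
cph.fit(
    df[["time_years", "disability", "laugh_cat", "sex", "age",
        "hypertension", "diabetes", "depressed", "equiv_income_cat"]],
    duration_col="time_years",
    event_col="disability",
)
cph.print_summary()   # hazard ratios (exp(coef)) and 95% CIs per covariate
```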
Ethical issues
Our study protocol and informed consent procedure were approved by the Ethics Committee on Research of Human Subjects at Nihon Fukushi University (August 6, 2013; approval No. 13-14).

RESULTS

Table 1 shows the baseline characteristics of the study population according to the frequency of laughter. The likelihood of being female, being socially active, and having 10 years or more of education increased gradually with increasing frequency of laughter. The likelihood of having been diagnosed with diabetes mellitus, having cognitive decline, being dependent in IADL, and being depressed decreased gradually with increasing frequency of laughter. The distributions of age, smoking habit, alcohol intake, family structure, and equivalent income categories also differed significantly across the frequency of laughter categories. During follow-up (median, 3.3 years), 605 (4.3%) individuals developed functional disability and 659 (4.6%) deaths were noted. The all-cause mortality and functional disability rates were compared according to the daily frequency of laughter using the Kaplan-Meier method. Functional disability and all-cause mortality were more commonly observed among participants with a low frequency of laughter (log-rank test, P < 0.001, Figure 1A; log-rank test, P < 0.001, Figure 1B, respectively). Table 2 shows the results of the Cox proportional hazards analysis for the association of the frequency of laughter with functional disability and all-cause mortality. In the crude model, significant inverse associations between the frequency of laughter and functional disability (P for trend <0.001) and all-cause mortality (P for trend <0.001) were observed. These inverse associations remained significant after adjusting for sex and age (functional disability, P for trend <0.001; all-cause mortality, P for trend = 0.001). After adjusting for the abovementioned covariates, the multivariate-adjusted HR of functional disability increased with a decrease in the frequency of laughter (P for trend = 0.04). The risk of developing functional disability was 1.42 times higher for individuals who laughed never or almost never than for those who laughed almost every day (95% CI, 1.10-1.85). However, no significant association with all-cause mortality remained in the fully adjusted model (P for trend = 0.39).
DISCUSSION
To the best of our knowledge, this is the first study to comprehensively examine the association between laughter and functional disability and all-cause mortality after carefully controlling for potential confounders such as socioeconomic status. The present prospective cohort study of community-dwelling Japanese older adults revealed an inverse association between the daily frequency of laughter and the onset of functional disability, indicating that participants with a lower frequency of laughter were at higher risk of functional disability. In particular, laughing never or almost never could increase the risk of functional disability by nearly 50%. In this study, approximately one-fifth of the participants laughed less than once per week; hence, public health efforts to disseminate information on the importance of laughter appear warranted to reduce the future incidence of functional disability, which itself predicts mortality among older adults. While published reports on the association between the frequency of laughter and functional disability are not currently available, several previous reports found that the daily frequency of laughter was associated with the prevalence and incidence of cardiovascular diseases, 8,21 which constitute the second leading cause of functional disability in Japan. 26 Because our present results are in line with these previous findings, we provide valuable new evidence that a low frequency of laughter itself contributes to the development of functional disability, independent of the established confounders.
There are several plausible mechanisms underlying the association between laughter and functional disability among the older adults. First, laughter might produce physiological changes in various systems of the body, 27 such as improvement of the immune function 28 and stimulation of circulation. 29 In turn, a low frequency of laughter can trigger functional impairments. Second, a high frequency of laughter may be a marker of positive emotions in daily life, which is associated with lower functional limitations. 30 Moreover, laughter-related positive emotions are able to downregulate the cardiovascular aftereffects of negative emotions, which can serve as a buffer against functional disability. 31 Finally, laughter can play a role in buffering the effects of stress. For example, stimulated and spontaneous laughter is reported to decrease salivary cortisol level, a biomarker of stress. 32,33 Thus, individuals with a higher frequency of laughter may cope more effectively with stress than individuals with a lower frequency of laughter, which may moderate the adverse effects of stress on the individuals' physical health.
Regarding all-cause mortality, our study revealed that the age- and sex-adjusted HR of all-cause mortality increased with a decrease in the daily frequency of laughter, but this inverse association was not significant after adjusting for all covariates. Meanwhile, a recent previous study 21 reported a significant association between the daily frequency of laughter and all-cause mortality. This discrepancy is possibly attributable to differences in the study settings (nine prefectures covering a wide area in Japan vs one prefecture), participants (a general population of community-dwelling older adults vs community-based annual health checkup examinees), and control for the confounding effects of socioeconomic status (adjusted vs unadjusted). The present study attempted to reduce the degree of selection bias and potential confounding as much as possible. However, both studies included a limited number of mortality events during relatively short periods of time, namely 3-5 years. Thus, further long-term follow-up studies are warranted to elucidate the association between the daily frequency of laughter and mortality.
The primary strengths of the present study are its prospective cohort design, large sample size, population-based sampling, and control for potential confounding factors. In contrast, a limitation of the study was that we evaluated the daily frequency of laughter using a single-item self-reported question. The perceived frequency of laughter may be different from the actual frequency; hence, it may be plausible that less healthy individuals are more likely to not report their frequency of laughter, possibly leading to an underestimation of the association between laughter and health outcomes. Additionally, it is unclear whether laughter itself can prevent the onset of functional disability and mortality. Therefore, further studies are required to precisely identify the causal inference using observational data 34 because random assignment of the daily frequency of laughter and long-term follow-up of the randomized participants to collect the data on number of onset events are difficult in the real-world setting.
In conclusion, the present study revealed that community-dwelling older Japanese who do not laugh much in daily life are at higher risk of the onset of functional disability, suggesting that the frequency of laughter can potentially be considered an early indicator of late-life functional disability.
Genome-wide association study meta-analysis of suicide death and suicidal behavior
Suicide is a worldwide health crisis. We aimed to identify genetic risk variants associated with suicide death and suicidal behavior. Meta-analysis for suicide death was performed using 3765 cases from Utah and matching 6572 controls of European ancestry. Meta-analysis for suicidal behavior using data across five cohorts (n = 8315 cases and 256,478 psychiatric or populational controls of European ancestry) was also performed. One locus in neuroligin 1 (NLGN1) passing the genome-wide significance threshold for suicide death was identified (top SNP rs73182688, with p = 5.48 × 10−8 before and p = 4.55 × 10−8 after mtCOJO analysis conditioning on MDD to remove genetic effects on suicide mediated by MDD). Conditioning on suicidal attempts did not significantly change the association strength (p = 6.02 × 10−8), suggesting suicide death specificity. NLGN1 encodes a member of a family of neuronal cell surface proteins. Members of this family act as splice site-specific ligands for beta-neurexins and may be involved in synaptogenesis. The NRXN-NLGN pathway was previously implicated in suicide, autism, and schizophrenia. We additionally identified ROBO2 and ZNF28 associations with suicidal behavior in the meta-analysis across five cohorts in gene-based association analysis using MAGMA. Lastly, we replicated two loci including variants near SOX5 and LOC101928519 associated with suicidal attempts identified in the ISGC and MVP meta-analysis using the independent FinnGen samples. Suicide death and suicidal behavior showed positive genetic correlations with depression, schizophrenia, pain, and suicidal attempt, and negative genetic correlation with educational attainment. These correlations remained significant after conditioning on depression, suggesting pleiotropic effects among these traits. Bidirectional generalized summary-data-based Mendelian randomization analysis suggests that genetic risk for the suicidal attempt and suicide death are both bi-directionally causal for MDD.
INTRODUCTION
Suicide is a worldwide public health crisis, accounting for close to 800,000 deaths per year. In 2019, suicide was the tenth leading cause of death [1], the second leading cause of death among individuals between the ages of 10 and 34, and the fourth leading cause of death among individuals between the ages of 35 and 54 in the United States [2]. The rate of suicidal behavior (SB) has climbed steadily over the past two decades [3][4][5][6]. Suicide outcomes encompass a range of behaviors. Non-suicidal self-injury (NSSI) is defined as the intentional destruction of body tissue without suicidal intent [7]. Suicidal attempt (SA), defined as nonfatal self-injurious behavior with the intent to die, has been estimated to occur over 20 times more frequently than suicide death (SD) and is a major source of disability, reduced quality of life, and social and economic burden. SB includes both SD and SA. There are intricate relationships between trauma exposure, NSSI, suicidal ideation, SA, and SD, with common genetic contributions to trauma exposure and self-injurious thoughts and behaviors [8]. The observation of a higher frequency of SD and SA in monozygotic twins compared to dizygotic twins among suicide twin survivors, but not non-suicide twin survivors, suggests a genetic contribution to SB [9][10][11][12][13][14]. Family studies suggest a significant genetic contribution to suicidal thoughts and behaviors, with heritability ranging from 30 to 55% [15,16], while SAs have heritability estimates of 17-45%, even after controlling for psychiatric disorders [17].
Psychiatric disorders are nevertheless a major risk factor for SB, and shared heritability has been demonstrated via polygenic risk score (PRS) analysis and/or genetic correlation, with the strongest genetic correlation with depression (rg = 0.81) in the UK Biobank (UKB) [18,19]. A prior SA is also a risk factor for SD, and shared heritability is expected. However, based on conceptual and empirical differences (e.g., methods used, peak age, gender differences, frequency; CDC, 2016), nonfatal and fatal attempts are considered qualitatively distinct phenomena [20], suggesting genetic risk factors specific to SD may exist. Approximately two dozen genome-wide association studies (GWAS) using dichotomized traits of SA, SD, SB, and/or suicidal or self-harm ideation, or severity-based quantitative traits, have been reported from both clinical and non-clinical population cohorts (civilians and military personnel) [18,19]. Most studies are of primarily European ancestry, but studies of other ancestries, such as Hispanic/Latino [27,28,41,42], Asian [34,41,42], and African American [30,41,42], have started to emerge. Few genome-wide significant loci have been identified, and to date, replication has occurred for genome-wide significant association signals from chromosome 7 and for variants near LDHB (European) and FAH (African American) [26,30]. The largest meta-analysis to date, between the International Suicide Genetics Consortium (ISGC) and MVP, identified 12 genome-wide significant loci [42]. Significant chip-based SNP heritability has been estimated in several GWAS as well, with a heritability of 3.5% in the UKB (p = 7.12 × 10−4) [24], 4.6% in a Danish cohort (reduced to 1.9% after adjusting for mental disorders) [25], and 6.8% in the ISGC meta-analysis [26]. A genome-wide gene-by-environment interaction study identified PTSD as the main environmental driver and reported a replicated male-associated genome-wide signal near CWC22 [43]. In addition, the contribution of rare variants was evaluated by whole-exome sequencing in the UKB for the ever-attempted suicide and suicidal ideation phenotypes, and no significant finding was reported at the study-wide significance threshold of 2.18 × 10−11 [44]. Lastly, an increased global copy number variation (CNV) rate was reported for SA cases, and a common CNV near ZNF33B, verified by qPCR assay, was reported in a small MDD cohort [45].
Since the recently published genomic analysis of SD data from a large population-ascertained Utah cohort [22], we have genotyped another ~1200 samples from the original cohort and matched these cases with controls genotyped on a matching array platform. We additionally added three cohorts of subjects with a lifetime history of suicide attempts (the FinnGen cohort and two Janssen clinical trial samples). We aim to further interrogate the genetic basis of suicide death and SB and to better understand the relationship between SA and suicide death. We also interrogate SD-specific and SA-specific genetic risk factors by performing conditional analyses on the most strongly correlated psychiatric condition (depression) and, for the SD phenotype only, on SA. Lastly, we further dissect the genetic architecture of SD, SA, and SB by examining the shared genetic components and causal relationships between these traits and psychiatric and non-psychiatric traits.
SUBJECTS AND METHODS

Cohorts and sample ascertainment
A total of five cohorts were included in this study (Supplementary Method S1). Cohorts 1 and 2 consist of SD cases from the University of Utah [22] and matching controls from Janssen Research & Development, LLC and dbGaP. The Utah case samples were genotyped in three waves to date. Wave 1 and 2 samples were included in a previous report [22], except that the cases were matched to different sets of controls (Generation Scotland samples genotyped using OmniExpress and UK10K samples with whole-genome sequencing data). Compared to the previous SD GWAS (3413 cases) [22], a total of 3765 cases were included between cohorts 1 and 2, among which 2832 are common to both analyses, 581 were unique to the previous SD GWAS, and 933 cases are unique to this study.
Cohorts 3 and 4 consist of suicide attempt cases and controls of European ancestry and were drawn from 12 clinical trial samples (NCT00044681, NCT00397033, NCT00412373, NCT00334126, NCT01193153, NCT02497287, NCT02422186, NCT01627782, NCT00253162, NCT00257075, NCT01515423, and NCT01529515) conducted by Janssen Research & Development, LLC. A subset of samples from cohorts 3 and 4 was included in a previous GWAS [26]. All Janssen clinical studies were approved by the appropriate institutional review boards or ethics committees and followed the principles outlined in the Declaration of Helsinki for all human investigations. In addition, informed consent was obtained from the study participants involved.
Cohort 5 consists of SA cases and controls from FinnGen (https://www.finngen.fi/en/about). The SA analysis using FinnGen data release 6 (R6) from the FinnGen Study included 4098 individuals with SA history, defined as the presence of SA International Classification of Diseases codes, and 247,898 individuals without the relevant codes. The diagnosis codes used to define SA are provided in Supplementary Table S1. The detailed descriptions of these cohorts are available in Supplementary Method S1.
Cohorts 1 and 2 were used for SD GWAS, while all five cohorts were used for SB GWAS.
Genotyping and quality control

SD cases were genotyped using the Infinium PsychArray platform (Illumina, Inc., San Diego, CA; Supplementary Method S2). Janssen's SA cases and control samples were genotyped using either the PsychArray, Human1M-Duo, or HumanOmni5Exome array (Illumina, Inc., San Diego, CA). QC was performed initially by a local QC pipeline by genotyping batch, while the combined data were QC'ed again using the RICOPILI [46] pipeline. Additional details on QC, principal component analysis (PCA), case-control matching, and imputation can be found in Supplementary Method S3.
Genome-wide association analysis. For the Utah SD association analysis, a linear mixed model (LMM) algorithm was used to test the association between variants and SD. For SD cohort 1, GWAS was performed using GEMMA [47], a computationally efficient, open-source LMM algorithm that is suitable for smaller sample sizes and models the population stratification remaining after PCA by use of genomic relatedness matrices. For SD cohort 2, GWAS was performed using BOLT-LMM [48], which implements an extremely efficient Bayesian mixed-model analysis and is suitable for large cohorts (requiring N to be at least 5000). For the FinnGen cohort, GWAS was performed using the standard FinnGen pipeline, which implements the LMM algorithm in SAIGE [49] and efficiently controls for case-control imbalance and sample relatedness. For the two Janssen SA cohorts, standard logistic regression in PLINK, as implemented in the RICOPILI pipeline, was used. Meta-analysis was performed using METAL [50] (as implemented in the RICOPILI pipeline) for the two SD cohorts to identify genetic variants associated with SD and across all five cohorts to identify genetic variants associated with SB. The conventional genome-wide significance threshold of 5 × 10−8 was used to declare study-wide significance. A list of variants with unadjusted p values less than 5 × 10−6 is also reported.
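METAL offers more than one weighting scheme, and the exact configuration used inside RICOPILI is not spelled out here; purely to illustrate the principle, a fixed-effect inverse-variance-weighted meta-analysis of a single variant across cohorts can be sketched as follows (the numbers are made up, not study results).

```python
import math

def ivw_meta(betas, ses):
    """Fixed-effect inverse-variance-weighted meta-analysis for one SNP.

    betas, ses: per-cohort effect sizes and standard errors on the same scale.
    Returns (pooled beta, pooled SE, z statistic)."""
    weights = [1.0 / se**2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return beta, se, beta / se

# Toy example with made-up numbers (not study results):
beta, se, z = ivw_meta(betas=[0.21, 0.18], ses=[0.04, 0.06])
print(beta, se, z)
```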
It is well known that substantial genetic liability is shared across psychiatric traits. To identify putative SD-specific genetic associations, multi-trait conditional and joint analysis (mtCOJO) [51] was therefore used to adjust the GWAS summary statistics for the effects of genetically correlated traits (MDD and SA). For MDD, the GWAS summary statistics of Howard et al. [52] without the 23andMe cohort were used; for SA, those of Mullin et al. [26] without the SD cohorts (the Utah wave 1 and 2 data and two other cohorts with predominantly SD cases) were used (Supplementary Method S4). Likewise, to identify putative SB-specific genetic associations, mtCOJO was also used to adjust for the effects of MDD. mtCOJO analyses were performed using Genome-wide Complex Trait Analysis (GCTA) version 1.93.2 beta [53].
Variant annotation and multi-marker analysis of genomic annotation (MAGMA) gene- and gene-set-based analysis. Variant clumping to identify independent genomic loci and variant annotation were performed using FUMA [54]. In addition to the single-marker-based GWAS, gene-based analyses followed by pathway enrichment analysis were computed using MAGMA [55] based on the GWAS meta-analysis summary statistics. SNPs were mapped to 18,927 protein-coding genes, and gene-level genome-wide significance was defined at p = 0.05/18,927 = 2.64 × 10−6. The MAGMA analyses were performed using FUMA [54].
Replication of genome-wide significant loci from ISGC and Million Veteran Program suicidal attempt meta-analysis
Among the five cohorts used in this study, the FinnGen cohort was completely independent of the cohorts included in the ISGC and Million Veteran Program meta-analysis [42]. We used the results from the FinnGen cohort to replicate the 12 genome-wide significant loci reported in the meta-analysis between ISGC and MVP cohorts [42]. SNPs with an association p value less than 0.05/12 ≈ 0.00417 were considered replicated. Other cross-references/replication attempts of published results are also described in Supplementary Method S5.
Polygenic risk score association with suicide death. PRSs derived from 92 summary statistics (for non-unique traits) were tested for association with SD in cohort 1 and cohort 2, respectively, using PRSice-2 [56] with the p-value threshold (PT) fixed at 1. Traits for calculating PRS included psychiatric and somatic comorbidities, personality traits, and lifestyle factors. A full list of the PRSs derived is available in Supplementary Table S2 and Supplementary Method S6. An association p value less than 0.05/92 ≈ 0.00054 was considered significant. The association results for cohort 1 and cohort 2 were compared for consistency, and the results were also compared with published PRS prediction or genetic correlation analysis results.
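PRSice-2 was the tool actually used; conceptually, with the p-value threshold fixed at 1 every variant in the base summary statistics contributes, so a raw score reduces to a weighted allele-dosage sum. A schematic version is shown below (clumping and standardization are omitted, and the inputs are random toy data, not real genotypes).

```python
import numpy as np

def polygenic_score(dosages: np.ndarray, betas: np.ndarray) -> np.ndarray:
    """dosages: (n_individuals, n_snps) effect-allele dosages in [0, 2];
    betas: (n_snps,) per-allele effect sizes from the base GWAS.
    Returns one raw score per individual (PT = 1, i.e. no p-value filtering)."""
    return dosages @ betas

# Toy example with random data:
rng = np.random.default_rng(0)
scores = polygenic_score(rng.integers(0, 3, size=(5, 100)).astype(float),
                         rng.normal(0, 0.01, size=100))
print(scores)
```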
SNP heritability and genetic correlation estimations. The phenotypic variance explained by variants (both genotyped and imputed, mostly SNPs; h2SNP) for each of the phenotype groups was estimated from the association statistics using linkage disequilibrium (LD) Score regression [57]. Genetic correlations between a smaller list of selected traits (psychiatric and non-psychiatric, as described in Supplementary Method S7) and SD and SB, both before and after the mtCOJO adjustments, were also evaluated using LD Score regression. ISGC SA [26] (Supplementary Method S4) was also included as a reference for the calculation of genetic correlations. Details of the summary statistics used for these traits, together with the population prevalence assumptions, are available in Supplementary Table S3. A genetic correlation with a p value less than 0.05/18 ≈ 0.0028 was considered significant, accounting for the number of non-suicide traits in the multiple testing correction.
Generalized summary-data-based Mendelian randomization (GSMR). GSMR [51] is a method to test for a putative causal association between a risk factor and a disease using summary-level data from GWAS. In this study, we tested the relationships between MDD and SD/SA, as well as the relationship between SA and SD.
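At its core, a summary-data Mendelian randomization estimate combines per-instrument Wald ratios (the SNP-outcome effect divided by the SNP-exposure effect). The sketch below shows a simplified inverse-variance-weighted version of that calculation with hypothetical numbers; GSMR proper additionally models LD among instruments and removes pleiotropic outliers with the HEIDI-outlier test.

```python
"""
Simplified sketch of the core estimate behind summary-data Mendelian
randomization: per-instrument Wald ratios b_zy / b_zx combined by
inverse-variance weighting. The toy numbers below are hypothetical.
"""
import numpy as np
from scipy import stats

# Instrument effects on the exposure (e.g., MDD) and on the outcome (e.g., SD).
b_zx = np.array([0.10, 0.08, 0.12, 0.09, 0.11])
se_zx = np.array([0.01, 0.01, 0.02, 0.01, 0.02])
b_zy = np.array([0.04, 0.03, 0.05, 0.02, 0.05])
se_zy = np.array([0.02, 0.02, 0.02, 0.02, 0.02])

# Wald ratio per instrument and a first-order approximation of its variance.
b_xy = b_zy / b_zx
var_xy = (se_zy**2) / (b_zx**2) + (b_zy**2) * (se_zx**2) / (b_zx**4)

# Inverse-variance-weighted combination across instruments.
w = 1.0 / var_xy
b_ivw = np.sum(w * b_xy) / np.sum(w)
se_ivw = np.sqrt(1.0 / np.sum(w))
p_ivw = 2 * stats.norm.sf(abs(b_ivw / se_ivw))

print(f"b_xy (IVW) = {b_ivw:.3f} +/- {se_ivw:.3f}, p = {p_ivw:.2g}")
```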
RESULTS
For the SD cohorts from the University of Utah and the SA cohort from FinnGen, psychiatric conditions were, as expected, more prevalent among the cases (Table 1).
Genome-wide association (SNP-level and gene-level associations)
For the expanded SD GWAS meta-analysis using 3765 genotyped cases and 6572 populational controls (cohorts 1 and 2; Table 1 and Supplementary Method S1), one locus in neuroligin 1 (NLGN1) passing the genome-wide significance threshold for the SD meta-analysis against the general population was identified (top SNP rs73182688, with p = 5.48 × 10−8 before and p = 4.55 × 10−8 after mtCOJO analysis conditioning on MDD summary statistics (Table 2); Manhattan plots in Fig. 1A and Supplementary Fig. S3A, QQ-plots in Supplementary Fig. S2A, B). Conditioning on SA summary statistics (p = 6.02 × 10−8; Manhattan plot in Supplementary Fig. S3B, QQ-plot in Supplementary Fig. S2C) or on both MDD and SA summary statistics (p = 4.70 × 10−8) did not significantly change the association strength, suggesting that this is an SD-specific genetic risk locus. NLGN1 encodes a member of a family of neuronal cell surface proteins; members of this family act as splice site-specific ligands for beta-neurexins, and the NLGN1 protein is involved in the formation and remodeling of central nervous system synapses. Additional variants associated with SD with suggestive association p values less than 5 × 10−6, and the genes annotated to them based on positional, eQTL, or chromatin interaction mapping, are listed in Supplementary Tables S4 and S5, respectively. In addition, our study replicated the genome-wide significant finding for rs116955121 [19], which was associated with suicidal ideation and attempt in UKB (p = 0.0003 in our SD GWAS meta-analysis, Supplementary Table S6). Additional replication attempts are described in Supplementary Text S1 and Supplementary Tables S7 and S8. Among the 22 implicated genes associated with SD, 6 were significantly differentially expressed in postmortem brain samples from schizophrenia, autism, and/or bipolar disorder (p value less than 0.05/22 ≈ 0.0023) based on the analysis from the PsychENCODE Consortium (Supplementary Table S9). Of particular interest, EIF4G2 was consistently downregulated in both schizophrenia (p = 1.88 × 10−5) and bipolar disorder (p = 0.001).
Gene-based association using MAGMA additionally identified ZNF28 as being associated with SD (Manhattan plot in Fig. 1B, QQ-plot in Supplementary Fig. S2D, regional plot in Fig. 1E), and the association did not weaken when adjusting for correlated traits including MDD and SA (Manhattan plots in Supplementary Fig. S3C, D, QQ-plots in Supplementary Fig. S2E, F). Additional regional plots for suggestive association signals from the SD GWAS are available in Supplementary Fig. S4. The top 10 genes with suggestive evidence of association with SD are also listed in Supplementary Table S10. No variant associated with SB passed the genome-wide significance threshold in the meta-analysis across five cohorts (n = 8315 cases and 256,478 psychiatric or populational controls, Table 1), either before (Manhattan plot in Fig. 1C, QQ-plot in Supplementary Fig. S2G) or after (Manhattan plot in Supplementary Fig. S3E, QQ-plot in Supplementary Fig. S2H) applying the mtCOJO adjustment for MDD. Additional variants associated with SB with suggestive association p values less than 5 × 10−6, and the corresponding implicated genes, are listed in Supplementary Tables S11 and S12, with regional plots available in Supplementary Fig. S5. Genes with differential gene expression evidence from PsychENCODE are also provided in Supplementary Table S13. The SB "replication" attempt of the ISGC results is described in Supplementary Text S1 and Supplementary Tables S14 and S15. Among the 12 genome-wide significant loci identified in the ISGC and Million Veteran Program SA meta-analysis, two associations with SA (equivalent to our SB endpoint in this study) were replicated in the FinnGen cohort with consistent directionality after accounting for 12 independent tests, including rs17485141 (Table 3).
Gene-based association using MAGMA additionally identified ROBO2 and ZNF28 as passing study-wide significance for association with SB (Manhattan plot in Fig. 1D, QQ-plot in Supplementary Fig. S2I); this association did not weaken significantly when adjusting for MDD using mtCOJO, suggesting that the association is SB-specific (Manhattan plot in Supplementary Fig. S3F, QQ-plot in Supplementary Fig. S2J). It is noteworthy that the study-wide significant gene-based association for ROBO2 corresponds to the most significant suggestive association in the SNP-based meta-analysis. ROBO2 encodes a transmembrane receptor for the slit homolog 2 protein and functions in axon guidance and cell migration. The top 10 genes with suggestive evidence of association with SB are listed in Supplementary Table S10. A few genes showed suggestive evidence in multiple analyses, such as ROBO2, which had suggestive evidence of association with SD (p = 6.21 × 10−5 before adjusting for depression, p = 1.58 × 10−4 after adjusting for depression, and p = 1.46 × 10−4 after conditioning on SA). The same is true for a few other brain-expressed genes such as LIMK2, NRBF2, NRG1, and ZNF710 (Supplementary Table S10). Among the 18 implicated genes across all traits from the gene-based analysis, three were significantly differentially expressed in postmortem brain samples from schizophrenia, autism, and/or bipolar disorder (p value less than 0.05/18 ≈ 0.00278) based on the analysis from the PsychENCODE Consortium (Supplementary Table S16). Of particular interest, LIMK2 was consistently upregulated in ASD (p = 0.0003), schizophrenia (p = 3.18 × 10−9), and bipolar disorder (p = 0.0003).

Fig. 1 Genome-wide significant association signals. Manhattan plots for suicide death GWAS meta-analysis: SNP-level (A), gene-level (B); and suicidal behavior GWAS meta-analysis: SNP-level (C), gene-level (D). Regional plot for NLGN1 (E); regional plot for ROBO2 (F). The dotted line indicates the genome-wide significance threshold of 5 × 10−8. For the regional association plots generated by LocusZoom [95], SNPs in genomic risk loci are color-coded as a function of their r² to the index SNP in the locus, as follows: red (r² > 0.8), orange (r² > 0.6), green (r² > 0.4) and light blue (r² > 0.2). SNPs that are not in LD with the index SNP (r² ≤ 0.2) are dark blue, while SNPs with missing LD information are shown in gray.
Polygenic risk score association with suicide death
Among the 92 PRSs derived from psychiatric, personality, somatic comorbidity, and lifestyle traits, 22 and 41 were associated with SD status in cohort 1 and cohort 2, respectively (Fig. 2), among which 21 were common to both. In both cohorts, PRSs derived from depression, anxiety, stress, insomnia, schizophrenia, and pain were positively associated with SD, while PRSs for smoking-related traits and educational attainment/intelligence were negatively associated with SD. Cohort 2 was much larger in sample size and revealed additional positive PRS associations for bipolar disorder, PTSD, generalized anxiety disorder (GAD), ASD, ADHD, substance use disorder (SUD), neuroticism, and cholesterol/triglycerides, and negative associations for subjective well-being, intracranial volume, and cognitive performance. Many of these traits, including depression, anxiety, pain, neuroticism, schizophrenia, bipolar disorder, and PTSD, were previously reported to be genetically correlated with suicidality [19,26]. A complete list of associations passing multiple-testing correction is available in Supplementary Table S18.
Genetic heritability of suicide death, suicidal attempt, and suicidal behavior, and genetic correlation with other traits
Total liability-scale h²SNP for SD (from this study) and SA (from ISGC) was 5.02% and 5.45%, respectively. Conditioning on MDD, SA, or both reduced the h²SNP for SD to 3.93%, 4.6%, and 3.65%, respectively. Total liability-scale h²SNP for SB (from this study) was 2.98%, while conditioning on MDD reduced it to 2.19%.
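The liability-scale figures reported here depend on the population prevalence assumed for each phenotype (Supplementary Table S3). The standard conversion from the observed scale of a case-control GWAS to the liability scale is shown below; the prevalence and case-proportion values in the example are illustrative only, not those used in the study.

```python
"""
Standard observed-scale to liability-scale conversion for SNP heritability
from a case-control GWAS (Lee et al. 2011). The K and P values below are
illustrative only.
"""
from scipy import stats

def h2_liability(h2_obs: float, K: float, P: float) -> float:
    """Convert observed-scale h2 to the liability scale.

    K: population prevalence of the trait; P: proportion of cases in the sample.
    """
    t = stats.norm.isf(K)       # liability threshold for prevalence K
    z = stats.norm.pdf(t)       # normal density at the threshold
    return h2_obs * K**2 * (1 - K)**2 / (z**2 * P * (1 - P))

# Example with hypothetical inputs: observed-scale h2 of 0.10, an assumed
# population prevalence of 2% and a sample that is ~36% cases.
print(f"h2 (liability) = {h2_liability(0.10, K=0.02, P=0.36):.3f}")
```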
SA (based on ISGC summary statistics excluding SD cohorts) was used as a positive control in this study, and the detailed results are provided in Supplementary Text S1. SB was genetically correlated with SA (rg = 0.72, p = 3.09 × 10−8), pain (rg = 0.48, p = 8.15 × 10−8), educational attainment (rg = −0.36, p = 2.51 × 10−7), ever smoker (rg = 0.37, p = 1.41 × 10−6), schizophrenia (rg = 0.43, p = 6.06 × 10−6), and insomnia (rg = 0.44, p = 1.28 × 10−5). SD was also genetically correlated with pain and educational attainment. To examine whether these genetic correlations were mediated by depression, rg was estimated with the same traits using the SB|MDD and SD|MDD results. For SB and SD, the genetic correlations with ASD, anxiety, and PTSD were not significant before conditioning, while the genetic correlations with ADHD, insomnia (for SD|MDD only), risk tolerance (for SB|MDD only), bipolar disorder, and neuroticism (for both SB|MDD and SD|MDD) became nonsignificant after conditioning. After conditioning on both MDD and SA, most of the genetic correlations with SD were nonsignificant, except for the negative correlation with educational attainment (Fig. 3 and Supplementary Table S19).
Generalized summary-data-based Mendelian randomization (GSMR)
Bidirectional GSMR analysis suggests that the genetic risks for SA and SD each have a bidirectional causal relationship with the genetic risk for MDD (Supplementary Fig. S6). Specifically, we found significant bidirectional causal relationships in SNP effect sizes for MDD loci in the genetic risk for SAs (pGSMR = 8.30 × 10−63) and SA loci in MDD (pGSMR = 2.65 × 10−9). In addition, we also found significant bidirectional causal relationships in SNP effect sizes for MDD loci and the genetic risk for SD (Supplementary Fig. S6).

Fig. 2 Polygenic risk score association with suicide death. P values plotted are the association p values for the respective PRS and suicide death status in cohorts 1 and 2, respectively. Bars filled in red denote a negative association coefficient, while green ones denote a positive association coefficient. ADHD attention-deficit/hyperactivity disorder, ASD autism spectrum disorder, BIP bipolar disorder, CAD coronary artery disease, ICV intracranial volume, MDD major depressive disorder, PTSD posttraumatic stress disorder, SCZ schizophrenia, SWB subjective well-being, TG triglycerides, WHR waist-to-hip ratio.
DISCUSSION
Using a total of 3765 SD cases and 6572 populational controls, we identified one locus in neuroligin 1 (NLGN1) with genome-wide significance. The top SNP is rs73182688, with p = 5.48 × 10−8 before and p = 4.55 × 10−8 after mtCOJO analysis conditioning on depression (Howard et al. [52], using summary statistics without the 23andMe cohort). Conditioning on SA (ISGC summary statistics [26] without Utah and two other SD cohorts) did not significantly change the association strength (p = 6.02 × 10−8), suggesting this is a locus with SD specificity. Gene-based association using MAGMA additionally identified ROBO2 and ZNF28 as being associated with SB in the meta-analysis across five cohorts consisting of 8315 cases and 256,478 psychiatric or populational controls, among which ZNF28 was also associated with SD. The gene-set enrichment analysis identified the MHC Class Ib receptor activity pathway as being significantly associated with SD. Using a completely independent sample set from FinnGen, we replicated two genome-wide significant findings from the ISGC and MVP meta-analysis, including variants near SOX5.
Among the genes near the replicated variants associated with SA, SOX5 was previously associated with schizophrenia, depression, neuroticism, chronotype, chronic back pain, C-reactive protein levels, cortical thickness, and surface area [52, 58-63], and is among a panel of genes contributing to the bidirectional causal effect of neuroticism on MDD [64]. The variant associated with depression (rs78337797) and the variant associated with SA (rs17485141) are in weak LD with each other (r² = 0.13, D' = 0.75), suggesting allelic heterogeneity and a pleiotropic effect of this locus. SOX5 encodes a transcription factor important for embryogenesis and cell fate determination, with its expression level highest during fetal development (Supplementary Fig. S7). A full list of reported genome-wide significant associations annotated to SOX5 is provided in Supplementary Table S20.
Among the genes associated with SD, NLGN1 encodes a member of a family of postsynaptic neuronal cell surface proteins. Members of this family act as splice site-specific ligands for presynaptic β-neurexins and are involved in the formation and remodeling of central nervous system synapses [65,66]. Another variant in NLGN1 (in weak LD with the variant reported herein) is associated with SA in the ISGC and MVP meta-analysis, suggesting allelic heterogeneity in this gene. Neurexin 1 variants were previously implicated as risk factors for SD [67,68]. The top associated variant rs73182688 in NLGN1 in this study is nominally associated with BMI (p = 0.0006), depression (p = 0.004 in FinnGen R5), and personality disorder (p = 0.004 in FinnGen R5) (Supplementary Table S21). Other variants (SNVs and CNVs) in NLGN1 and/or other family members, NLGN3 and NLGN4, were previously associated with PTSD, autism, obsessive-compulsive disorder, and depression [69-75]. The rs6779753 variant in NLGN1, associated with PTSD as well as with the intermediate phenotypes of higher startle response and greater hemodynamic responses (assessed using functional MRI) of the amygdala and orbitofrontal cortex to fearful face stimuli, was not in LD with the variant identified herein. In our study, rs6779753 was only suggestively associated with SD (p = 0.06). NLGN1 was also implicated in a preclinical model of depression [76]. In addition, presynaptic neurexins and cytoplasmic partners such as SHANK have also been implicated in autism, schizophrenia, and mental retardation [73, 77-83]. Overall, there is substantial genetic evidence implicating the NRXN-NLGN pathway in suicide and other psychiatric conditions.
ROBO2 and ZNF28 were associated with SB in our study. Variants in ROBO2 were previously reported to be associated with circadian phenotypes such as self-identification as a "morning person" and chronotype [62], smoking initiation [84], and highest math classes taken [85], all reaching the genome-wide significance threshold. These genome-wide significant variants are in LD with the top variant rs7649370 from this study (r² > 0.86). ROBO2 was also implicated in schizophrenia and psychopathic tendencies, although a subsequent study replicated the association with the emotionally reactive, impulsive aspects of conduct disorder, but not with the concurrent risk for psychopathy [86-88]. ZNF28, on the other hand, was not previously associated with psychiatric disorders. A few genes with gene-level suggestive association evidence across multiple analyses are discussed in Supplementary Text S3.
Consistent with the previous SD GWAS [22], elevated PRSs for disinhibition, MDD, schizophrenia, and ASD were observed in SD cases in this study. The previously reported PRS associations for child IQ (p = 0.03 in cohort 1 and p = 0.95 in cohort 2) and loneliness (p = 0.04 and p = 0.06) were only nominal in this study. This study, however, uncovered additional evidence of elevated PRSs for anxiety, insomnia, stress, smoking, alcohol use, and pain among SD cases, consistent with epidemiological evidence, known risk factors and warning signs noted by several suicide- or health-focused organizations, a previous meta-analysis on predictors of suicidal thoughts and behaviors, and/or previously reported genetic correlations [20, 26, 89-92]. Reduced PRS for educational attainment and intelligence was associated with SD. Cohort 2 revealed additional positive PRS associations for bipolar disorder, PTSD, GAD, ASD, ADHD, SUD, neuroticism, and cholesterol/triglycerides, and negative associations for subjective well-being, intracranial volume, and cognitive performance. While the last SD GWAS did not reveal significant genetic correlation results [22], this study revealed significant genetic correlations of SD with pain and with educational attainment, consistent with the PRS association findings. We had expected to identify a causal relationship from SAs to SD. However, the absence of a significant causal relationship in the current study may reflect limitations in the statistical power of the GWAS summary statistics for both SA and SD, as few genome-wide significant findings have been reported to date. Alternatively, the non-significance of this relationship may reflect the observation that the majority of people who die by suicide die on their first attempt [93,94]. Thus, a large fraction of individuals who died by suicide would be expected not to have had a prior attempt. Conversely, among patients who require medical attention for a suicide attempt, only a relatively small fraction die on the first attempt [94].

Fig. 3 Genetic correlation between suicide death, suicidal attempt, and suicidal behavior (p1) and selected psychiatric/non-psychiatric traits (p2). Triangle points indicate genetic correlations that passed the Bonferroni-corrected significance threshold (p < 2.78 × 10−3). Error bars represent the standard error. ADHD attention-deficit/hyperactivity disorder, ASD autism spectrum disorder, BIP bipolar disorder, MDD major depressive disorder, PTSD posttraumatic stress disorder, SA suicidal attempt, SB suicidal behavior, SB|MDD SB results after conditioning on MDD, SCZ schizophrenia, SD suicide death, SD|MDD SD results after conditioning on MDD, SD|SA SD results after conditioning on SA, SD|MDD and SA SD results after conditioning on MDD and SA.
The study has a few limitations. First, even though this is one of the largest GWAS analyses of SD, the effective sample size is still more modest than that of the previous SD GWAS or the SA GWAS reported by the ISGC. Even though we genotyped an additional ~1200 SD cases, the net increase in SD case sample size after case-control matching was just over 300, while the control sample size was smaller than in the previous SD GWAS. However, this SD GWAS is the first well-powered GWAS to leverage matching arrays across cases and controls, which we believe is an important strength. Secondly, we prioritized depression and SAs for conditional analysis. There are certainly other psychiatric conditions that would have warranted such a test, and the conditional analysis is therefore not exhaustive. In addition, the summary statistics from Howard et al. are likely to be powerful enough for conditional analysis given that 102 independent variants were discovered, while the summary statistics from the ISGC analysis may not be, as the genome-wide significant loci are only beginning to be unveiled. Thirdly, even though we tried to provide replication evidence for suggestive association findings from the ISGC, the samples used in this study partially overlapped with the ISGC samples and are therefore not completely independent. The FinnGen replication of the ISGC and MVP meta-analysis results, on the other hand, is completely independent. Finally, the GSMR analysis is only a first step toward exploring the intricate relationships between these traits. The selection of SNPs used a relaxed p value threshold simply because of the limited power of the available GWAS summary statistics, and this analysis will certainly benefit from future growth in sample size.
Attitudes of international non-governmental employees towards working from home in Jordan
This research aimed to identify the perceptions of employees working in international non-governmental organisations in Jordan regarding working from home and the support they need from their organisations and management in order to be productive while working from home. It also examined the relationship between their perceptions of working from home and productivity. Employee perceptions were measured by distributing a questionnaire based on self-reported measures of perceptions. The results indicate a positive, statistically significant relationship between working from home and productivity. Organisations are encouraged to seriously consider switching to working from home, not only in times of crisis, disaster and disease, but on a permanent, gradual and possibly partial basis.
Introduction
Working from home (WFH) has been increasing for years and is likely to become a characteristic feature of 21st-century workplaces. These trends are linked to the presence of the internet and computers in homes, the need for both parents to work and the contribution that WFH can make to providing flexibility in working hours and improved work-life balance (Gibbs, Mengel & Siemroth, 2021). In early 2020, a complete curfew was imposed in many countries due to the COVID-19 pandemic, and many organisations around the world were forced to switch from working from the office (WFO) to WFH. For many employees, this was the first time they had worked from home; even so, for many of them it was successful (Vyas & Butakhieo, 2020) and they were able to work effectively (Bick, Blanding & Mertens, 2020).
Among the organisations affected by the pandemic were international non-governmental organisations (INGOs). INGOs are organisations, independent of governments (Coppola, 2020), that provide aid in emergencies to communities affected by disasters or wars and engage in development work. The large INGOs typically have headquarters in Europe or North America in addition to regional and national offices around the world (Reis & Bernath, 2017). In Jordan, there are 64 INGOs and UN agencies (UNHRC, 2021), representing a significant proportion of employment. Accordingly, the pandemic greatly affected the working mechanisms of these organisations, and, as a result, remote work is no longer just an option; it has become imperative for INGOs to look for ways to continue operating in such circumstances.

This research aimed to identify the perceptions of employees working in INGOs in Jordan regarding WFH. Additionally, the research sought to explore how organisations can support employees to improve their productivity while working from home. These research findings can be useful for organisations operating in or seeking to operate in conflict areas, as well as for organisations trying to promote diversity and inclusion policies, giving them a broader relevance.
Literature review
Several recent studies have looked at the impacts of WFH on productivity and working time. For example, Gibbs, Mengel and Siemroth (2021) studied the productivity of work in offices and compared it with work in homes using data from more than 10,000 professional respondents. They found that, on average, total monthly working hours increased by about 30% and overtime working hours increased by 18%. While productivity decreased by about 20%, no significant change was observed in the average completion of assigned tasks.

Some researchers have mentioned concerns about the productivity of WFH employees (Gorlick, 2020), while others have said that WFH increases their productivity (Baker, Avery & Crawford, 2007), offers high flexibility in work, and promotes better work-life balance (Dizaho, Salleh & Abdullah, 2017). Additionally, Purwanto, Asbari, Fahlevi, Mufid, Agistiawati, Cahyono and Suryani (2020) concluded from their study that WFH could benefit employees in other ways, such as saving money on commuting to work. Some studies looked at the productivity effects on supervisors (Lazear, Shaw & Stanton, 2015) or peers (Song, Tucker, Murrell & Vinson, 2018). Gibbs, Mengel and Siemroth (2021) found that WFH was associated with weak interaction among the organisation's employees.

A few researchers have examined how the work environment affects productivity. Gubler, Larkin and Pierce (2018) found that increased physical activity, attention to diet, and other lifestyle changes have a positive effect on productivity among home-based workers. Such changes may become relevant to the long-term effects of WFH (Gibbs, Mengel & Siemroth, 2021).

At the beginning of the COVID-19 pandemic, a number of articles were published forecasting the future of WFH. Dingel and Neiman (2020), for example, analysed which jobs were most likely to shift from WFO to WFH, and concluded that 'Computer and Mathematical Occupations' were the most amenable to WFH. Rebolledo, Vega and Belmar (2021) found positive effects of WFH on employee productivity and in relation to several other dimensions of work, such as promoting digital skills development, improving creativity and productivity, increasing job satisfaction, improving work-life balance, improving business management and increasing societal benefits.
A household longitudinal survey conducted in the UK found that employees who work from home believed that there was no change in the level of their productivity whether they worked in an office or in the home (Etheridge, Wang & Tang, 2020). Bellmann & Hübler (2020) found that working remotely might have no long-term effect on work-life balance and that WFH increased job satisfaction temporarily. Rebolledo, Vega and Belmar (2021) and Barrero, Bloom and Davis (2020) highlighted the importance of a quiet workplace environment and the availability of material aspects (separate rooms, internet, electronics, etc.), in addition to individual competencies and skills (time management, discipline, self-motivation, self-orientation, etc.), as factors affecting the productivity of employees and their ability to WFH.

Rubin, Nikolaeva, Nello-Deakin and Brommelstroet (2020) point out that WFH saves commuting time, especially for those who use cars. Additionally, WFH employees spend less time communicating with workmates and on coffee breaks, allowing more time for work, increasing the number of working hours and thus productivity. Counterbalancing this, however, during WFH employees spend more time in meetings and video calls, leaving them less time to work uninterruptedly (Gibbs, Mengel & Siemroth, 2021).

Family and childcare responsibilities may affect the productivity of working parents compared to childless workers. Andrew et al. (2020) showed that parents' working time decreased by 3.5 hours per day when WFH, which negatively affected work productivity. Another study by Arntz, Sarra and Berlingieri (2019) showed that with WFH, employees without children worked overtime for at least an hour per week.

Moreover, some studies have drawn attention to other working features and characteristics that affect productivity. Etheridge, Wang and Tang (2020) found that WFH had different impacts on productivity depending on the type of job. For workers in jobs that are fit for a home office, WFH increases productivity, while it reduces productivity for low-paid workers. A variety of aspects could account for this, including the nature of the work, the availability of resources and the level of support provided by the organisation. Moreover, Etheridge, Wang and Tang (2020) highlighted some potential negative impacts of WFH on low-paid workers' well-being, which raised important questions about the equity and fairness of WFH policies. Furthermore, Dutcher (2012) found that employees performing creative tasks showed an increase in productivity during WFH, whereas WFH had a negative impact on the productivity of employees dealing with dull and routine tasks.
The Organisation for Economic Cooperation and Development (OECD) defines productivity as 'a ratio of a volume measure of output to a volume measure of input use' (OECD, 2001). However, the concept of productivity and its measurement is not straightforward. A closer examination of the productivity literature highlights the lack of clarity on how these outputs and inputs should be defined or measured and how they relate to the goals of the organisation. Palvia (1991), similarly, described productivity as the relation of inputs to outputs, which can be measured by dividing the quantity of outputs (products, services) by the quantity of inputs (labour, capital). Others have adopted a different approach. Dunnette and Hough (1991), for example, defined productivity as 'how well a system uses its resources to achieve a goal'. Such definitions do not always work for all sectors. In the not-for-profit sector, the most important purpose of the organisation is not to make money but to generate impact. However, and based on lessons learned from working in this sector, in order to evaluate productivity, charities, for instance, need to spend time defining how they measure their impact. INGO measures of productivity are likely to look very different from those of many for-profit businesses. In addition, the influence of the pandemic and its impact on changing working practices, as observed in this study, will probably require charities to rethink and redefine their measurable impact and productivity measures compared with how they have done so in the past.

In general, previous studies have shown that WFH has an effect on employee productivity in different business firms (Rebolledo, Vega & Belmar, 2021; Gibbs, Mengel & Siemroth, 2021; Gorlick, 2020). However, there have been no studies on employees in INGOs and how their productivity can be improved while working from home. This study focused on INGOs and aimed to identify factors that may be related to employees' productivity while working from home, and to identify the support they need.
Methodology
As mentioned in the above section, the research design was based on an extensive review of previous studies. The aspects related to WFH that emerged as important from this review were workplace environment, individual competences and skills, time management and family responsibilities. The specific aspects of work related to productivity were the management of time during the day to complete tasks and the quality of the tasks performed.

A survey questionnaire was developed based on the survey instruments used in previous studies. The study sample was randomly selected and consisted of 44 participants, employees from both managerial and non-managerial levels drawn from 54 INGOs operating in Jordan (Jordan Humanitarian Partners Directory, 2022). The questionnaire consisted of two parts; the first part captured the demographic profile of the respondents while the second part focused on the employees' perceptions. A four-point Likert scale was used: strongly agree, agree, disagree and strongly disagree. The questionnaire was reviewed by specialists to ensure its validity and modified based on their comments and feedback. The reliability was tested and it was found that Cronbach's alpha = 0.79, which means that the reliability is acceptable.
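For readers who want to see how such a reliability figure is obtained, the sketch below shows how Cronbach's alpha is computed from an item-level response matrix. The number of items, the simulated respondent data and the variable names are assumptions for illustration only; the study's actual item data are not reproduced here.

```python
"""
Sketch of how Cronbach's alpha is computed from item-level Likert responses.
The survey reported alpha = 0.79; the 4-point responses below are simulated
purely to show the calculation.
"""
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(7)
common = rng.normal(size=(44, 1))                     # shared "attitude" factor
responses = np.clip(
    np.rint(2.5 + common + rng.normal(scale=0.8, size=(44, 10))), 1, 4
)

print(f"alpha = {cronbach_alpha(responses):.2f}")
```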
Results
Details of respondents are shown in Table 1. There were more female respondents than male respondents. More than half of the respondents were without family responsibilities. About 30% of respondents were in managerial-level positions. The percentage of respondents engaged in desk/office work was higher than that for mixed work (desk and field work). The work of about two-thirds of the respondents was regarded as routine, while the work of the remaining one-third was creative (non-routine).

The distribution of respondents according to their work department is shown in Table 2. The highest percentage of respondents (38.6%) was from programme/project implementation departments, followed by 25% from information/monitoring and evaluation departments and then 13.6% from advocacy/communication departments.

All respondents mentioned that they worked from home during the curfew due to the COVID-19 pandemic. 54.5% of them reported working from home at the time of the study, 40.9% said sometimes (2-3 days a week), and 4.5% all the time.

Respondents' perceptions of working from home are shown in Table 3. The majority of responses (over 60%) were positive. Positively evaluated aspects related to working from home were: spending less time on coffee breaks, smoking and side conversations; saving a lot of commuting time (driving to work, transportation, etc.); a quiet workplace; the availability of tools and materials (e.g. headphones, internet, desk, printer, etc.); and taking care of the family. However, more than 40% of the responses were negative. These negative aspects of working from their homes in Jordan were: spending too much time in meetings and video calls to understand and complete tasks, experiencing a lack of creativity and having difficulty solving problems. Respondents' perceptions of the productivity of working from home are shown in Table 4. Over 97% of responses were positive about being able to get things done well, while over 86% of responses were positive about managing time well and so finishing tasks on time.
Based on respondents' opinions, the top four forms of support that employees need are:
• Around 80% of respondents mentioned the importance of ensuring that team communication between employees is transparent, frequent and consistent.
• About 73% of respondents mentioned celebrating employee success and making individual employees feel appreciated for their hard work.
• More than 71% of respondents stressed the need to listen to the needs of employees and make extra efforts to understand the challenges and fears they may face.
• Over 64% of respondents emphasised supporting the professional and personal development of employees.
About 75% of respondents expressed the view that working from home would be beneficial for organisations working in conflict areas. One of them stated: 'It creates a safer environment for the employee to stay at home rather than travelling to work whilst living in a conflict area. Also, it will bring peace of mind to the employee staying with his family during these difficult times.'

Around 80% of respondents mentioned that increasing the weekend to two and a half days would have a positive impact on productivity. As one put it: 'Because it will give employees more time to disconnect from work and rest. Also, it will impact mental health greatly as there will be extra time to do whatever they want to relax and come back to work rested and energised.'

Another respondent stated, 'The more you take care of your employees, the more they are productive'. However, someone who objected to the idea of an extended weekend stated, 'The amount of work couldn't be done in 4.5 days, we usually work after working hours to complete the work requested'.
The descriptive statistics (mean and standard deviation) of work-from-home and productivity are shown in Table 5 of the Appendix. As shown, the averages of work-from-home and productivity are very close to the 'agree' point on the measurement scale.

Outputs of the one-sample t-test are shown in Table 6 of the Appendix. As shown in Table 6, responses related to working from home and productivity were not significantly different from the 'agree' point on the scale.

Results of the correlation test are shown in Table 7 of the Appendix. The results indicate a statistically significant positive correlation between working from home and productivity. Results of the regression tests are shown in Tables 8, 9 and 10 of the Appendix. The results indicate that more than 56% of the variation in responses to the productivity variable can be explained by responses to the work-from-home variable. The results also indicate that there is a statistically significant positive relationship between working from home and productivity.
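The reported analyses can be illustrated with standard statistical routines. The sketch below assumes a 1-4 coding of the Likert scale with the 'agree' point at 3 and uses simulated composite scores; it is intended only to show how the one-sample t-tests, the correlation, and the regression above can be computed, not to reproduce the study's data.

```python
"""
Sketch of the reported analyses on the two composite scores: a one-sample
t-test against the 'agree' point of the scale, a Pearson correlation, and a
simple regression of productivity on work-from-home perceptions. The scale
coding and the simulated scores are assumptions for illustration only.
"""
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 44
wfh = np.clip(rng.normal(3.0, 0.4, n), 1, 4)                          # WFH perception score
productivity = np.clip(2.9 + 0.6 * (wfh - 3) + rng.normal(0, 0.25, n), 1, 4)

# One-sample t-tests against the 'agree' point (3 on the 4-point scale).
print(stats.ttest_1samp(wfh, 3.0))
print(stats.ttest_1samp(productivity, 3.0))

# Correlation between the two composites.
r, p = stats.pearsonr(wfh, productivity)
print(f"r = {r:.2f}, p = {p:.3g}")

# Simple linear regression: productivity ~ WFH; R^2 corresponds to the
# share-of-variance-explained figure reported in the text.
slope, intercept, r_value, p_value, se = stats.linregress(wfh, productivity)
print(f"slope = {slope:.2f}, R^2 = {r_value**2:.2f}, p = {p_value:.3g}")
```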
The results are consistent with previous studies by Rebolledo, Vega and Belmar (2021) and Barrero, Bloom and Davis (2020), who found that the employee needs a calm environment and the right tools to perform work from home, in addition to having certain skills that would affect productivity while working from home in terms of his/her ability to plan well, think creatively and complete tasks with high quality. Moreover, working from home saves time commuting to work, time spent on breaks and talking with colleagues about non-work issues, as well as time spent in online discussions, trying to understand and do work as required. Additionally, employees in the organisations said they are able to work and care for their family members (children and/or the elderly) at the same time. Handy (1995) argued that the traditional mindset of management is that employees need to be constantly monitored and supervised. This would hinder the development of working from home, as this mindset can limit the trust and autonomy afforded to employees, which is essential to a remote working culture. However, as mentioned earlier, working from home has been imposed on organisations and employees around the world due to the coronavirus pandemic regardless of managers' inclinations. Many companies have claimed that this will negatively affect business performance, productivity and profitability (Bai, Brynjolfsson, Jin & Wan, 2021).
Discussion
Nevertheless, as the pandemic continued, institutions and companies began to create new methods and systems to help employees get their work done remotely. The companies insisted that their employees attend online courses related to or focusing on the essentials of working from home, remote working, and staying motivated while working remotely, through various platforms such as Kaya, Udemy, Alison and others. These courses were designed to provide employees with the skills and knowledge necessary to enhance their job performance and productivity while working remotely. Furthermore, new policies were put in place to ensure productivity and employee well-being (Wilson, 2021). Companies also increased the use of ICT systems and applications (such as Zoom, Google Hangouts, Webex, WhatsApp, etc.) for messaging and meetings, to keep in touch with their employees and improve efficiency (Rachmawati et al., 2021).

For this reason, despite the fact that many countries now have successful vaccination programmes and that vaccinations are available for many employees, organisations cannot force their employees to take the vaccine and return to work in offices and companies. They have to give employees the option of working from home partially or completely in some cases, such as for health reasons or pregnancy (Riva, Paladino, Paleari & Belingheri, 2022).

Working remotely was not new to the INGO sector, since these organisations work worldwide in different regions. Moreover, international NGOs have employees working in dangerous places and war zones; with the increase in the effectiveness of technology to support remote work, fewer employees are needed in the field and organisations can increasingly provide their services without risking the lives of their employees. Not only that, INGOs can now reach a greater number of beneficiaries by recruiting staff already living in a hazardous area while ensuring that services are well delivered using the tools and methods of communication gained from the WFH experience during the pandemic.

From a financial point of view, organisations may save a lot of operating expenses and costs, such as labour and material costs, or office expenses by using WFH (Lister & Harnish, 2011). This is very important for non-profit organisations, as many of these expenses usually go towards salaries and incentives, such as transportation, living allowances, office rent, and expenses. The larger the office, the greater the need for office supplies.
Abu Nar and Schaefer (2022) reported that, during an interview with a manager of a Norwegian INGO, the manager expressed the belief that effective management should be given priority, further stating:
It is quite difficult and inhibiting, many times some individuals or some churches and companies give us money, however, they want to make sure that not too much money is spent on the administrative part.This is not the case when the donations come from the government.
This was confirmed by an expert former project manager in one of the organisations operating in Jordan, who mentioned that saving on these expenses will make charities more attractive to donors, who are more likely to provide donations (funds) to organisations whose administrative costs are low because they believe that more money will go to projects and beneficiaries. This will be very useful for organisations that have difficulty obtaining funding.

Moreover, private companies that provide financial, management, information technology (IT), software development and consulting services, such as EY (Ernst & Young, 2020), have realised that working from home is the way forward and started to implement it. For example, they may hire workers or consultants from different countries at low salaries, offering fewer incentives, and without supplying a place to work or office supplies, especially when projects need a large number of employees and the minimum wage is high and it can be expensive to hire them all from one country. On the other hand, WFH has opened a bridge whereby companies can now hire specialists and professionals in a particular field from their home countries to work for certain hours or under a short-term contract, or just to accomplish a specific task.

Information security and confidentiality have been major concerns when it comes to working from home (Sturgeon, 1996). However, the pandemic has forced companies to improve their IT systems. This is very expensive, but it has opened up new opportunities and allows for new trends, such as 'bring your own device' (BYOD) (Laudon & Laudon, 2013), which calls for allowing workers to use the personal devices they already own for work purposes. This can enable companies and organisations to reduce their expenditure on hardware, especially items with high specifications (such as desktop computers, laptops, tablets, monitors, accessories, etc.).
Employees can also use their mobile devices to access work emails and other job-related applications that allow them to solve urgent problems anytime, anywhere, carry out tasks or even attend meetings on the way. Therefore, even if improving information security costs a lot now, it creates many opportunities and has great potential for the future.

Likewise, for the worker, working from home partially or completely will affect the daily expenses of the individual, as working from home will save the costs of daily transportation and ordering of food (Lister & Harnish, 2011), which might be equal to the salary of a low-income individual. Although remote work may result in extra expenses for employees, particularly with regard to home office setup and utilities, the overall cost difference may not be considerable, especially for those who do not live alone. Furthermore, the savings from remote work might enable employees to meet their financial obligations and achieve their future aspirations, even with the added expenses.

On the other hand, working from home will likely reduce micromanagement, which may reduce psychological pressure on employees, enabling them to achieve better performance and productivity, increase their self-confidence, and leave them room for self-reliance to solve problems and complete tasks creatively. In addition, working from home will save commuting time, as employees are usually stuck in stifling city traffic, requiring them to leave early as well as come home late in the evening after work. Alternatively, this time might be used for other important activities, such as relaxing, spending time with family, taking up hobbies, learning new skills, studying, as well as having enough time to do part-time work such as consulting, private tutoring or working on research.

However, to ensure the success of working from home, organisations must hire people who have the ability to work remotely or from home in different circumstances. In the foreseeable future, it is likely that remote work proficiency could be deemed a prerequisite for employment, considering the growing trend towards remote work arrangements and the benefits they offer both employers and employees. In addition, organisations should have clear protocols to cover working remotely or from home. They should provide training on 'how to work from home effectively', which has spread widely during recent years, and choose ICT systems and software that suit the work environment, in addition to providing the employee with work performance necessities such as laptops, internet and headphones. Since the work is carried out in the form of a team, it is important from time to time to conduct employee meetings and recreational activities to get to know each other and reduce the level of stress.
Conclusion and implications
It can be concluded, based on the results of this research, that employees who work from home and have excellent individual competencies and skills (e.g. time management, creativity and problem-solving skills) can be productive. Furthermore, working from home can enhance the productivity of employees with family responsibilities (e.g. childcare and/or elderly care), as it provides the opportunity to work and take care of their dependents at the same time. Other factors increasing the productivity of employees working from home include saving on commuting time (driving to work, transportation, etc.) and reducing time spent on coffee breaks, smoking and side conversations. However, productivity was negatively impacted by spending more time in meetings and video calls. In addition, the study showed that these enhancements to productivity could only be achieved by having a quiet workplace and physical materials, such as headphones, internet, desk, printer, etc., in the home.

Furthermore, the research found that there is a need for management to recognise home-based employees and celebrate their successes. Here, our results confirm those of Deeprose (1994), who highlighted the importance of recognising and rewarding employees for increasing their performance, maintaining talented employees and growing the organisation's profits (mentioning 150 ways to do so, such as offering privileges, gifts and awards and organising special events). There is also a need for management to ensure that team communication between employees is transparent and consistent.

A limitation of this study is that it relied on the perceptions of 44 employees working for INGOs in Jordan. The relatively small sample size may affect the generalisability of the results to larger populations.

Even though this is a small study, it could pave the way for further studies investigating the switch to working from home, not only in times of crisis, disaster and disease, but on a permanent, gradual or possibly partial basis. It is important to provide the necessary requirements for the success of working from home, so that goals are achieved as if employees were working from the office. Working from home also has positive repercussions at the national level in terms of reducing expenses, easing traffic congestion, and protecting the environment. There are jobs and tasks that do not need to be completed in the office, especially those that do not require face-to-face communication with others.

Accordingly, it is suggested that organisations analyse their work and functions and categorise them into those that can be done from home and those that require presence in offices. Organisations should study the feasibility and possibility of working part-time from home for a number of days and the rest of the days in the office (hybrid working). Organisations can use modern technical means that allow remote communication and direct meetings through various platforms to facilitate working from home and holding virtual meetings when necessary. Ensuring the success of this transformation requires the availability of its components, including equipment, capabilities, and intelligent monitoring and control tools.
Table 2: Distribution of respondents according to work department
Table 3: Respondents' perceptions of working from home (%)
Table 4: Respondents' perceptions of the productivity of working from home
Border and Memory in the village of Koshovice 1
This article focuses on the village of Koshovice, Albania, whose residents are part of the officially recognized Greek minority. The local perceptions of the community are discussed as linked to the Albanian-Greek border and its presence in the collective memory. After the borderline was established in 1913, local residents found themselves divided between the two neighboring countries. The ethnographic data collected underline the experiences and the everyday practices of the villagers of Koshovice, especially during the period of the Albanian socialist state between 1945 and 1991, when the border became almost impenetrable. The article then discusses the changes after the fall of socialism and the opening of the border in the early 1990s, especially showing how the local borderland communities are still connected to each other nowadays despite the inter-state division.
When World War II ended, Albania was transformed into a socialist state under Enver Hoxha's government, and the national border between the two countries closed. As a result, the local communities, which had developed multiple ties between the residents of the two villages, such as economic exchange, cultural events and bonds of kinship, became separated and isolated (Nitsiakos and Mantzos, 2008, 257). Furthermore, the socialist government imposed strict restrictions on the movement of people, both abroad and within the country. In that way, the authorities established the immobility of the population as one of the state's principles (Gregorič Bon 2017, 303). Under these conditions, communication between the two villages became even more restricted and limited than before.
This article focuses on the concept of the border and its social consequences through the prism of Koshovice. To this end, we discuss how the people of Koshovice interpret the meaning of the border. 2 This research aims to explore two points: first, how the locals have challenged the strictness of the border through everyday practices, and second, how the border presence has affected the divided communities throughout the years. These two aspects will be evaluated within the analytical frame of the collective memory of the community.
Koshovice
Our ethnographic field site lies about 500 meters away from the borderline. As mentioned above, the demarcation of the border between Albania and Greece in 1913 split the village in half and cut it off from most of its agricultural and livestock areas, which in turn became part of Greek territory. According to Voula, one of our interlocutors who lives and works in Athens, Koshovice and Aghia Marina were considered two neighborhoods belonging to the same village. 3 The area of Aghia Marina, formerly called Vatsounia, was a natural continuation of the main settlement and included the arable land of the village. Gradually, an independent settlement in the Vatsounia region was created by Koshovice's residents who chose to settle permanently in Greece while keeping contact with their relatives living on the other side of the border. With the passage of time, this new settlement was renamed Aghia Marina by the Greek authorities. According to Takis, a 40-year-old resident of Koshovice, the distinctive feature of this new settlement is the layout of its houses, which lie quite far from each other: 'Koshovice was the center of the village. In Aghia Marina, there were only sheepfolds and agricultural fields. Over the years, villagers from Koshovice settled permanently in Aghia Marina and built their homes where their fields used to be. For this reason, the houses are far from each other.' This arrangement was upended with the establishment of the Albanian socialist state in 1944. During this period the supervision of the Greek-Albanian borderline intensified, and the military guarded the border on both sides. As Maria, a 60-year-old woman who lives in Koshovice and Athens, told us: 'After World War II, the border was guarded by the army, while barbed wire was placed on the paths that connected the two settlements.' Thus, during the socialist period most of the villagers continued to work as land workers, but agricultural production and the arable land on the Albanian side were controlled by the local state-owned agricultural cooperative.
According to the collected interviews, the majority of the village infrastructure collapsed after the fall of the socialist state in the early 1990s. The agricultural cooperative was disbanded, while the community's school and the cooperative's stores ceased to operate. Furthermore, most of the village's population migrated to Greece due to the reopening of the border. Today, Koshovice is a small and isolated village. The road that connects the village with the main highway is poorly maintained and the bridge that was located at the entrance of the village has collapsed, making the road very dangerous for vehicles. In Koshovice there are a few residents during the winter, most of whom are elders. In the summer, the village population increases due to the return of the villagers who have migrated abroad, for their summer vacations.
Photo No. 1 -The stone that defines the borderline between Greece and Albania is located 500 meters from the village. 4
Border and the collective memory of the community
The concept of the border is a key point of our research, and provides the basis for this article. Anthropologist Sarah Green (2018, 67) argues that the concept of borders, as straight lines on the map, is a result of specific political and historical perceptions, which ignore the complex reality that exists in border areas. For many theorists, borders represent powerful political spaces in which there are possible cracks in homogenizing discourses based on ethnicity, gender, race, and sexuality. Therefore, they conclude that borders are counters that have the ability to consolidate power or to challenge it through the existence of hybrids and crosses (Cunningham and Heyman 2004, 291). Following the approach of Donnan and Wilson (2010), we conclude that it is not possible to exclude the political context of the meaning of the border or to assume that its dimensions are shaped in a passive way. Although such dimensions are actively shaped and often visible, there are always some implicit and invisible aspects that underlie the concept of the border. For those who live in the borderlands, it is a common practice to perceive the border as something invisible (Donnan and Wilson 2010, 5). Borders can also be understood as 'gates', temporarily and selectively open or closed. Whatever the prevailing condition of their penetrability, populations on the two sides of the border are often related and manage to communicate with each other. For this reason, an anthropological analysis of borders and borderland areas must consider people and institutions from different ethnic groups and nations, while also taking into account actions that lead to border-managing decisions either by the state or by the local residents. In the end, it is important to remember that people living on the borders are often agents in the construction, negotiation and maintenance of the border, and thus influence social, financial, and political changes beyond their locality and sometimes beyond their state.
Collective memory is the second term we employ for our analytic frame. This research focuses on how the local community remembers its past, especially the separation period. The term collective memory was introduced by the French philosopher and sociologist Maurice Halbwachs (1992), who argues that individual memory is a part or even an aspect of collective memory. In this view "memory is not just about the person, but about the community and the collective as well" (Abrams 2010, 114). At the same time, it is evident that memory does not remain unchanged throughout the years, but it is a "dynamic process of reconstruction, through which the traces of the past interrelate, to tell a story" (Abrams 2010,114). For Marie-Claire Lavabre memory is a "space of communication" (Todorova 2014, 7), while Szacka (Main 2014, 98-99) considers that collective memory is a representation of a collective past, which is built on individual and personal memories. These personal memories are transformed within the framework of the common cultural perceptions shared by a community.
Community "is symbolically constructed as a system of values, norms, and moral codes, which provides a sense of identity within a bounded whole to its members" (Cohen 2001, 9). The community of Koshovice is geographically well defined because the people consist in a small social group located at Koshovice who, as Keesing puts it, "recurrently interact in an interconnected set of roles" (Amit and Rapport 2002, 14). However, at the same time its members also belong to the imagined community of the Greek nation, which "is imagined because the members will never know most of the fellow members, yet in the mind of each, lives the image of their communion" (Anderson 1983, 6).
Identity "can only be understood as a process of 'being' or 'becoming'" (Jenkins 2008, 17). It is not fixed, immutable, or primordial, rather utterly sociocultural in its origins, negotiable, and flexible (Jenkins 2008), while it is produced through the "interaction between relationships of similarity and difference" (Jenkins 2008, 200). It is also important to mention that "collective identity" -the one that we are discussing in this article -"is as much an interactional product of 'external' identification by others as of 'internal' self-identification" (Jenkins 2008, 200).
Historical frame
Albania was recognized as an independent state in 1913, after the London Conference that took place in the same year. Before this period, Albania was part of the Ottoman Empire, and ethnic Albanians or other ethnic groups such as Vlachs, Macedonians, Armenians, and Greeks resided within this area. However, the Greek minority was the most compact ethnic group in political terms (Pettifer 2001, 2). The Greek minority lived mainly in the south of the country, and the Greeks call this region 'Northern Epirus'. This region was the locus of discord in the early 20th century. The Greek state was able to gain independence from the Ottoman Empire after the First Balkan War of 1912-1913 and incorporated a substantial part of the region called 'Epirus', but not the entire area (Baltsiotis 2010, 2).
Although the Greek forces failed to integrate all parts of the Epirus area, Greek irredentists occupied the city of Gjirokastër and its surrounding areas, proclaiming these lands as part of Greek territory. In 1914, after some clashes between the Albanian forces and the separatists, a provisional government was established in Gjirokastër, which declared the region of Southern Albania as autonomous under the name of Autonomous Republic of Northern Epirus (Baltsiotis 2003, 53).
The outbreak of World War I and the political situation in Greece made the Greek government unable to support the newly established autonomous Republic and thus, it was later overthrown. The whole area was returned to Albanian territory in 1925 and the newly formed borders were based on the same borders as were set in 1913 (Pettifer 2001, 5). The Greeks of the area were naturalized as Albanian citizens, but at the same time they were officially recognized as members of the Greek minority by the Albanian state. During that period, the term 'Northern Epirus' was used by Greek nationalists to describe the areas Greece had failed to prevail upon. This term was heavily politicized and invested with irredentist meanings. In the consciousness of the lower Greek social classes, the region of Northern Epirus had transformed into an 'enslaved sister' and the term 'Northern Epirotes' signified a separate part of the Greek population living in Albania (Nitsiakos, 2013, 245).
In 1944, the partisan movement managed to liberate Albania from the occupation forces of the Axis and established a socialist republic under the strict rule of Enver Hoxha. The socialist regime was of benefit to this ethnic minority for two reasons: firstly, because the residents of the minority zone had sided with the guerrilla movement against the occupation forces during the war, and secondly because the Greek minority served as the regime's alibi regarding the respect for human rights 5 at the international level (Baltsiotis 2010, 4). However, the socialist government was suspicious of the Greek minority, and under its overall rhetoric of respect for minority rights lay the desire for the subordination of the minority to the regime (Nitsiakos 2013, 247).
The conditions for the minority in southern Albania changed radically, once again, in the 1990s. In December 1990, the socialist state collapsed, and in March 1991, the first free elections were held. Because of intense financial problems, the political situation of the country remained unstable throughout the decade. The change in political power led to the revival of bilateral relations between Albania and Greece at the inter-state level. This was accompanied by a massive inflow of economic migrants from Albania to Greece, leading to a rise of nationalism in both countries (Valden 2019, 21). During the first years of the transition, nationalist sentiments were observed within the minority and were supported by political and religious circles in Greece.
With regard to the relationship between the minority and the border, the fall of socialism in Albania sparked waves of excitement. The boundary that had separated the two communities for 50 years was now open. According to anthropologist Sarah Green (2010, 309), this enthusiasm reveals the belief that socialism was a parenthesis in the history of the area and that, now that the system had collapsed, things could return to their previous state. But this belief obscures the fact that the national borders had already been established 30 years before socialism.
An important factor that shaped the history of the minority in the post-socialist years is the experience of mass migration to Greece and the depopulation of the villages. Migration was a familiar strategy for these communities even before the socialist period. However, it mainly concerned the male population and was temporary, without affecting the social balance of the community. Instead, after 1991, migration came to concern the whole population, especially those of productive age, regardless of gender.
The collective memory of Koshovice's residents about the border
In this section, we explore a different angle regarding the meaning of the border, the collective memory, and the everyday life in the village of Koshovice. Such concepts will be discussed from the spectrum of before and after the collapse of the socialist state. Firstly, we will take a look at how people who live on the border formulate their memories regarding this subject, during the socialist period and in which ways the border's definitive closure has affected the local communities. Secondly, we will explore the attempts of the Greek minority to renegotiate their Greek identity after the opening of the border.
Our research data reveal that until 1991, the memory of the border, according to the residents of the community, was intertwined with the separation of the two communities. After 1991 the local population's concerns focused on the social phenomenon of migration, and especially on the migration of the Greek minority to Greece. The interviews highlighted that the residents of Koshovice kept in their memory only the definitive closure of the border in 1945, rather than the establishment of the borderline in 1913. For the residents of Koshovice, the definitive closure of the border symbolizes their exclusion from the Greek state, which they recognize as their homeland. Therefore, such an action also refers to their separation from their Greek identity.
During the socialist era, the Albanian state was trying to incorporate the Greek minority into the central state. This occurred through continuous control and surveillance strategies, especially in the Greek communities located near the border, like Koshovice. As a result, the residents adopted the values and habits enforced on them by the Hoxha government. Further, the collective memory of the residents about the border is formed in a variety of ways through everyday practices. The supervision of daily agricultural work is one such example. Due to the geographical location of the state fields, the farmers worked under the strict supervision of border guards, and they were obliged to show their identity documents in order to enter and leave the workplace. As a woman from Koshovice, Voula narrates: One day, there was a boy in Kastaniani who yelled 'grandma-grandma it's me, Kostakis!' but the soldiers did not allow us to answer, not even to raise our heads because we were going to work in the fields, where they were located just before the neutral zone on the fence. There, there was a door that the soldiers were opening every morning for us. We had to show them our identities, and they were counting how many people were there. When we were finishing our work at night, we had to show them our identities again. We didn't have the right to leave the working field; we had very specific working places.
The collective memory concerning the permeability of the border is another important issue that has to be taken into account. The border of Albania opened officially in 1991; however, it had already become permeable by 1989. One interlocutor confirmed that some residents from Koshovice (including herself) had started crossing the border secretly, by paying off the border guards. This secret crossing was conducted in order to visit their relatives in Aghia Marina, from whom they had been separated for the previous 50 years. She also told us about an event in which many residents of the Greek minority who lived near the border one day tried to cut down the fence that separated their communities from the villages in Greece. This action was a symbolic gesture expressing their intention to reclaim their Greek identity.
After the opening of the borders in 1991, many residents from the communities near the border made a number of symbolic gestures that indicated their desire to affirm their Greek identity. For instance, within a short period of time, they established the institution of the Greek Orthodox Church in the area. Such an action is symbolic not only because religious practice had been banned under the previous system, but also because the Christian Orthodox religion is integral to Greek identity (Nitsiakos, 2010). In this spirit, the local women of the area organized massive christenings for many children of the Greek minority. The interviews conducted reveal that, for the residents of Koshovice, their relatives who live in Aghia Marina play a significant part in their lives nowadays. From their point of view, this role is established not only because they had not seen each other for 50 years, but also because the relatives on the Greek side are living proof that the residents of Koshovice (and the other communities of the Greek minority) have a Greek identity. In this sense, such relatives represent their connection with the Greek state.
According to the previous work of Van Boeschoten (2003), during the socialist period the collective memory of the residents was related to the fact that the border was closed and impermeable. As a result, the residents were not able to meet their relatives from the Greek side. When the border opened in 1991, another issue came to influence the collective memory of the village of Koshovice: the phenomenon of migration to Greece. Although the work of Van Boeschoten (2003) does not refer to Koshovice, her assumptions apply to this case. According to Nitsiakos (2010), migration was a common practice in Albania not only after, but also before the opening of the border, although before the opening only the male members of the community would migrate. Therefore, the local community maintained its social cohesion and productivity, as the migrants provided financial support to the families they left behind.
After the opening of the borders and the fall of socialism, much of the infrastructure collapsed, and most of the cooperatives closed. Hence, many residents of Koshovice were forced to migrate. During this period, both men and women took part in these migratory movements. At this point, many communities of the Greek minority, including Koshovice, became depopulated. It is also important to take into account that the remittances sent by the migrants to their relatives who remained in Albania were very valuable. However, the financial problem was irreversible because the entire productive workforce had migrated from the community. As Nitsiakos (2010) mentions, owing to this workforce migration, after 1991 the community's fields were left unsown. As a result, the land could no longer be assigned a utility value, and its signification changed. More specifically, land possession became related to property acquisition and to 'rooting in place'. Owing to this situation, many residents of the Greek minority started feeling nostalgic about the previous order. In their narrations during our research, we observed that local people did not dwell on the previous political situation, but remembered that under socialism there was a sense of unity and productivity in the community, as previous work also elaborates (Kasimis and Nitsiakos 1996, 131).
Another prominent part of the collective memory concerns migration as it was perceived by the residents of Koshovice who migrated to Greece after 1991. Following the opening of the borders, the Greek minority was in a liminal condition because they were fully accepted neither by the Greek state nor by the new Albanian state. These facts are corroborated both by our fieldwork and by the previous work of Nitsiakos (2010). While they were considered Greeks by the Albanians, the Greeks addressed them as Albanians. From the collected interviews, we noticed that it was notably important for the Greek minority to obtain Greek citizenship and to be identified as members of the Greek minority.
The research data show that this was a bureaucratic process that required many years. When they migrated to Greece, most of them became workers undertaking precarious forms of labor, and they were called 'Northern Epirotes'. This political naming carries a negative meaning, which also relates to another issue of collective memory: the difficulty of integrating into Greek society as members of the Greek minority from Albania.
Photo No. 2 -The collapsed bridge that is located outside of the village.
The relations between Koshovice and Aghia Marina throughout the years
It is clear that the relations between these two communities across the border are influenced by the ever-changing nature of that border. As Nitsiakos (2010) has shown, the border can be considered a geographical entity, but a purely geographical perception would be insufficient once its symbolic aspect, and the identities of the local residents, are taken into account. Liminal groups can be geographically marginal, but often their symbolic role is pivotal, because boundaries are usually places where unity with the 'same' is found. Additionally, the inhabitants of Koshovice and Aghia Marina are connected by bonds of kinship, which have been crucial to their relations throughout the years. Therefore, if we consider the historical changes of the region, the significance of the border and of kinship in the perspectives of local people becomes more apparent.
Despite the official establishment of the border between Albania and Greece in 1913, the residents of Koshovice did not perceive it as a strict border, or at least not as strict as the one later set by Hoxha. As Takis, a 40-year-old permanent resident of Koshovice, and his mother Maria detail, 'the border was open. [...] There were paths, but in each one of them there was a gatehouse with guards. From there, you could receive an official document that testified that you were from Koshovice.' Using that document, locals could cross the border to visit their lands in today's Aghia Marina, then Vatsounia. According to another informant, Voula, 'Aghia Marina was the other half of the village. It was our village's fields; it was the huts where people held their sheep. And one day, the church bell was ringing, and people were asking who died. And then other people responded it was the village that died because they took it from us'. Vatsounia kept growing, little by little. As a result, when the border was completely closed in 1945, a fact no one could have predicted, Aghia Marina was already a developed settlement, according to our interlocutors.
As a result, many local families did not see their relatives in Koshovice for approximately 50 years.
This leads us to the question, 'what happened during Hoxha's time as leader of Albania and how did all these relatives lose contact or keep in touch across the border?' There were ways to communicate, not without struggles, but there were also cases in which communication was almost impossible. Regarding the first case, according to Takis and Maria, some people from Aghia Marina were writing letters to their relatives, which frequently arrived open in Koshovice or did not arrive at all. As Voula says, the same thing was happening with packages, from which things were sometimes missing. On the other hand, the case of Thodoris, a teacher whose mother was from Koshovice and whose father from Aghia Marina, is different. Before 1945, his father was working in Greece and often visited Albania for brief periods of time. After the establishment of the border, he returned to his job and the Albanian government considered him a fugitive. As a result, Thodoris could not communicate or see his father for 40 years. This changed in January 1985, with the opening of the customs in Kakavia. This first border opening was arranged between the two countries, in order for some selected locals to see their relatives. According to Voula: Some people came from Greece and selected […] those who had relatives in the village over there [Aghia Marina]. Maybe 20 -15 people from all the villages of the area. They went there [Kakavia] to see their relatives, but the border remained in the middle. At first, they wouldn't open the fence, and the people could only see each other from a distance. Then, they realized they couldn't continue like this, so they opened the gates, and people started crying.
That is when Thodoris saw his father for the first time. Generally, around 1985, the state adopted a looser border control, and people from Koshovice could travel for the first time to Greece and meet their relatives in Aghia Marina. Nonetheless, this trip was not possible without specific state documentation, which of course was hard to obtain.
This was the reality for those who chose the legal way, but there were other options as well. Voula told us a story about her brother, Giorgos, crossing the border from Koshovice to Aghia Marina. There, he visited his aunt, who told him to return immediately. She knew that back in Albania, if the authorities realized that he had escaped, there would be severe consequences for his family. Voula, while pregnant, also crossed the border, following the path to Aghia Marina. It was December of 1989 when, with three other people, they bribed the guards and, after a demanding walking route, arrived in Aghia Marina. There she visited her aunt and her cousins and then returned to Koshovice. We should note that crossing the border during this period was much easier than at the time when her brother decided to cross.
Although the official opening of the border came in 1991, local residents had started to challenge its premises long before that date through numerous symbolic actions, such as the illegal border crossings or the slow but gradual tearing down of the border fences, as Voula mentioned. Nevertheless, it was a specific symbolic action that signified the definitive opening of the border. This re-signification occurred with the first grand festivity, which included collective christenings. According to Voula, the festivity took place in Koshovice after the collapse of the socialist state (around the summer of 1991) and was organized by the Women of Epirus Association. Many villages, such as Llongo, Kastaniani, Sotira, and of course Aghia Marina, took part in this activity. The aim was to christen the minority's children and to establish bonds of friendship between Koshovice and Aghia Marina. The Women of Epirus Association brought Orthodox priests from the villages of the Greek territory. The priests baptized around 70 children, and the people celebrated with food, dance, and polyphonic music. As Voula tells us: 'Now, we keep organizing fests and people from the other half of the village, which is Aghia Marina, are coming to our fests and we are visiting theirs. We celebrate together.' Today, the relations between the two villages are very close, and kinship plays an important role in this. As we have heard from the people of Koshovice, they keep visiting their relatives in Aghia Marina, and they also voted there in the last Greek elections 6 . According to Vlahaki, Tsintsirakos and Kokkinou, "the politics of culture are practiced by members of the Greek minority in Albania in an effort to construct and represent a minority identity which manifests its national affiliation with the population on the Greek side of the border" (Manos 2016, 7). These people were cut off from their relatives and from their Greek identity with the closing of the borders. Now, as they emphasize their kinship with the residents of the other half of the village in Greece, they once more approach their 'lost' Greek identity.
Conclusion
Since the opening of the border, the residents of Koshovice and Aghia Marina have coorganized religious festivities during the summer period. This atmosphere of cooperation belies what has been a complicated past, as we have discussed above, when the border arbitrarily separated ethnocultural groups who share similar social characteristics (Nitsiakos and Mantzos 2008, 255). However, this new air of cooperation is now being threatened by other forces. In recent years, many elderly residents of Koshovice have passed away. As a result, the community fears and mourns for the social death of the village and all the religious festivities have recently been cancelled, thus prolonging the feeling of desolation among villagers. Besides, in regions like Epirus, where the borders are contested, cultural identities can be considered notably dangerous for national cohesion (Green 2010, 300). Similarly, the identity of the 'Northern Epirotes' before the collapse of the socialist system was considered 'dangerous' for the Albanian state. On the other hand, after the opening of the borders and the following massive migration to Greece, the name 'Northern Epirotes' was assigned a negative meaning in Greece. As a result, 'Northern Epirotes' were considered a threat to the Greek national homogeneity.
As for the collective memory of such communities, we must take into account that local narratives about the past are influenced by the experiences of the present (Van Boeschoten 2003). Therefore, it stands to reason that collective memory is a dynamic process that can combine fragments from the past and the present, which means that it can construct a desirable future. That is why nostalgic feelings are clearly noticeable among the members of the community. In this sense, the desirable future for them is the revival of social life in the village of Koshovice.
Regardless of the harsh conditions elaborated above, the people of Koshovice have not resigned themselves to the complete dismantling of the community, and those who live abroad have founded associations and organize traditional fests in Greece. As our interlocutors told us, the people of Koshovice in Greece manage to distribute their time between their current residence in Greece and the village, engaging in the preservation of a sense of community. Many communication bridges with Koshovice have fallen down, but many others persist.
Notes
1 We would like to express our gratitude to the anthropologists from the Border Crossings Network, who organized the Konitsa Summer School and provided assistance to us during our first anthropological research. In particular, we want to share our appreciation for Professor Vassilis Nitsiakos. His companionship in the field and his expert guidance taught us valuable lessons on how to conduct anthropological research. We are also thankful to Aliki Angelidou and Zeliha Nilüfer Nahya. With their remarks, they supported us through this research and encouraged us up to the final form of the essay. Finally, we are much obliged to our interlocutors, who shared with us their deepest thoughts and feelings with courtesy and hospitality. 2 The team, consisting of three undergraduate students at that time, had the opportunity to visit the village of Koshovice during the short ethnographic research within the Konitsa Summer School, which took place from the 26th to the 30th of July 2019. Our ethnographic data are based on unstructured interviews in Greek, on field notes, and on photographs. Furthermore, we had the chance to carry out a structured interview in Athens with a former resident of the village who has lived and worked in the Greek capital for the last 20 years. 3 All the names of the interlocutors in the article are pseudonyms. 4 All photographs in this article were taken by Maritina Vlachaki. 5 In the context of minority rights, the teaching of the Greek language was introduced in the villages of the minority area but not in the cities where a Greek-speaking population was also located (Baltsiotis 2003, 47). 6 The residents of Koshovice are officially recognized members of the Greek minority located in Albania and, for that reason, voting rights in Greek elections have been granted to them. This information emerged during the interviews; while the premise is interesting, any further elaboration on this topic is outside the scope of this work.
A molecular dynamics study of chemical gelation in a patchy particle model
We report event-driven molecular dynamics simulations of the irreversible gelation of hard ellipsoids of revolution containing several associating groups, characterizing how the cluster size distribution evolves as a function of the extent of reaction, both below and above the gel point. We find that in a very large interval of values of the extent of reaction, parameter-free mean-field predictions are extremely accurate, providing evidence that in this model the Ginzburg zone near the gel point, where non-mean field effects are important, is very limited. We also find that the Flory's hypothesis for the post-gelation regime properly describes the connectivity of the clusters even if the long-time limit of the extent of reaction does not reach the fully reacted state. This study shows that irreversibly aggregating asymmetric hard-core patchy particles may provide a close realization of the mean-field model, for which available theoretical predictions may help control the structure and the connectivity of the gel state. Besides chemical gels, the model is relevant to network-forming soft materials like systems with bioselective interactions, functionalized molecules and patchy colloids.
I. INTRODUCTION
Irreversible polymerization is a mechanism of self-organization of molecules which proceeds via the formation of covalent bonds between pairs of mutually-reactive groups. 1,2,3 If monomers with functionality (number f of reactive groups on a monomer) greater than two are present, branched molecules grow by reactions and convert the system from a fluid of monomers into a well connected cross-linked network, giving rise to a chemical gelation process. At the gel point, a persistent network spanning the sample first appears; the system is then prevented from flowing, yet not arrested on a mesoscopic length scale. The development of a network structure results, for example, from step polymerization, chain addition polymerization and cross-linking of polymer chains. 4,5 The same phenomenon is also observed in colloids and other soft materials when the thermodynamics and the molecular architecture favor the formation of a limited number of strong interactions (i.e., with attraction strength much larger than the thermal energy) between different particles. Chemical gelation has been extensively studied in the past, starting from the pioneering work of Flory and Stockmayer 1,6 who developed the first mean-field description of gelation, providing expressions for the cluster size distribution as a function of the extent of reaction and the critical behavior of the connectivity properties close to gelation. More appropriate descriptions based on geometric percolation concepts have, in the late seventies, focused on the non-mean field character of the transition, which reveals itself near the gel point, extending to percolation the ideas developed in the study of the properties of systems close to a second-order critical point. Several important numerical studies, 7,8,9,10,11,12,13,14,15,16,17,18 (most of them based on lattice simulations) have focused on the critical behavior close to the percolation point, providing evidence of the percolative nature of the transition and accurate estimates of the percolation critical exponents. As in critical phenomena, a crossover from mean-field to percolation behavior is expected close to the gel transition. 19 But how the microscopic properties of the system control the location of the crossover (i.e., how wide is the region where the mean-field description applies) and how accurate the mean-field description is far from the percolation point is not completely understood. Another important open question regards the connectivity properties of chemical gels well beyond percolation. 20 Even in the mean-field approximation, several possibilities for the post-gel solutions have been proposed, based on different assumptions on the reactivity of sites located on the infinite cluster. 20,21 Different propositions predict different cluster-size distributions above the gel point and a different evolution with time for the extent of reaction.

FIG. 1: Graphic description of the A and B particles (left) and snapshot of the simulated system (right). The centers of the small spheres locate the bonding sites on the surface of the hard-core particle.
Here we introduce a model inspired by the stepwise polymerization of bifunctional diglycidyl-ether of bisphenol-A (B particles in the following) with pentafunctional diethylenetriamine (A particles). 22 To incorporate excluded volume and shape effects, each type of molecule is represented as a hard homogeneous ellipsoid of appropriate length, whose surface is decorated in a predefined geometry by f identical reactive sites per particle (see Figure 1). In this respect, the model is also representative of colloidal particles functionalized with a limited number of patchy attractive sites, 23 where the selectivity of the interaction is often achieved building on biological specificity. 24,25,26 The off-lattice evolution of the system is studied via event-driven molecular dynamics simulations, using a novel code which specifically extends to ellipsoidal particles the algorithm previously designed for patchy spheres. 27 Differently from previous studies, we do not focus on the critical properties close to the gel point but study in detail the development of the irreversible gelation process and the properties of the cluster size distribution in the pre- and post-gelation regimes.
We find that the dynamic evolution of the system produces an irreversible (chemical) gelation process whose connectivity properties can be described, in a very large window of the extent of reaction, with the Flory-Stockmayer (FS) predictions. 1,2,6 This offers to us the possibility to address, in a well controlled model, the kinetics of the aggregation and to evaluate the extent of reaction at which the breakdown of the Flory post-gel solution takes place.
II. METHOD
We study a 5:2 binary mixture composed of N_A = 480 ellipsoids of type A and N_B = 1200 ellipsoids of type B, for a total of N = 1680 particles. A particles are modeled as hard ellipsoids of revolution with axes a = b = 2σ and c = 10σ and mass m; B particles have axes a = b = 4σ and c = 20σ, and mass 3.4m. Simulations are performed at a fixed packing fraction φ = 0.3. Five (two) sites are rigidly anchored on the surface of the A (B) particles, as described in Fig. 1. Sites on A particles can only react with sites on B particles. Every time, during the dynamic evolution, the distance between two mutually-reactive sites becomes smaller than a predefined distance δ = 0.2σ, a new bond is formed between the particles. To model irreversible gelation, once a bond is formed, it is made irreversible by switching on an infinite barrier at distance r_ij^AB = δ between the sites i and j involved, which prevents both the formation of new bonds at the same sites and the breaking of the existing one. Hence, the newly formed bond cannot break any longer, and the maximum distance between the two reacted sites is constrained to remain smaller than δ. Similarly, the two reacted sites cannot form further bonds with available unreacted sites. The composition of the system and the particle functionality are such that the reactive sites of type A and B are initially present in equal number, f_A N_A = f_B N_B, which in principle allows the formation of a fully bonded state in which all the sites have reacted. This offers a way to properly define the extent of reaction as the ratio p between the number of bonds present in a configuration and the maximum number of possible bonds f_A N_A.

FIG. 2: [...] with the fit-parameter k fixing the time scale. This functional form is expected when any pair of reactive groups in the system is allowed to react, but loops do not occur in finite size clusters. 21
Between bond-formation events, the system propagates according to Newtonian dynamics at temperature T = 1.0. As in standard event-driven codes, the configuration of the system is propagated from one collisional event to the next one. Note that temperature only controls the time scale of exploration of space, by modulating the average particle's velocity. An average over 40 independent starting configurations is performed to improve statistics.
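To make the bonding rule above concrete, the following is a minimal Python sketch of how irreversible site-site bonds and the extent of reaction p could be bookkept. It is not the event-driven algorithm used in the simulations: the class layout, function names and the brute-force distance check are our own simplifications for illustration.

```python
import numpy as np
from itertools import combinations

DELTA = 0.2      # bonding distance delta (in units of sigma)
F_A, F_B = 5, 2  # functionalities of A and B particles

class Site:
    """A reactive site anchored on a particle (positions assumed given)."""
    def __init__(self, particle_id, ptype, position):
        self.particle_id = particle_id
        self.ptype = ptype            # 'A' or 'B'
        self.position = np.asarray(position, dtype=float)
        self.bonded = False           # irreversible once True

def try_form_bonds(sites, bonds):
    """Form an irreversible bond whenever two unreacted, mutually-reactive
    sites (one on an A particle, one on a B particle) are closer than DELTA."""
    for s1, s2 in combinations(sites, 2):
        if s1.bonded or s2.bonded:
            continue                  # each site reacts at most once
        if s1.ptype == s2.ptype:
            continue                  # only A-B bonds are allowed
        if np.linalg.norm(s1.position - s2.position) < DELTA:
            s1.bonded = s2.bonded = True
            bonds.append((s1.particle_id, s2.particle_id))

def extent_of_reaction(bonds, n_A):
    """p = number of bonds / maximum number of possible bonds (f_A * N_A)."""
    return len(bonds) / (F_A * n_A)
```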
III. RESULTS
In the starting configurations no bonds are present by construction. As a function of time, the fraction p of formed bonds (a measure of the state of advancement of the reaction) increases monotonically, until most of the particles are connected in one single cluster (Figure 2). As a result, p saturates around 0.86, despite the fact that an equal number of reactive sites of type A and B is initially present in the system. Flory and Stockmayer 1,6 laid out the basic relations between the extent of reaction and the resulting structure in step polymerizations, on the assumptions that all functional groups of a given type are equally reactive, all groups react independently of one another, and ring formation does not occur in molecular species of finite size. Only when p exceeds a critical value p_c can infinitely large molecules grow. 1 In this respect the FS theory describes the gelation transition as the random percolation of permanent bonds on a loopless lattice. 28 The present model satisfies the conditions of equal and independent reactivity of all reactive sites. The absence of closed bonding loops in finite size clusters is not a priori implemented; as we will show in the following, however, such a condition, favored by the poor flexibility of the bonded particles and their elongated shape, the absence of an underlying lattice and the asymmetric location of the reactive sites, is valid in a surprisingly wide region of p values.
The FS theory predicts the p dependence of the cluster size distribution in the very general case of a mixture of monomers bearing mutually reactive groups. 6 In the present case, the number n_lm of clusters containing l bifunctional particles and m pentafunctional ones is obtained by weighting, with the appropriate powers of p for reacted sites and (1 − p) for unreacted sites, the combinatorial factor

w_lm = (4m)! / [(l − m + 1)! (4m − l + 1)! m!],

and the number of clusters of size s is obtained by summing over all contributions such that l + m = s, i.e., n_s = Σ_{l+m=s} n_lm. As shown in Figure 3a, on increasing p the n_s distribution becomes broader and broader and develops a power-law tail. The theory predicts a gelation transition when p_c = 1/√[(f_A − 1)(f_B − 1)] = 0.5. 1,6 Even close to p = 0.5, the FS prediction (which conforms to the prediction of random percolation on a Bethe (loopless) lattice, where n_s ∼ s^(−2.5) at the percolation threshold) is consistent with the numerical data. On further increasing p (Figure 3b), the distribution of finite size clusters progressively shrinks, and only small clusters survive. Data show that Eq. 1, with no fitting parameters, predicts rather well the numerical distribution at any extent of polymerization, both below and above the point where the system is expected to percolate, including details such as the local minimum at s = 2.
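As an illustration of how the cluster statistics discussed above can be extracted from a configuration, the hypothetical helpers below build connected components from a bond list with a union-find structure, returning the cluster size distribution n_s, and evaluate the mean-field gel point p_c = 1/√[(f_A − 1)(f_B − 1)]. This is a generic sketch, not the analysis code used for the paper.

```python
from collections import Counter
from math import sqrt

def cluster_size_distribution(n_particles, bonds):
    """Return n_s: number of clusters containing s particles,
    computed from a list of (i, j) particle-particle bonds."""
    parent = list(range(n_particles))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for i, j in bonds:
        union(i, j)

    sizes = Counter(find(i) for i in range(n_particles))  # root -> cluster size
    return Counter(sizes.values())                        # size s -> n_s

def flory_stockmayer_pc(f_a, f_b):
    """Mean-field gel point for an A_fA + B_fB mixture."""
    return 1.0 / sqrt((f_a - 1) * (f_b - 1))

# Example: f_A = 5, f_B = 2 gives p_c = 0.5
print(flory_stockmayer_pc(5, 2))
```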
To compare with the mean-field prediction of gelation at p_c = 0.5, we examine the connectivity properties of the aggregates for each studied value of p, searching for the presence of clusters which are infinite under periodic boundary conditions. We find that configurations at p = 0.497 ± 0.008 have not yet developed a percolating structure, while configurations at p = 0.513 ± 0.007 have. Hence, we locate the gel point at p_c = 0.505 ± 0.007, in close agreement with the theoretical mean-field expectations. Beyond this point, the material which belongs to the infinite (percolating) network N_∞ constitutes the gel, while the soluble material formed by the finite clusters which remain interspersed within the giant network constitutes the sol. Figure 4a shows that the fraction of gel P_∞ = N_∞/N, and even its partition between particles of type A (P_A,∞ = N_A,∞/N) and B (P_B,∞ = N_B,∞/N), calculated according to the FS theory, 29 properly represents the simulation results throughout the polymerization process. Indeed, the proportion of B particles to A particles in the gel and in the sol is a function of p (see inset). The relative amount of B particles in the sol (N_B,sol/N_A,sol) increases as a consequence of the preferential transfer of the A particles (having more reactive sites) to the gel, in such a way that the fraction p_sol of sites B in the sol that have reacted (extent of reaction in the sol) differs from the total fraction p of sites B reacted (extent of reaction in the system). The constitution of the sol (Figure 3b) turns out to be the same as that of a smaller system made of N_A,sol particles of type A and N_B,sol particles of type B reacted up to the extent p_sol. 1,30 The evolution of the cluster size distribution can be quantified by the number-average (x_n) and weight-average (x_w) cluster sizes of the sol, defined as x_n = Σ_s s n_s / Σ_s n_s and x_w = Σ_s s² n_s / Σ_s s n_s. The numerical results and the FS theoretical predictions are shown in Figure 4b. Both averages increase before gelation; then, they regress in the sol existing beyond the gel point, since large clusters are preferentially incorporated into the gel network. While x_n increases only slightly up to the gel point, never exceeding 3.5, x_w increases sharply in the proximity of p_c and decreases just as sharply beyond this point, consistently with the fact that x_w is singular at percolation, being dominated by large clusters. Again, simulation data agree very well with FS predictions. Discrepancies between theory and simulation (which reveal the mean-field character of the FS theory) only concern the range of p very near p_c, suggesting that for this model the crossover from mean-field to percolation is very close to the gel point, i.e., the Ginzburg zone 19 near the gel point, where non-mean field effects are important, is very limited. A finite-size study very close to the critical point would be required to accurately locate the percolation point and the critical exponents, a calculation beyond the scope of the present work.
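A short generic sketch of the number- and weight-average cluster sizes defined above, computed from a dictionary mapping cluster size s to the number of clusters n_s (for instance, the output of the union-find sketch above); the function name is ours.

```python
def average_cluster_sizes(n_s):
    """Number-average x_n = sum_s s*n_s / sum_s n_s and
    weight-average x_w = sum_s s^2*n_s / sum_s s*n_s of the sol clusters."""
    m0 = sum(n for n in n_s.values())              # sum_s n_s
    m1 = sum(s * n for s, n in n_s.items())        # sum_s s n_s
    m2 = sum(s * s * n for s, n in n_s.items())    # sum_s s^2 n_s
    return m1 / m0, m2 / m1

# Example: three monomers, one dimer and one trimer
print(average_cluster_sizes({1: 3, 2: 1, 3: 1}))   # x_n = 1.6, x_w = 2.0
```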
From a physical point of view, the change from mean-field to percolation universality class is rooted in the presence of bonding loops in the clusters of finite size, which pre-empts the possibility to predict the cluster size distribution. The realistic estimate of the percolation threshold and the agreement between theory and simulation (Fig. 3) suggest that the present model strongly disfavors the formation of loops in finite clusters, at least for the cluster sizes probed in this finite-size system. As a test, we evaluate the total number of finite (sol) clusters n_sol = Σ_s n_s as a function of the extent of reaction. If finite clusters do not contain closed loops, n_sol equals the number of particles in the sol minus the number of bonds, since each added bond decreases the number of clusters by one. This applies equally to the system preceding gelation, or to the sol existing beyond the gel point. Thus, at p < p_c (pre-gelation) the relation between n_sol and p is linear, i.e. n_sol = N − 2 N_B p. At p > p_c (post-gelation), n_sol can be calculated as n_sol = N_sol − 2 N_B,sol p_sol, where N_sol is the number of particles in the sol fraction (N_B,sol of which bear reactive sites of type B), and p_sol is the reacted fraction of sites B in the sol. Hence, the relation between n_sol and p crosses to a nonlinear behavior, so that the number of clusters becomes one when p = 1. As shown in Figure 4c, the number of finite clusters found in the simulation data conforms to the theoretical expectation for all p values, both below and above the gel point. Hence, as a first approximation, loops are only present in the infinite (percolating) cluster and do not significantly alter the distribution of the finite size clusters, both below and above percolation. The difference between n_sol found in simulation and the value predicted by the FS theory counts the number of loops in the sol, n_loop. Such a quantity is shown in the inset of Figure 4c. The maximum value of n_loop, achieved for p ∼ p_c, corresponds to 0.2% of the total number of bonds. This demonstrates that intramolecular bonds within finite clusters can be neglected, consistent with the Flory hypothesis for the post-gelation regime 20 . Figure 4c also shows that the linear relation between n_sol and p remains valid after the gel point (up to p ≈ 0.6). This finding is in full agreement with recent experimental studies 22,31,32 on the polymerization of bifunctional diglycidyl-ether of bisphenol-A with pentafunctional diethylenetriamine, also suggesting that the number of cyclic connections in the infinite cluster is negligible well above p_c.
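The loop count used above follows from a standard graph identity: in any undirected graph the number of independent cycles equals E − V + C, where E is the number of edges (bonds), V the number of vertices (particles) and C the number of connected components (clusters). A minimal sketch, with an argument layout of our choosing, that can consume the n_s dictionary produced by the earlier cluster_size_distribution helper:

```python
def count_sol_loops(n_particles_sol, bonds_sol, n_s_sol):
    """Number of closed bonding loops in the sol.
    For a loopless (tree-like) sol, n_sol = N_sol - bonds_sol exactly,
    so any excess reveals intramolecular bonds:
        n_loop = E - V + C = bonds_sol - N_sol + n_sol."""
    n_clusters = sum(n_s_sol.values())
    return len(bonds_sol) - n_particles_sol + n_clusters

# Example: 4 particles bonded in a ring -> 4 bonds, 1 cluster, 1 loop
ring_bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(count_sol_loops(4, ring_bonds, {4: 1}))   # prints 1
```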
As a further confirmation of the absence of closed loops we compare the time evolution of p with the prediction of the mean-field kinetic modeling of polymerization, based on the solution of the Smoluchowski coagulation equation. 33,34 For loopless aggregation, p(t) is predicted to follow a simple kinetic law in which the fit-parameter k, which has the meaning of a bond kinetic constant, fixes the time scale of the aggregation process. The time evolution of p is found to perfectly agree with the theoretical predictions 21 (see Figure 2) up to p ≈ 0.6, i.e. beyond p_c. While the prediction would suggest that p(t → ∞) = 1 (dashed line in Figure 2), the simulation shows that the formation of a percolating structure prevents the possibility of completing the chemical reaction, leaving a finite number of unreacted sites frozen in the structure. As shown above (Figure 3), even in this frozen state the cluster size distribution is provided by Flory's post-gel hypothesis. Such a feature is not captured by the mean-field Smoluchowski equation, in which spatial information in the kernels is neglected.
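The kinetic law referred to above is not reproduced here. A commonly used closed form for loopless, second-order step polymerization is p(t) = kt/(1 + kt), which follows from dp/dt = k(1 − p)^2; assuming that form (our assumption, not necessarily the paper's Eq. 2), a fit of the measured extent of reaction could look like the following sketch, with hypothetical data values.

```python
import numpy as np
from scipy.optimize import curve_fit

def p_of_t(t, k):
    """Assumed loopless second-order kinetics: p(t) = k t / (1 + k t)."""
    return k * t / (1.0 + k * t)

# Hypothetical measured data: times and extents of reaction p
t_data = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])
p_data = np.array([0.0, 0.33, 0.50, 0.71, 0.83, 0.86])   # saturates below 1

(k_fit,), _ = curve_fit(p_of_t, t_data, p_data, p0=[0.5])
print(f"fitted bond kinetic constant k = {k_fit:.3f}")
```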
IV. CONCLUSIONS
A binary mixture of patchy hard ellipsoids undergoing chemical gelation displays a very large interval of the extent of reaction in which parameter-free mean-field predictions are extremely accurate. The connectivity properties of the model are properly described -without any fitting parameter -both below and above percolation by the mean-field loopless classical FS theory. 1,21 The mean-field cluster size distribution for the sol component is found to be valid for all values of the extent of reaction, both below and above the gel point, suggesting that for the present model, the Flory's hypothesis for the post-gelation regime properly describes the irreversible aggregation phenomenon, despite the explicit consideration of the excluded volume.
The absence of loops in finite size clusters, which is not assumed by the model, results from the specific geometry of the bonding pattern and by the presence of the excluded volume interactions, disfavoring the formation of ordered bonding domains. Hence, the geometry of the particles and the location of the reactive sites on them may play a significant role in the stabilization of the mean-field universality class with respect to the percolation universality class, 35 locating the crossover between the two classes 19 very close to the gel point. The present study shows that irreversibly aggregating asymmetric hard-core patchy particles, even if excluded volume effects are properly taken into account, may provide a close realization of the FS predictions in a wide range of p values. The model thus offers a starting point -for which theoretical predictions are available-for further investigations of the gelation process and for a more precise control over the structure and connectivity of the gel state. In particular, a full and detailed structural information can be known along with the dynamics of the system, which is potentially useful to investigate the relation between structural heterogeneity and heterogeneous dynamics, 32 and to shed light on the microscopic aspects of the dynamic crossover from short 36 to long relaxation times, 37 during irreversible polymerization.
While the structural properties are all well-described by the FS theory, the evolution of the extent of reaction, modeled via the coagulation Smoluchowski equation, is properly described by the theory only in the pre-gelation region. After gelation, kinetic constraints due to the absence of mobility of the reactive sites anchored to the percolating cluster or to smaller clusters trapped inside the percolating matrix prevent the completion of the reaction and the extent of reaction freezes (to p ≈ 0.86 in the present case) before reaching one (as Eq. 2 would predict). A proper modeling of the long-time behavior will require the insertion of spatial information inside the kernels entering the Smoluchowski equation. The freezing of the extent of reaction at long times correspondingly freezes the cluster size distribution to that predicted by Flory for the reached p value.
In the present model, the entire polymerization process proceeds via a sequence of FS cluster size distributions, determined by p(t). Recently, it has been shown that the FS theory properly describes also equilibrium clustering in patchy particle systems when p is a function of temperature and density. 38 It is thus tempting to speculate that for loopless models, irreversible evolution can be put in correspondence with a sequence of equilibrium states which could be sampled in the same system for finite values of the ratio between temperature and bonding depth. If this is indeed the case, chemical gelation could be formally described as a deep quench limit of physical gelation. This correspondence would facilitate the transfer of knowledge from recent studies of equilibrium gels 39,40 to chemical ones. Concepts developed for irreversible aggregation of colloidal particles, like diffusionand reaction-limited cluster-cluster aggregation, could be connected to chemical gelation. Work in this direction is ongoing.
We acknowledge support from MIUR-PRIN. We thank P. Tartaglia for interesting discussions.
Experimental investigation for solar thermal hybrid installs to inverter air conditioning vapour-compression cycle system
This study presents the development and integration of a solar thermal hybrid unit connected to an inverter air conditioning system. This experimental system is intended as a useful addition to a renewable-energy future: the device supplies the same cooling load with significantly less electricity demand. The apparatus comprises a solar unit and a solar collector combined with a DC compressor that compresses the refrigerant in the air conditioning system, which effectively reduces the air conditioner's electricity consumption. The solar collector unit is installed between the compressor and the condenser, where it provides part of the compression pressure and additional heating by further superheating the refrigerant. The higher pressure and larger temperature difference enhance the condensation process in the condenser, resulting in high-pressure liquid refrigerant. This configuration greatly reduces energy consumption by reducing the load on the electric compressor.
Introduction
The increasing demand for electric power by consumers has led to higher electricity tariffs during the hot summer season in Iraq [1] & [2]. In Iraq, the renewable energy available from solar and wind has not been effectively exploited because of excessive heat that reaches 55°C [3] & [4]. This heat reduces solar cell capacity to 45% of its rated value. There is a real need for air conditioning based on compression systems, which have a high electricity consumption [5] & [6]. Many solutions and applications use solar energy in air conditioning [5] & [7]. In Iraq, the yearly average solar radiation ranges between a maximum of 5596 Wh/m2/day and a minimum of 4266 Wh/m2/day [8]. Solid desiccants reduce the moisture content of the ambient air: the desiccant wheel rotates slowly while the air flow is separated into two sectors, one to be dehumidified and the other to regenerate the wheel [9]. Liquid desiccants behave in the same way as solid desiccants; their water vapour pressure is a function of temperature and moisture content [10] & [11]. Most solar-driven air conditioning applications have not yet been established as an alternative to conventional compression cycles, because of their effect on the coefficient of performance [12] and on cooling capacity at rising summer temperatures, especially under extreme heat conditions such as those in Iraq. The aim of this experimental work is to improve the coefficient of performance and reduce the power consumption of the air conditioner to less than its original value by adding a solar heat exchanger to the system and using a split inverter air conditioner (12000 Btu/h). In this work, solar thermal radiation is used to make up the energy supplied to the system, keeping the same required value (12000 Btu/h) while lowering the energy required from the grid-connected compressor to its minimum speed (3105 Btu/h). The solar thermal input is provided by the heat exchanger placed after the DC compressor.
The apparatus described
The experimental rig was built and installed in the laboratory unit at the Faculty of Engineering / University of Al-Qadisiyah, Iraq [13]. A vapour-compression cycle with a DC compressor (wall-type split unit) was used. An external heat exchanger immersed in a water bath was connected between the DC compressor and the condenser. The refrigerant flow is controlled by valves at the inlet and outlet of the heat exchanger, and by a bypass valve that, when closed, forces the flow through the heat exchanger (see Table 1). LM35 semiconductor temperature sensors with high accuracy (±0.5) were used, assuming a linear relationship between voltage and temperature. Table 2 shows the distribution of the eight sensors, which are read by an Arduino microcontroller; after calibration, the unit is connected to the computer and the readings are recorded and exported as an Excel file. The measured power consumption, voltage and current values are shown in 'figure 1', 'figure 2' and 'figure 3'. The sensor positions are: T1: before the compressor; T2: discharge pipe, after the compressor; T3: after the condenser, before expansion; T4: after expansion, before the evaporator; Tx = T5: after the heat exchanger, before the condenser; Ta = T6: ambient temperature; Ts = T7: the processed temperature at the condenser; Tw = T8: temperature of the heated water in the water bath.
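On the computer side, the data-acquisition path described above (eight LM35 sensors read by an Arduino and logged on a PC) could be mirrored by a small logging script such as the hypothetical sketch below; the serial port name, baud rate and line format are assumptions, and in the actual rig the readings were exported to an Excel file.

```python
import csv
import time
import serial  # pyserial

PORT, BAUD = "COM3", 9600           # assumed Arduino serial settings
SENSOR_NAMES = ["T1", "T2", "T3", "T4", "T5", "T6", "T7", "T8"]

with serial.Serial(PORT, BAUD, timeout=2) as link, \
     open("temperatures.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["time_s"] + SENSOR_NAMES)
    start = time.time()
    while True:
        line = link.readline().decode(errors="ignore").strip()
        values = line.split(",")     # assumed: 8 comma-separated readings per line
        if len(values) != len(SENSOR_NAMES):
            continue                 # skip malformed lines
        writer.writerow([round(time.time() - start, 1)] + values)
```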
Experimental procedure
The experiment was conducted by recording the temperature readings from all sensors under standard laboratory conditions. In the first case, the heat exchanger is excluded from the cycle by opening the bypass valve and closing the inlet and outlet valves of the external heat exchanger. In the second case, the bypass valve is closed and the cycle runs entirely through the external heat exchanger, with its inlet and outlet valves open.
Measurements
Readings were recorded in two stages, with and without the water bath, logging the temperatures, the high and low pressures of the cooling cycle, and the power consumption. The temperatures were recorded from the unsteady initial state until a steady state was reached, as shown for the two cases in 'figure 4' and 'figure 5'.
Calculations and results
The experimental procedure showed the effect of the solar-assisted arrangement on the cooling cycle, the coefficient of performance, the power consumption and the delivered capacity. The Engineering Equation Solver (EES) software was used for the calculations [14].
Calculations with the solar-assisted compressor under standard comfort conditions
The compressor operated at a voltage of 104 V and a current of 1.178 A. The enthalpy values were calculated from the intersection of the high- and low-pressure values, using equations (6)-(13) together with the temperatures recorded before and after each component of the compression system with solar assistance. The cycle diagram was then drawn and, taking into account the compressor efficiency, the real work was extracted from the diagram together with the solar contribution from the solar collector and the net work; from the recorded voltage and current consumed by the compressor, the cooling coefficient of performance of the refrigerant was then calculated, as shown in 'figure 7'. The coefficient of performance was obtained in both cases, with and without solar assistance, and the two were compared. The higher coefficient of performance when the solar-assisted unit is running, together with the lower current drawn by the compressor, reduced the compressor consumption relative to the planned cooling cycle for refrigerant R410a, as shown in Table 3.
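To make the calculation explicit, the sketch below computes the refrigeration effect, the compression work and the coefficient of performance from enthalpy values at the cycle state points. The enthalpies and the mass flow rate are placeholders, not the values measured in this study; only the compressor voltage and current are taken from the text.

```python
# Hypothetical R410a enthalpies at the cycle state points, in kJ/kg
h1 = 425.0   # evaporator outlet / compressor suction
h2 = 455.0   # compressor discharge (after solar-assisted superheating)
h3 = 250.0   # condenser outlet / before expansion
h4 = h3      # after the expansion valve (isenthalpic)

m_dot = 0.005                      # assumed refrigerant mass flow rate, kg/s

q_evap = m_dot * (h1 - h4)         # cooling capacity, kW
w_comp = m_dot * (h2 - h1)         # compression work from the refrigerant side, kW

voltage, current = 104.0, 1.178    # measured compressor electrical input
w_elec = voltage * current / 1000  # electrical power, kW

cop_cycle = q_evap / w_comp        # COP from the refrigerant enthalpies
cop_elec = q_evap / w_elec         # COP referred to electrical consumption
print(f"Q_evap = {q_evap:.2f} kW, COP_cycle = {cop_cycle:.2f}, COP_elec = {cop_elec:.2f}")
```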
Conclusions
The solar air-conditioning system is a valuable innovation that advances the use of solar energy while benefiting the environment. This project provides a comprehensive comparison of the energy savings that can be achieved for air-conditioner capacities ranging from 2,388 Btu/h to 24,000 Btu/h. The system can achieve up to 45% energy saving during the day and thus dramatically reduces the daytime peak electricity load. At the same time, the DC compressor contributes up to 25% energy saving overnight. The main modification needed is the DC compressor. In addition, thermal storage and a photovoltaic system are proposed to improve future cycle performance.
Effect of TCP Buffer Size on the Internet Applications
The development of applications such as online video streaming, collaborative writing, VoIP, and text and video messengers is increasing. The number of such TCP-based applications is growing with the increasing availability of the Internet. The TCP protocol works at the fourth layer of the Internet model and provides many services such as congestion control, reliable communication, and error detection and correction. Newer protocols such as the stream control transmission protocol (SCTP) have been proposed with more features than TCP; however, due to its wide deployment, TCP is still the most widely used. TCP creates segments and transmits them to the receiver. To recover from errors, TCP keeps copies of the segments in the sender buffer. Similarly, data is saved in the receiver buffer before it is delivered to the application layer. The sizes of the TCP sender and receiver buffers can be varied, and this matters because many applications run on smartphones equipped with a small amount of memory. In applications such as online video streaming, some errors are tolerable and retransmission of the data is not necessary; in such cases a small buffer is useful. For text transmission, however, TCP must completely reassemble the message before delivering it to the application layer; in that case a large buffer is useful and also minimizes the buffer blocking problem of TCP. This paper provides a detailed study of the impact of TCP buffer size on smartphone applications. A simple scenario is implemented in the NS2 simulator for the experimentation. Keywords—TCP; sender buffer; receiver buffer; stream control transmission protocol (SCTP); error detection and correction
I. INTRODUCTION
The OSI model provides a step-by-step characterization of computer and telecommunication systems. The transport layer is one of its main layers. It provides congestion control, error control, flow control, a stronger checksum, and many other features. In summary, the transport layer is responsible for the successful delivery of a process's data from a sender to a receiver. All of these features are provided by the TCP protocol [1]. The two other well-known transport-layer protocols are the User Datagram Protocol (UDP) [2] and SCTP [3] (see also [4]). UDP is mainly useful for applications such as video streaming. It is a less complicated protocol, due to its header format, but it is not preferable for applications where reliability is mandatory. SCTP is a newer protocol and is still in the development phase. The key features in the design of the transport layer are the following: • Out-of-order delivery, for faster data delivery to the application layer. In this mode the SCTP receiver does not wait for the complete message; it simply forwards data as soon as it is received. This feature is also available in UDP, but not in TCP. One of the main applications of out-of-order delivery is online video or audio streaming. Both sequenced and out-of-order delivery may lose a few segments during online streaming, but the sequencing overhead of out-of-order delivery is lower.
• Connection orientation is available in TCP and SCTP but not in UDP. With this feature, the sender and receiver perform a connection-establishment procedure before data transmission.
In UDP, all data units travel independently and may be forwarded by different routers.
• Connection formation is performed by SCTP and TCP before data transmission. The connection-formation procedure requires verification of the sender and receiver, which also improves the security of these protocols. TCP and SCTP use 3-way and 4-way handshake procedures, respectively, for connection formation. UDP provides no connection-formation service.
• Connection termination is also performed by TCP and SCTP after the successful transmission of data from sender to receiver. In this step the sender and receiver agree to close the session. UDP provides no connection-termination service.
• Reliability by means of acknowledgments to the sender. This feature is available in TCP and SCTP but not in UDP. Reliability is one of the main factors that affects buffer size: for example, the sender keeps a copy of each transmitted segment in the sender buffer until an acknowledgment is received or the acknowledgment timer expires.
• Flow control to regulate the sender's transmission rate. With flow control, the protocol reduces the chances of network congestion and of other errors such as buffer overflow. Through flow control, TCP tries to keep the sender and the receiver synchronized, which is needed, for example, when the sender is much faster than the receiver.
Many of the features of the transport-layer protocols are summarized in Fig. 1. Despite all the good features of SCTP, TCP is currently the fully operational protocol over the Internet. In TCP, the segments that are queued for transmission and the segments that are received are stored in memory areas called the TCP sender and receiver buffers. Buffer size plays a major role in the performance of TCP. If the receiver buffer is too small, TCP may be unable to complete a message by combining its segments at the receiver, and no buffer space is left for the parallel TCP flows of other applications; this condition is called receiver buffer blocking. Similarly, on the sender side, a small buffer limits the transmission rate because fewer segments can be held in the sender buffer. With the increasing number of applications such as video streaming, messengers, online chats, VoIP, collaborative scientific projects, and wireless sensing and monitoring, it is difficult to decide on the buffer size requirement, because some applications require reliability and some do not. Further, the development of smartphone apps is also increasing, and it is difficult to determine at development time which type of data processing an app will carry out, because of real-time data processing and software updates. To help the developers of smartphone apps, consideration of the protocol's buffer size is necessary. Additionally, researchers are working on parallel data transmission using more than one NIC card, where throughput increases by a factor depending on the number of NIC cards; for this purpose a new version of TCP, called Multipath TCP (MPTCP) [5], is under development. This research work aims to provide experimentation with the TCP protocol for various buffer sizes. The proposed scenario is a multihop network. Background traffic is also added to make the scenario more like real-life networks, where the bandwidth is shared by several users. The experimental results are also useful for the evaluation of MPTCP. The simulation is carried out in NS2 with varying sizes of the sender and receiver buffers.
The rest of the paper is organized as follows. Related work is discussed in Section II. The experimental setup and configuration details are given in Section III. The analysis of the results is presented in Section IV. The conclusions are summarized in Section V.
II. RELATED WORK
The choice of buffer size affects the performance of TCP. For example, suppose the receiver buffer size is equal to 50 segments and the sender transmits two processes, each consisting of 30 segments. With simultaneous transmission of both processes, the receiver buffer is filled by 50 segments, 25 from each process. Both processes arrive incomplete; the receiver keeps waiting for the remaining segments, and neither process can be delivered to the application layer. This situation is called receiver buffer blocking. Many researchers have reported the problem of receiver buffer blocking while using TCP [7], [8], [9], [18], [19]. Researchers have also suggested the use of retransmission policies to resend the missing data of one process. However, such retransmission policies are mainly beneficial for the parallel transmission of data over more than one link; with a single link between a sender and receiver, a retransmission policy improves performance only slightly.
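The following Python sketch is a toy illustration, not part of the original experiments, of the receiver buffer blocking scenario described above: two 30-segment processes interleaved into a 50-segment receiver buffer leave both incomplete, so neither can be delivered to the application layer.

```python
from itertools import zip_longest

BUFFER_CAPACITY = 50          # receiver buffer size in segments
PROCESS_SIZE = 30             # segments per process

# Interleave the segments of two processes, as with simultaneous transmission
segments = []
for a, b in zip_longest(
        [("P1", i) for i in range(PROCESS_SIZE)],
        [("P2", i) for i in range(PROCESS_SIZE)]):
    if a is not None:
        segments.append(a)
    if b is not None:
        segments.append(b)

receiver_buffer = segments[:BUFFER_CAPACITY]   # buffer fills up and blocks

for proc in ("P1", "P2"):
    received = sum(1 for p, _ in receiver_buffer if p == proc)
    complete = received == PROCESS_SIZE
    print(f"{proc}: {received}/{PROCESS_SIZE} segments, deliverable={complete}")
# Each process ends up with only 25 of its 30 segments, so neither is deliverable.
```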
Buffer splitting techniques were proposed by the researchers in [10] and [11]. They proposed two kinds of splitting. The first divides the buffer space equally among the destinations or paths. In real networks, data travelling to the receiver over different paths takes different amounts of time, so data on a slow path (a path with smaller bandwidth or longer propagation delay) may hold up the data that is already in the receiver buffer, while the faster paths occupy more buffer space and may cause buffer overflow. The second technique divides the buffer among the different processes according to their outstanding data, where the outstanding data is the data that has already been transmitted by the sender but not yet acknowledged. The work in [12] suggested carrying the available buffer space in the acknowledgment segment, because this value represents the exact free space of the buffer; normally, TCP advertises the buffer space in the acknowledgment segment. The relationship between the buffer size and the round-trip time (RTT) was investigated by Want et al. [13]; according to their findings, the relationship is linear.
Work on RTT and other path performance characteristics, such as bandwidth, was presented by the researchers in [14]. A technique of buffer splitting at the sender and receiver is employed in order to reduce the buffer blocking problem. The splitting is performed on the basis of the RTT: destinations with a longer RTT (slow paths) are allowed to use a small portion of the buffer, whereas destinations with a shorter RTT (fast paths) are configured to use a large portion of the buffer space, both at the sender and at the receiver. Similar work to improve performance by minimizing buffer blocking is also presented in [15], where transmission-scheduling techniques are proposed; the different data flows are scheduled by a priority value calculated from the outstanding data of each flow. The researchers in [16] suggest that the design of the routing protocol is also very important.
III. TCP IMPLEMENTATION AND CONFIGURATION
NS2 [17] is used for the implementation and evaluation of TCP. For the installation of NS2, the Ubuntu 14.04 OS was installed in VirtualBox. A multihop network is proposed for the experimentation. In all of the experiments, the throughput is measured with node 0 as the source and node 6 as the destination. Each simulation is repeated 10 times and the results are collected as average values. The implementation code of TCP is already available in NS2; however, it must be configured according to the proposed topologies given below. The key parameters of the simulation are presented in Table I.
A. Topology 1
In this topology, seven nodes are configured and connected as shown in Fig. 2. Nodes 0 and 1 are configured with TCP agents as source nodes. Node 6 is the destination node and is configured with a TCP sink agent. The main connection monitored for throughput is 0-6, whereas 1-6 simply adds an additional TCP flow.
B. Topology 2
In the second topology, nine nodes are used. Nodes 7 and 8 are attached to the intermediate nodes 2 and 3. The main purpose of these additional nodes is to provide background traffic for TCP. Topology 2 is shown in Fig. 3. The remaining parameters and their configuration are left at the default values in NS2.
A. Experiment-1
In this experiment, three sets of simulations are performed, each with a different value of the propagation delay. The sender buffer varies from 50 to 500 segments. It is observed that when the delay is small, as in Fig. 4, the TCP throughput is directly proportional to the bandwidth, i.e. 10 Mbps. When the delay increases, as in Fig. 5 and 6, the medium is not fully occupied; hence the throughput at 10 Mbps and 5 Mbps is the same, although it is still greater than the throughput at 1 Mbps. It is also clear that increasing the buffer size beyond a point does not significantly affect the data transmission rate; a buffer size of 200-250 segments is enough to reach the maximum throughput.
B. Experiment-2
Topology 2 is used in this experiment. A TCP flow is defined from node 3 to node 8 and its effect on the flow between nodes 0 and 6 is observed. The same bandwidth and delay values as in Experiment-1 are applied. The trends are very similar to Experiment-1; the results are presented in Fig. 7, 8 and 9. When the buffer is smaller than 200 segments there is a steady improvement in throughput; with a buffer larger than 200 packets, the maximum throughput is reached.
C. Experiment-3
In this experiment, the simulation of Experiment-2 is extended to very large buffer sizes, from 1000 to 10000 packets. The delay is set to 10 ms, and the experiment is repeated with different bandwidth values, i.e. 1 Mbps, 5 Mbps and 10 Mbps. This experiment shows that such a large buffer is not useful in the proposed scenario: the throughput remains the same. In Fig. 10, the throughput at 1 Mbps, 5 Mbps and 10 Mbps is 50 Kbps, 0.7 Mbps and 1.2 Mbps, respectively.
D. Experiment-4
In this experiment, Topology-2 is used, but the changes are made to the receiver buffer instead of the sender buffer. According to the investigations, when the delay is small the sender transmits data as long as it is available, so there are few occasions of packet loss on the 0-6 TCP flow and the output remains at the same value, as shown in Fig. 11 and 12, where different bandwidth values are used. In the case of a longer delay of 100 ms, a small buffer does not reach the higher throughput, but as the buffer size increases the throughput also increases. Beyond a buffer size of 250 packets, additional buffer space does not increase the throughput.
E. Experiment-5
The last experiment is carried out by changing the buffer size at both the sender and the receiver, from 50 to 500 packets. It is observed that when the buffer is small the throughput is low; increasing the buffer size increases the throughput, but only up to a size of 100 packets. Beyond 100 packets, additional buffer space does not increase the throughput. The results of Experiment-5 are shown in Fig. 13.
In all of the experiments, beyond a certain buffer size the performance of TCP remains the same in terms of throughput. According to Experiment-5, the significant amount of buffer space is 100 packets, which is equal to the bandwidth-delay product; in this experiment, the bandwidth-delay product is 10 Mbps * 10 ms = 100K. A buffer size of twice the bandwidth-delay product is also suggested in [6].
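As a quick check of this sizing rule, the following Python sketch computes the bandwidth-delay product for the values used above and converts it into a packet count. The 1000-bit packet size is an assumption made here so that the arithmetic matches the roughly 100-packet knee reported above; it is not stated in the original experiments.

```python
def bdp_packets(bandwidth_bps, delay_s, packet_bits):
    """Bandwidth-delay product in bits and as an equivalent number of packets."""
    bdp_bits = bandwidth_bps * delay_s
    return bdp_bits, bdp_bits / packet_bits

# 10 Mbps link with 10 ms delay, as in Experiment-5
bdp_bits, packets = bdp_packets(10e6, 10e-3, packet_bits=1000)
print(f"BDP = {bdp_bits / 1e3:.0f} kbit ~= {packets:.0f} packets")
# With the assumed packet size this matches the ~100-packet knee observed above;
# the rule from [6] would then suggest a buffer of roughly twice that, ~200 packets.
```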
V. CONCLUSION
Application development using Android and other platforms is increasing. Applications such as video/audio streaming, online collaboration, VoIP and messengers are the need of the time. Some of them, such as collaborative writing projects, require sequenced delivery, whereas for others, such as online video streaming, sequenced delivery is not the priority and the best and fastest delivery is important. Several protocols are available to deal with sequenced and out-of-order delivery of data, such as UDP, TCP and SCTP, and TCP is the most widely used protocol over the Internet. Depending on the type of application, the buffer space required at the sender and receiver differs, and if the buffer size is not chosen properly, problems such as buffer blocking and buffer overflow may occur. This work provides detailed experimentation with TCP for different buffer size options. According to the results of the simulations over a multihop scenario, a very large buffer does not increase the throughput, while a buffer that is too small degrades the performance of TCP. The findings suggest that a buffer size of twice the bandwidth-delay product is suitable for TCP flows. In future, this work may be extended to the upcoming version of TCP called MPTCP.
Inverse subspace iteration for spectral stochastic finite element methods
We study random eigenvalue problems in the context of spectral stochastic finite elements. In particular, given a parameter-dependent, symmetric positive-definite matrix operator, we explore the performance of algorithms for computing its eigenvalues and eigenvectors represented using polynomial chaos expansions. We formulate a version of stochastic inverse subspace iteration, which is based on the stochastic Galerkin finite element method, and we compare its accuracy with that of Monte Carlo and stochastic collocation methods. The coefficients of the eigenvalue expansions are computed from a stochastic Rayleigh quotient. Our approach allows the computation of interior eigenvalues by deflation methods, and we can also compute the coefficients of multiple eigenvectors using a stochastic variant of the modified Gram-Schmidt process. The effectiveness of the methods is illustrated by numerical experiments on benchmark problems arising from vibration analysis.
1. Introduction. Eigenvalue analysis plays an essential role in many applications, for example, dynamic response of structures, stability of flows, and nuclear reactor criticality calculations. In traditional approaches, the physical characteristics of models are considered to be known and the eigenvalue problem is deterministic. However, in many important cases there is uncertainty, for example, due to material imperfections, boundary conditions or external loading, and the exact values of physical parameters are not known. If the parameters are treated as random processes, the associated matrix operators have a random structure as well, and the uncertainty is translated into eigenvalues and eigenvectors. Techniques used to solve this class of problems include Monte Carlo methods [19,22], which are known to be robust but slow, and perturbation methods [12,13,24,30], which are limited to models with low variability of uncertainty.
In this study, we explore the use of spectral stochastic finite element methods (SS-FEM) [5,14,34] for the solution of eigenvalue problems. The methods are based on an assumption that the stochastic process is described in terms of polynomials of random variables, and they produce discrete solutions that, with respect to the stochastic component, are also polynomials in these random variables. This framework is known as the generalized polynomial chaos (gPC) [5,35]. There are two main ways to use this approach: stochastic Galerkin finite elements and stochastic collocation (SC). The first method translates the stochastic problem by means of Galerkin projection into one large coupled deterministic system; the second method samples the model problem at a predetermined set of collocation points, which yields a set of uncoupled deterministic problems. Although numerous algorithms for solving stochastic partial differential equations by SSFEM have been proposed, the literature addressing eigenvalue problems is limited. Verhoosel et al. [29] proposed an algorithm for inverse iteration in the context of stochastic Galerkin finite elements, and Meidani and Ghanem [17,18] proposed stochastic subspace iteration using a stochastic version of the QR algorithm. In alternative approaches, Ghanem and Ghosh [4,7] proposed two numerical schemes: one based on the Newton-Raphson method, and another based on an optimization problem (see also [6,23]), Pascual and Adhikari [21] introduced several hybrid perturbation-Polynomial Chaos approaches, and Williams [31,32,33] presented a method that avoids the nonlinear terms in the conventional method of stochastic eigenvalue calculation but introduces an additional independent variable.
We formulate a version of stochastic inverse subspace iteration which is based on the stochastic Galerkin finite element method. We assume that the symmetric positive-definite matrix operator is given in the form of a polynomial chaos expansion, and we compute the coefficients of the polynomial chaos expansions of the eigenvectors and eigenvalues. We also compare this method with the stochastic collocation method in the larger context of spectral stochastic finite element methods. In particular, we use both these methods to explore stochastic eigenvalues and give an assessment of their accuracy. Our starting point for stochastic inverse subspace iteration is based on [18,29]. In order to increase efficiency of the algorithm, we first solve the underlying mean problem and we use the solution as the initial guess for the stochastic inverse subspace iteration, which computes a correction of the expected value of the eigenvector from the mean and coefficients of the higher order terms in the gPC expansion. The gPC coefficients of the eigenvalue expansions are computed from a stochastic Rayleigh quotient. We also show that in fact the Rayleigh quotient itself provides a fairly close estimate of the eigenvalue expansion using only the mean coefficients of the corresponding eigenvector. In our approach, it is also relatively easy to deal with badly separated eigenvalues because one can apply deflation to the mean matrix in the same way as in the deterministic case.
The paper is organized as follows. In Section 2 we introduce the stochastic eigenvalue problem and outline the Monte Carlo, stochastic collocation and stochastic Galerkin methods. In Section 3 we formulate the algorithm of stochastic inverse subspace iteration. In Section 4 we report the results of numerical experiments, and in Section 5 we summarize and conclude our work.
2. Stochastic eigenvalue problem. Let (Ω, F, P) be a complete probability space, that is, Ω is the sample space with σ-algebra F and probability measure P, and let D ⊂ R^d be a bounded physical domain. We assume that the randomness in the mathematical model is induced by a vector ξ : Ω → Γ ⊂ R^{m_ξ} of independent, identically distributed (i.i.d.) random variables ξ_1(ω), . . . , ξ_{m_ξ}(ω). Let B(Γ) denote the Borel σ-algebra on Γ induced by ξ and µ the induced measure. Then, the expected value of the product of measurable functions on Γ determines a Hilbert space L^2(Γ, B(Γ), µ) with inner product

⟨u, v⟩ = E[u v] = ∫_Γ u(ξ) v(ξ) dµ(ξ),   (2.1)

where the symbol E denotes the mathematical expectation. In computations, we will work with a set {ψ_ℓ} of orthonormal polynomials such that ⟨ψ_j ψ_k⟩ = δ_jk, where δ_jk is the Kronecker delta and ψ_0 is constant. This set, the gPC basis, spans a finite-dimensional subspace of L^2(Γ, B(Γ), µ). We will also suppose we are given a symmetric positive-definite matrix-valued random variable A(x, ξ) represented as

A(x, ξ) = Σ_{ℓ=0}^{M_A} A_ℓ(x) ψ_ℓ(ξ),   (2.2)

where each A_ℓ is a deterministic matrix of size M_x × M_x, with M_x determined by the discretization of the physical domain, and A_0 is the matrix corresponding to the mean value of the matrix A(x, ξ), that is, A_0 = E[A(x, ξ)]. The representation (2.2) is typically obtained from an expansion of a random process; examples are given in Section 4 on numerical experiments. We are interested in a solution of the following stochastic eigenvalue problem: find a set of random eigenvalues λ^s and corresponding eigenvectors u^s, s = 1, . . . , n_s, which almost surely (a.s.) satisfy the equation

A(x, ξ) u^s(x, ξ) = λ^s(ξ) u^s(x, ξ).   (2.4)

We will search for expansions of a set of n_s eigenvalues and eigenvectors in the form

λ^s(ξ) = Σ_k λ^s_k ψ_k(ξ),   u^s(x, ξ) = Σ_k u^s_k(x) ψ_k(ξ),   (2.5)

where λ^s_k and u^s_k are coefficients defined by projection on the basis {ψ_k},

λ^s_k = ⟨λ^s ψ_k⟩,   u^s_k = ⟨u^s ψ_k⟩.   (2.6)

We will consider several ways to approximate these quantities.
Monte Carlo and stochastic collocation methods.
Both the Monte Carlo and the stochastic collocation methods are based on sampling. This entails the solution of independent deterministic eigenvalue problems at a set of sample points ξ^(q), that is, of problems of the form A(x, ξ^(q)) u^s(x, ξ^(q)) = λ^s(ξ^(q)) u^s(x, ξ^(q)). In the Monte Carlo method, the sample points ξ^(q), q = 1, . . . , N_MC, are generated randomly, following the distribution of the random variables ξ_i, i = 1, . . . , m_ξ, and moments of the solution are obtained from ensemble averaging. For stochastic collocation, the sample points ξ^(q), q = 1, . . . , N_q, consist of a predetermined set of collocation points. This approach derives from a methodology for performing quadrature or interpolation in multidimensional space using a small number of points, a so-called sparse grid [2,20,27]. There are two ways to implement stochastic collocation [34]. We can either construct a Lagrange interpolating polynomial that interpolates at the collocation points, or we can use a discrete projection, the so-called pseudospectral approach, to obtain coefficients of expansion in an a priori selected basis of stochastic functions. In this study, we use the second approach because it facilitates a direct comparison with the stochastic Galerkin method. In particular, the coefficients in the expansions (2.5) are determined by evaluating (2.6) in the sense of (2.1) using numerical quadrature as

λ^s_k ≈ Σ_q λ^s(ξ^(q)) ψ_k(ξ^(q)) w^(q),   u^s_k ≈ Σ_q u^s(x, ξ^(q)) ψ_k(ξ^(q)) w^(q),   (2.8)

where ξ^(q) are the collocation (quadrature) points, and w^(q) are quadrature weights. We refer to [14] for a discussion of quadrature rules. Details of the rule we use in experiments are discussed in Section 4 (prior to Section 4.1).
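A minimal NumPy sketch of this pseudospectral (discrete projection) step is given below. It assumes the sampled eigenvalues, the orthonormal basis evaluated at the quadrature points, and the quadrature weights are already available; all names and numbers are illustrative only.

```python
import numpy as np

def pseudospectral_coefficients(lambda_samples, psi_at_points, weights):
    """Discrete projection of sampled eigenvalues onto an orthonormal gPC basis.

    lambda_samples: (N_q,) eigenvalue evaluated at each quadrature point
    psi_at_points:  (N_q, M+1) basis values psi_k(xi^(q)) at those points
    weights:        (N_q,) quadrature weights
    Returns the gPC coefficients lambda_k, k = 0..M.
    """
    return psi_at_points.T @ (weights * lambda_samples)

# Toy usage with made-up numbers: 3 quadrature points, basis {1, xi}
lam = np.array([1.9, 2.0, 2.1])
psi = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0]])
w = np.array([0.25, 0.5, 0.25])
print(pseudospectral_coefficients(lam, psi, w))   # approximately [2.0, 0.05]
```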
Stochastic Galerkin method. The stochastic Galerkin method is based on the projection

⟨(A(x, ξ) u^s(x, ξ) − λ^s(ξ) u^s(x, ξ)) ψ_k(ξ)⟩ = 0   for each basis function ψ_k.   (2.9)
Substituting the expansions (2.2) and (2.5) into (2.9) yields a nonlinear algebraic system (2.10) to solve for the coefficients λ^s_i and u^s_j. The Galerkin solution is then given by (2.5). We will also consider a shifted variant of this method with a deterministic shift ρ, introduced in [29], which can be used to find a single interior eigenvalue. Thus we drop the superscript s, and the shifted counterpart of (2.9) is the Galerkin projection of (A(x, ξ) − ρI) u(x, ξ) = (λ(ξ) − ρ) u(x, ξ). Writing this equation out using the gPC expansions (2.11) leads to a modified system (2.12) in place of (2.10), in which the shifted quantities reduce to those that would be obtained with ρ = 0 when the shift is removed. As will be seen in numerical experiments, deflation of the mean matrix A_0 is more robust than using a shift for identification of interior eigenvalues. For inverse iteration, deflation can be done via a low-rank modification of the mean matrix built from the deflated mean eigenvectors (2.15), where λ^d_0, u^d_0, d = 1, . . . , n_d, are the zeroth-order coefficients of the eigenpairs to be deflated, and C_λ is a constant such that C_λ ≥ λ^d_0 for d = 1, . . . , n_d, for example C_λ = max_s(λ^s). Note that there are other types of deflation, where, for example, the computation proceeds with a smaller transformed submatrix from which the deflated eigenvalue is explicitly removed. Since this is complicated for matrix operators in the form of the expansion (2.2), we do not consider this approach here.
3. Stochastic inverse subspace iteration. The stochastic inverse subspace iteration algorithm is based on a stochastic Galerkin projection. In order to motivate the stochastic algorithm, we first recall deterministic inverse subspace iteration, which allows one to find the several smallest eigenvalues and corresponding eigenvectors of a given matrix. Then, we give a formal statement of the stochastic variant and relate it to stochastic inverse iteration [29]. Finally, we describe the components of the algorithm in detail and relate it to stochastic subspace iteration [18]. Our strategy is motivated by deterministic inverse subspace iteration, in which small eigenvalues are found as the large eigenvalues of the inverse problem.
Algorithm 3.1 (Deterministic inverse subspace iteration (DISI)). Let u^1, . . . , u^{n_s} be a set of n_s orthonormal vectors, and let A be a symmetric positive-definite matrix.
for it = 0, 1, 2, . . . (a sketch of one standard form of this deterministic iteration is given below). Stochastic inverse iteration [29, Algorithm 2] corresponds to the case where a stochastic expansion of a single eigenvalue (in [29] with M_λ = M_ξ) is sought; in this case, we can select a shift ρ using the solution of the mean problem (3.2) and modify the mean matrix using (2.13).
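Since the loop body of Algorithm 3.1 is not fully legible here, the following NumPy sketch shows one standard form of deterministic inverse subspace iteration consistent with the surrounding description (solve with A, re-orthonormalize, estimate eigenvalues with Rayleigh quotients). It is an illustrative reconstruction, not the authors' exact listing.

```python
import numpy as np

def inverse_subspace_iteration(A, n_s, n_iter=50):
    """Approximate the n_s smallest eigenpairs of a symmetric positive-definite A.

    Each step solves A V = U for V, re-orthonormalizes V, and finally estimates
    the eigenvalues from Rayleigh quotients, as outlined for Algorithm 3.1 above.
    """
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((A.shape[0], n_s)))
    for _ in range(n_iter):
        V = np.linalg.solve(A, U)      # inverse iteration step
        U, _ = np.linalg.qr(V)         # orthonormalize the subspace basis
    lam = np.array([u @ A @ u for u in U.T])   # Rayleigh quotients
    return lam, U

# Toy usage on a small SPD matrix
A = np.diag([1.0, 2.0, 5.0, 10.0])
lam, U = inverse_subspace_iteration(A, n_s=2)
print(np.sort(lam))   # approximately [1., 2.]
```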
Step 1 of Algorithm 3.2 then consists of two parts: 1(a) use the stochastic Rayleigh quotient (3.9) to compute the coefficients λ_i^(it), i = 0, . . . , M_λ, of the eigenvalue expansion (2.5), and set up the right-hand side components as in (3.6). In the deterministic version of inverse iteration, the shift (λ − ρ) applied to the vector on the right-hand side is dropped, and each step entails the solution of the system (3.7). Moreover, in the deterministic version of Rayleigh quotient iteration, the eigenvalue estimate λ^(it) is used instead of ρ in (3.7). Here we retain the shift in the iteration due to the presence of the stochastic Galerkin projection, see (2.11). In particular, the shift on the left-hand side is fixed to ρ and the estimate of the eigenvalue expansion is used in the setup of the right-hand side in (3.6). Thus, stochastic inverse iteration is not an exact counterpart of either deterministic inverse iteration or Rayleigh quotient iteration.
We now describe components of Algorithm 3.2 in detail.
Matrix-vector multiplication. Computation of the stochastic Rayleigh quotient requires a stochastic version of a matrix-vector product, which corresponds to evaluation of the projection In more detail, this is The use of this computation for the Rayleigh quotient is described below. Algorithm 3.2 can also be modified to perform subspace iteration [18,Algorithm 4] for identifying the largest eigenvalue of A. In this case, the solve in Step 1 of Algorithm 3.2 is replaced by a matrix-vector product (3.8) Stochastic Rayleigh Quotient. In the deterministic case, the Rayleigh quotient is used to compute the eigenvalue corresponding to a normalized eigenvector u as λ = u T v, where v = Au. For the stochastic Galerkin method, the Rayleigh quotient defines the coefficients of a stochastic expansion of the eigenvalue defined via a projection In our implementation we used M λ = M ξ . The coefficients of v are computed simply using the matrix-vector product (3.8). In more detail, this is so the coefficients λ k are obtained as where the notation ·, · R refers to the inner product of two vectors on Euclidean M x -dimensional space. It is interesting to note that (3.9) is a Hadamard product, see, e.g., [11,Chapter 5]. Remark 3.4. We used M λ = M ξ in (2.10), which is determined by the definitions of eigenvalues and eigenvectors in (2.4), and we used the same convention to compute the Rayleigh quotient (3.9). It would be possible to compute λ k for k = M ξ +1, . . . , M A as well, since the inner product u T v of two eigenvectors which are expanded using chaos polynomials up to degree p has nonzero chaos coefficients up to degree 2p. Because M ξ < M A , this means that some terms are missing in the sum used to construct the right-hand side of (3.9). An alternative to using this truncated sum is to use a full representation of the Rayleigh quotient using the projection In more detail, this uses M λ = M A and is given by where k = 0, . . . , M λ . So the coefficients λ k are obtained as We implemented and tested in numerical experiments both computations (3.9) and (3.10) and found the results to be virtually identical. Note that (3.10) is significantly more costly than (3.9), so it appears that there is no advantage to using (3.10). The construction (3.9) appears to be new, but the truncated representation of λ with M λ = M ξ was also used in [18,29].
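To make the two projections just described concrete, here is a small NumPy sketch. It assumes an orthonormal gPC basis and a precomputed triple-product tensor c[i, j, k] = E[ψ_i ψ_j ψ_k], and, for simplicity, that all expansions are truncated to the same length; it follows the generic Galerkin formulas for the product of two expansions rather than reproducing the paper's exact equations.

```python
import numpy as np

def stochastic_matvec(A_blocks, u_coeffs, c):
    """Galerkin coefficients of v = A u for gPC expansions.

    A_blocks: (P, n, n) matrices A_l;  u_coeffs: (P, n) coefficient vectors u_i;
    c: (P, P, P) triple products c[l, i, k] = E[psi_l psi_i psi_k] (orthonormal basis).
    """
    v = np.zeros_like(u_coeffs)
    for l in range(len(A_blocks)):
        for i in range(len(u_coeffs)):
            Au = A_blocks[l] @ u_coeffs[i]
            v += np.outer(c[l, i, :], Au)   # c[l,i,k] * A_l u_i added to each v_k
    return v

def rayleigh_quotient_coeffs(u_coeffs, v_coeffs, c):
    """Galerkin coefficients of lambda = u^T v from the coefficients of u and v."""
    inner = u_coeffs @ v_coeffs.T                # inner[i, j] = <u_i, v_j>_R
    return np.einsum("ijk,ij->k", c, inner)      # lambda_k = sum_ij c_ijk <u_i, v_j>
```

In practice the tensor c would be precomputed once from the chosen polynomial family; here it is simply assumed to be available.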
Normalization and the Gram-Schmidt process. Let · 2 denote the norm induced by the inner product ·, · R . That is, for a vector u evaluated at a point ξ, We adopt the strategy used in [18], whereby at each step of the stochastic iteration, the coefficients of the gPC expansions of a given set of vectors {v s } ns s=1 are transformed into an orthonormal set {u s } ns s=1 such that The condition (3.13) is quite strict. However, because we assume the eigenvectors have the form of stochastic polynomials that can be easily sampled, the coefficients of the orthonormal eigenvectors can be calculated relatively inexpensively using a discrete projection and a quadrature rule as in (2.8). Note that each step of the stochastic iteration entails construction of the eigenvector approximations at the set of collocation points and, in contrast to the stochastic collocation method, no deterministic eigenvalue problems are solved. We also note that an alternative approach to normalization, based on solution of a certain nonlinear system was recently proposed by Hakula et al. [9]. First, let us consider normalization of a vector, so s = 1. The coefficients of a normalized vector u 1 k , for k = 0, . . . , M ξ , are computed from the coefficients v 1 k as (3.14) Then for general s, the orthonormalization (3.13) is achieved by a stochastic version of the modified Gram-Schmidt algorithm proposed by Meidani and Ghanem [18]. It is based on the standard deterministic formula, see, e.g. [28, Algorithm 8.1], For brevity, let us write χ ts = v s , u t R / u t , u t R u t , so the expression above becomes The stochastic counterpart of (3.15) is obtained by the stochastic Galerkin projection Then the coefficients u s k are where χ ts k are computed using a discrete projection and a quadrature rule as in (2.8), Error assessment. Ideally, we would like to minimize is the true residual. However, we are limited by the gPC framework. In particular, the algorithm only provides the coefficients of expansion of the residual, i.e., the vector corresponding to the difference of the left and righthand sides of (2.10). One could assess accuracy using Monte Carlo sampling of this residual by computing possibly at each step of the stochastic iteration. A much less expensive computation is to use the expansion coefficients directly as an error indicator. In particular, we can monitor the norms of the terms of r s k corresponding to expected value and variance of r s , We can also monitor the difference of the coefficients in two consecutive iterations 4. Numerical experiments. In this section, we report on computations of estimates of the probability density functions (pdf) of certain distributions. The plots presented below that illustrate these were obtained using the Matlab function ksdensity, which computes a distribution estimate from samples. These samples were computed either directly by the Monte Carlo method or by sampling the gPC expansions (2.5) obtained from stochastic inverse subspace iteration or stochastic collocation. In particular, we report pdf estimates of eigenvalue distributions, and of the 2 -norm of the eigenvector approximation errors where u s ξ (i) are samples of eigenvectors obtained from either stochastic inverse (subspace) iteration or stochastic collocation. We also report the pdf estimates of the We have implemented the methods in Matlab and applied it to vibration analysis of undamped structures, using the code from [1]. 
For these models, the associated mean problem gives rise to symmetric positive-definite matrices. For the parametrized uncertain term in the problem definition, we take Young's modulus, which is a proportionality constant relating strains and stresses in Hooke's law, as to be a truncated lognormal process transformed from an underlying Gaussian random process using a procedure described in [3]. That is, ψ (ξ), = 0, . . . , M A , is a set of N ξ -dimensional products of univariate Hermite polynomials and, denoting the coefficients of the Karhunen-Loève expansion of the Gaussian process by g j (x) and η j = ξ j − g j , j = 1, . . . , m ξ , the coefficients in expansion (4.3) are computed as The covariance function of the Gaussian field was chosen to be where L corr is the correlation length of the random variables ξ i , i = 1, . . . , m ξ , and σ g is the standard deviation of the Gaussian random field. Other parameters in the models were deterministic (see below). Note that, according to [15], in order to guarantee a complete representation of the lognormal process by (4.3), the degree of polynomial expansion of E (x, ξ) should be twice the degree of the expansion of the solution. We follow the same strategy here. Denoting by p the degree of polynomial expansions of u (x, ξ) and λ (x, ξ), the total numbers of the gPC polynomials are see, e.g., [5, p. 87] and [34, Section 5.2], Finite element spatial discretization leads to a generalized eigenvalue problem of the form where is the stochastic stiffness matrix given by the gPC expansion, and M is the deterministic mass matrix. Although we can transform (4.5) into a standard eigenvalue problem M −1 K(ξ) u = λu, we found that the stochastic Rayleigh quotient is sensitive to the nonsymmetry of this matrix operator. We note that this is well known in the deterministic case and instead, two-sided Rayleigh quotients are often used [10]. Here, we used for simplicity the Cholesky factorization M = LL T and transformed (4.5) into where u = L −T w. So, the expansion of A corresponding to (2.2) is We used the Matlab function eig to solve the deterministic eigenvalue problems: the mean value problem in Algorithm 3.2 and at all sample points ξ (q) . We compared the results for the stochastic Galerkin methods with ones obtained using Monte Carlo simulation and stochastic collocation. The stochastic Galerkin methods include stochastic inverse subspace iteration from Algorithm 3.2, and direct use of stochastic Rayleigh quotient (3.9). The latter entails solving the deterministic mean problem (3.2) by eig and using (3.3)-(3.4) for u in (3.9), i.e., the coefficients from u are used for the zero-order terms of the polynomial chaos basis and the coefficients of higher-order terms are set to zero. The coefficients of v were obtained from the matrixvector product (3.8). This construction of eigenvalues will be denoted by RQ (0) to indicate that no stochastic iteration was performed. The stochastic dimension was m ξ = 3, degree of the gPC expansion of the solution p = 3, and degree of the gPC expansion of the lognormal process 2p. Unless stated otherwise, we used 5 × pare it with stochastic collocation (SC). We ran stochastic inverse iteration with a fixed number of iterations, so plots of convergence indicators (3.16)-(4.2) shown below just illustrate the performance of the algorithms. 
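The Cholesky-based reduction of the generalized problem just described can be sketched in a few lines of NumPy/SciPy. This is an illustrative reconstruction of the standard transformation, using made-up 2x2 matrices rather than the paper's finite element matrices K and M.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular, eigh

def transform_generalized(K, M):
    """Turn K u = lambda M u into a symmetric standard problem A w = lambda w.

    Uses the Cholesky factor M = L L^T, with A = L^{-1} K L^{-T} and u = L^{-T} w.
    """
    L = cholesky(M, lower=True)
    Linv_K = solve_triangular(L, K, lower=True)          # L^{-1} K
    A = solve_triangular(L, Linv_K.T, lower=True).T      # L^{-1} K L^{-T}
    return A, L

# Toy usage: the eigenvalues agree with those of the generalized problem
K = np.array([[4.0, 1.0], [1.0, 3.0]])
M = np.array([[2.0, 0.0], [0.0, 1.0]])
A, L = transform_generalized(K, M)
print(np.linalg.eigvalsh(A))            # standard problem
print(eigh(K, M, eigvals_only=True))    # generalized problem, same values
```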
We computed estimates of pdfs for the distributions of the eigenvalues and of the 2 -norm of the relative eigenvector error (4.1) corresponding to the minimal eigenvalue of the Timoshenko beam, with CoV = 10% and 25%. Figure 4.2 shows the estimated eigenvalue distributions obtained using the "zero-step" computation (RQ (0) ), which uses only the mean solution (3.3)-(3.4). The figure compares these distributions with those obtained using Monte Carlo and stochastic collocation, and it is evident that the visible displays of the three distributions are virtually indistinguishable. (Analogous plots, not shown, obtained after one complete stochastic iteration produced essentially identical plots.) As expected, the pdf estimates are narrower for CoV = 10%. This computation is explored further in Tables 4.1 and 4.2, which show the first ten coefficients of the gPC expansion of the smallest eigenvalue obtained using RQ (0) , one step and 20 steps of stochastic inverse iteration, and stochastic collocation. It can be seen that RQ (0) provides good estimates of the four coefficients corresponding to the mean (d = 0) and linear terms (d = 1) of the expansion (2.5), and a single SII step significantly improves the quality of the quadratic terms (d = 2). 1 Analogous computations for eigenvector errors and eigenproblem residuals are summarized in Figure 4.3, the support of the pdf for RQ (0) (obtained from the mean solution) is essentially the interval [0, 0.02], which shows that the eigenvector error ε u from RQ (0) is of order at most 2%. The analogous result for CoV = 25% is 6% (upper left of Figure 4.4), so that RQ (0) is less accurate for the larger value of CoV . Nevertheless, it 1 To test robustness of the algorithms with respect to possible use of an inexact solver of the deterministic mean value problem, we also examined perturbed initial approximations u s,(0) 0 = u s + δu s for the stochastic iteration (3.3), where u s is an eigenvector of the mean problem computed by eig and δu s is a random perturbation with norm 10 −6 . We found this to have no impact on performance in the sense that the columns for SII (1) and SII (20) in Tables 4.1-4.2 are unchanged. The first ten coefficients of the gPC expansion of the smallest eigenvalue of the Timoshenko beam with CoV = 10% using 0, 1 or 20 steps of stochastic inverse iteration, or using stochastic collocation. Here d is the polynomial degree and k is the index of basis function in expansion (2.5 Table 4.2 The first ten coefficients of the gPC expansion of the smallest eigenvalue of the Timoshenko beam with CoV = 25% using 0, 1 or 20 steps of stochastic inverse iteration, or using stochastic collocation. Here d is the polynomial degree and k is the index of basis function in expansion (2.5 can be seen from Figure 4.4 that even with CoV = 25%, the eigenvector approximation error ε u is less than 0.15% after one step of inverse iteration and after the second step ε u is less than 0.01% and the error essentially coincides with the eigenvector error from stochastic collocation. In other words, the convergence of SII is also indicated by the "leftward" movement of the pdfs corresponding to ε u . The pdf estimates of the residuals are very small after one inverse iteration. We also found that when the residual indicators (3.16) stop decreasing and the differences (3.17) become small, the sample true residuals (4.2) also become small. Figure 4.5 shows the behavior of the indicators (3.16)-(3.17). 
Next, we consider the computation of multiple extreme eigenvalues. For the stochastic Galerkin method, this entails construction of the coefficients of n s > 1 eigenvalue fields in (3.9). The stochastic collocation method computes n s extreme eigenvalues for each sample point and then uses these to construct the random fields associated with each of them. Monte Carlo proceeds in an analogous way.
The performance of the methods for computing the five smallest eigenvalues of the Timoshenko beam with CoV = 25% is shown in Figure 4.6. The stochastic Galerkin method was able to identify the three smallest eigenvalues λ_1, λ_2, λ_3, but it failed to identify eigenvalues λ_4, λ_5. (Results were similar for larger values of polynomial degree, p = 4 and 5.) Stochastic collocation and Monte Carlo were able to find all five eigenvalues. Note that the error indicators ε_0 and ε_σ² from (3.16), shown in the bottom of the figure, become flat for the converged eigenvalues but not for those that are not found. Performance results for the five largest eigenvalues are shown in Figure 4.7. The Galerkin method was robust in this case: for each of the five eigenvalues, the pdf estimates obtained by all three computational methods overlap, and the 2-norm of the relative eigenvector error (4.1) corresponding to the fifth maximal eigenvalue is small. The error indicator ε_σ² from (3.16) behaves somewhat inconsistently in this case: after an initial decrease, it increases slightly after approximately 85 iterations.
We explored several approaches to enhance the robustness of stochastic subspace iteration for identifying interior eigenvalues. One possibility is to use a shift. We tested inverse iteration with a shift to find the fifth smallest eigenvalue of the Timoshenko beam with CoV = 25%. The corresponding eigenvalue of the mean problem is λ 5 = 3.7548 × 10 5 . The top four panels in Figure 4.8 show plots of the pdf estimates of the eigenvalue distribution, the 2 -norm of the relative eigenvector error (4.1), the true residual (4.2), and the convergence history of the indicator ε 0 from (3.16) with the shift ρ = 4.1 × 10 5 . It can be seen that for the estimates of the pdfs of the eigenvalue, the relative eigenvector errors, and the true residual of the stochastic inverse iteration, the methods are in agreement. However, we also found that convergence depends on the choice of the shift ρ. Setting the shift far from the eigenvalue of interest or too close to it worsens the convergence rate and the method might even fail to converge. For this eigenvalue, the best convergence occurs with the shift set close to either ρ = 3.5 × 10 5 or ρ = 4.1 × 10 5 , but with shift set to ρ = 3.9 × 10 5 or ρ = 4.3 × 10 5 the method fails to converge. Similar behavior was also reported in [29]. We note that the mean of the sixth smallest eigenvalue is λ 6 = 8.9196 × 10 5 , that is, the means of An approach that we found to be more robust was to use deflation of the mean matrix. Suppose we are interested in some interior eigenvalues in the lower side of the spectrum, for example λ 4 and λ 5 , which we were unable to identify in a previous attempt (Figure 4.6). To address this, as suggested in (2.15) we can deflate the mean matrix A 0 using the mean eigenvectors corresponding to λ 1 , λ 2 and λ 3 . Figure 4.9 shows that in this case, Algorithm 3.2 was able to identify the fourth and fifth smallest eigenvalues, and the relative eigenvector errors (4.1) almost coincide. We note that the results in Figures 4.9 (and also in Figure 4.10) were obtained using the deflated mean matrix also in stochastic collocation and Monte Carlo methods.
One significant advantage of stochastic inverse subspace iteration over Monte Carlo and stochastic collocation is that it allows termination of the iteration at any step, and thus the coefficients of the expansions (2.5) can be found only approximately. Figure 4.10 shows the 2 -norms of the relative eigenvector error (4.1) and the pdf estimates of the true residual (4.2) corresponding to the fifth smallest eigenvalue of the Timoshenko beam with CoV = 25%, obtained using inverse iteration with deflation of the four smallest eigenvalues in iteration 0, 5, and 10. For example, the initial mean of the relative eigenvector error ε u from (4.1) is centered around 10%, after 5 iterations it is reduced to less than 0.5%, and after 10 iterations the results of stochastic inverse iteration and stochastic collocation essentially agree, and the difference from Monte Carlo represented by ε u is less than 0.05%.
Example 2:
Mindlin plate. For the second example, we analyzed vibrations of a square, fully simply supported Mindlin plate. For this problem we used 3 × 10^4 Monte Carlo samples. The physical parameters were set according to [1, Section 12.5] as follows: the mean Young's modulus of the lognormal random field was E_0 = 10,920, Poisson's ratio ν = 0.30, length of a side L_plate = 1, thickness 0.1, κ = 5/6, and density ρ = 1. The plate was discretized using 10 × 10 bilinear (Q4) finite elements with 243 physical degrees of freedom. The condition number of the mean matrix A_0 from (4.7) is 1.6436 × 10^3, the norm ||A_0||_2 = 1.8153 × 10^7, and the eigenvalues of A_0 are displayed in Figure 4.11. The coefficient of variation of the Young's modulus was set to CoV = 25%, and the spatial correlation length to L_corr = L_plate/4. This is a two-dimensional problem, which means that there are repeated eigenvalues: for example, the four smallest eigenvalues of the mean problem are λ_1 = 1.1044 × 10^4, λ_2 = λ_3 = 4.2720 × 10^4, and λ_4 = 8.3014 × 10^4.
As before, we first examined the performance of stochastic inverse iteration and stochastic collocation in identifying the smallest eigenvalue. The results are in Figure 4.12, and Table 4.3 presents a comparison of the first 10 coefficients of the gPC expansion of the smallest eigenvalue obtained using RQ(0), one and five steps of stochastic inverse iteration, and stochastic collocation. Monte Carlo simulation gave sample mean 1.0952 × 10^4 and standard deviation 1.2224 × 10^3, i.e., CoV ≈ 11%. As before, RQ(0) alone provides a close estimate of the eigenvalue expansion (2.5), and the results of stochastic inverse iteration and stochastic collocation essentially agree. Next, we used stochastic inverse subspace iteration to identify the four smallest eigenvalues. The results are in Figure 4.13. It can be seen that the distributions of all four eigenvalues match and, in particular, the distributions of the repeated eigenvalues λ_2 and λ_3 overlap. However, it also appears that stochastic collocation exhibits some difficulties detecting the subspace corresponding to λ_2, whereas the distribution of ε_r corresponding to λ_4 suggests that in this case the stochastic inverse subspace iteration and stochastic collocation methods are in excellent agreement.
Conclusion.
We studied random eigenvalue problems in the context of spectral stochastic finite element methods. We formulated the algorithm of stochastic inverse subspace iteration and compared its performance in terms of accuracy with stochastic collocation and Monte Carlo simulation. While overall the experiments indicate that in terms of accuracy all three methods are quite comparable, we also highlighted some differences in their methodology. In the stochastic inverse subspace iteration we formulate and solve a global stochastic Galerkin problem in order to find the coefficients of the gPC expansion of the eigenvectors. The coefficients of the eigenvalue expansion are computed from a stochastic version of the Rayleigh quotient. In fact, we found that the coefficients of the eigenvector expansion corresponding to the underlying mean-value problem, with the coefficients of the higher-order terms set to zero, provide a good estimate of the probability distribution of the corresponding eigenvalue. From our experiments it also appears that stochastic inverse subspace iteration is not robust when the nature of the eigenvalues is very different, for example, due to a badly conditioned problem. Moreover, the performance of inverse iteration for interior eigenvalues seems to be sensitive to the choice of the shift. Nevertheless, we were able to successfully resolve both issues by deflation of the eigenvalues of the mean matrix. The algorithm also performs well in cases when the spectrum is clustered and even for repeated eigenvalues. However, a unique description of stochastic subspaces corresponding to repeated eigenvalues, which would allow a comparison of different bases, is more delicate [8] and is not addressed here. We briefly comment on computational cost. Stochastic inverse subspace iteration is a computationally intensive algorithm because it requires repeated solves with the global stochastic Galerkin matrix. However, our main focus here was on the methodology, and we view a cost analysis as beyond the scope of this project. In our experiments, we used direct solves for the global stochastic Galerkin system, and for the deterministic eigenvalue problems required by the sampling (collocation and Monte Carlo) methods, we used the Matlab function eig. Many other strategies can be brought to this discussion, for example preconditioned Krylov subspace methods, e.g., [25,26], to approximately solve the Galerkin systems, and state-of-the-art iterative eigenvalue solvers for the sampling methods. Moreover, the solution of Galerkin systems is also a topic of ongoing study [16].
Finally, we note that an appealing feature of the Galerkin approach is that it allows solution of the random eigenvalue problem only approximately, performing zero (in case of the stochastic Rayleigh quotient) or only a few steps of the stochastic iteration, unlike the Monte Carlo and the stochastic collocation methods which are based on sampling.
RRM but not the Asp/Glu domain of hnRNP C1/C2 is required for splicing regulation of Ron exon 11 pre-mRNA
The Ron proto-oncogene is a human receptor for macrophage-stimulating protein (MSP). The exclusion of exon 11 in alternative splicing generates the ΔRON protein, which is constitutively activated. Heterogeneous ribonucleoprotein (hnRNP) C1/C2 is one of the most abundant proteins in cells. In this manuscript, we showed that both hnRNP C1 and C2 promoted exon 11 inclusion of Ron pre-mRNA and that hnRNP C1 and hnRNP C2 functioned independently but not cooperatively. Moreover, hnRNP C1 stimulated exon 11 splicing through intron 10 activation but not through intron 11 splicing. Furthermore, we showed that, whereas the RRM domain was required for hnRNP C1 function, the Asp/Glu domain was not. In conclusion, hnRNP C1/C2 promoted exon 11 splicing independently by stimulating intron 10 splicing through the RRM but not through the Asp/Glu domain.
INTRODUCTION
Pre-mRNA splicing occurs in a large RNA-protein complex called a spliceosome (1). Spliceosome assembly is a stepwise process in which U1 snRNP basepairs with a 5' splice-site, then U2 snRNP basepairs with a branch-point. Then, the U4/U5/U6 snRNP is loaded into the complex. Through alternative exon inclusion/exclusion, proteins with slightly different, completely different, and opposite functions are produced (2)(3)(4).
Ron is a cell surface-located receptor composed of disulfide-linked α and β subunits, which are produced from a 190 kDa single-chain precursor by proteolytic cleavage (5). ΔRON is an uncleaved protein isoform lacking 49 amino acids in the extracellular domain, which is produced by the exclusion of exon 11 in alternative splicing (6). Because ΔRON is unable to bind the ligand, it is instead constitutively activated by tyrosine phosphorylation through intracellular oligomerization (6,7). SRSF1, SRSF2, and hnRNP A1 have been demonstrated to regulate exon 11 splicing (8)(9)(10).
Heterogeneous nuclear ribonucleoprotein (hnRNP) C1/C2 is one of the most abundant proteins in cells (11). hnRNP C1/C2 includes an N-terminal RNA recognition motif (RRM), a basic leucine zipper (bZIP)-like motif (bZLM), and an acidic aspartic acid/glutamic acid-rich (Asp/Glu) domain (UniProtKB P07910) (12). While the RRM plays a minimal role in the overall affinity of hnRNP C1/C2 for RNA, a highly basic 40 amino acid (aa) domain preceding the leucine zipper motif provides high-affinity RNA binding (13,14). The acidic Asp/Glu domain is necessary for homo- or heterotetramer formation of hnRNP C1 and C2 (13). hnRNP C1/C2 inhibits splicing of the Alu element in the human genome by interfering with U2AF65 binding to the cryptic polypyrimidine tract (PPT) sequence (15).
Here, we show that hnRNP C1/C2 promoted exon 11 inclusion of Ron pre-mRNA through stimulation of intron 10, but not intron 11, splicing. In addition, we found that hnRNP C1 and hnRNP C2 functioned independently, not cooperatively. Importantly, although the RRM domain was required for hnRNP C1/C2 function, the Asp/Glu domain of hnRNP C1 was not necessary for promoting Ron exon 11 splicing.
Reduced hnRNP C1/C2 expression inhibits exon 11 splicing of Ron pre-mRNA
In order to understand the role of hnRNP C1/C2 in the splicing of Ron pre-mRNA, we investigated whether reduced expression of hnRNP C1/C2 affected Ron alternative splicing. Fig. 1A shows that hnRNP C1/C2-targeted shRNA treatment reduced the expression of both hnRNP C1 and hnRNP C2.
hnRNP C1 and C2 promote exon 11 inclusion of Ron pre-mRNA independently
Next, we investigated whether increased expression of hnRNP C1 or hnRNP C2 had the opposite effect on exon 11 splicing to that of reduced hnRNP C1/C2 expression. To address this question, an hnRNP C1 or C2 expression plasmid and the Ron exon 10-12 mini-gene were co-transfected into MDA-MB-231 cells. Fig. 2A shows that hnRNP C1 and C2 each promoted an increase in exon 11 inclusion (∼22% and ∼17%, respectively) to similar levels, which was the opposite of the effect of hnRNP C1/C2 knockdown (lanes 3 and 4). In addition, the results also indicate that the 13 amino acids present only in hnRNP C2 did not play a significant role in this activity.
In addition to forming homo-tetramers by themselves, hnRNP C1 and C2 have also been shown to form a hetero-tetramer with a 3:1 ratio (16). We therefore asked if hnRNP C1 and C2 could promote Ron exon 11 splicing cooperatively. To answer this question, we introduced both hnRNP C1 and C2 expression plasmids together with the Ron mini-gene into cells. The results in Fig. 2A show that the co-expression of hnRNP C1 and C2 had effects similar to the individual expression of either the hnRNP C1 or the C2 plasmid (lane 5). Thus, we conclude that the two proteins did not act synergistically on exon 11 inclusion. By combining the results in Fig. 1 and 2A, we conclude that hnRNP C1 and C2 functioned similarly to promote exon 11 inclusion of Ron pre-mRNA and that the 13 amino acids in hnRNP C2 were not required for this function. To simplify our studies, we decided to examine only the roles of hnRNP C1 in this report.
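For orientation, inclusion percentages such as those quoted above are typically derived from the relative intensities of the exon 11-included and exon 11-skipped RT-PCR bands. The manuscript does not describe its quantification procedure; the following is a minimal illustrative calculation with made-up densitometry values.

```python
def percent_inclusion(included_band: float, skipped_band: float) -> float:
    """Exon inclusion level (percent) from densitometry of the two RT-PCR bands."""
    return 100.0 * included_band / (included_band + skipped_band)

# Hypothetical band intensities (arbitrary densitometry units, not study data)
control = percent_inclusion(included_band=550.0, skipped_band=450.0)
hnrnp_c1 = percent_inclusion(included_band=770.0, skipped_band=330.0)
print(f"control: {control:.0f}%  hnRNP C1: {hnrnp_c1:.0f}%  change: {hnrnp_c1 - control:+.0f} points")
```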
hnRNP C1 promotes intron 10 but not intron 11 splicing of Ron pre-mRNA
We asked if hnRNP C1 affected splicing of intron 10 or intron 11 in Ron pre-mRNA. To detect intron 10 splicing, we applied a mini-gene in which only exon 10 to exon 11 sequences were included (E10-11, lower panel, Fig. 2B). Using this mini-gene, we performed RT-PCR with primer pairs corresponding to exon 10 and the downstream vector sequence. The results in Fig. 2B demonstrate that hnRNP C1 expression increased the intron 10-spliced isoform significantly (∼15%, lane 3). Thus, hnRNP C1 promotes intron 10 splicing. We next analyzed intron 11 splicing using another mini-gene, which includes exons 11-12 (E11-12). Primer pairs corresponding to exon 11 and the vector sequences were used to detect intron 11 splicing (Fig. 2C, lower panel). The results in Fig. 2C show that intron 11 was almost completely spliced in the E11-12 mini-gene (lane 1). Thus, we expect that a further increase in intron 11-spliced products would be hard to detect. The results in Fig. 2C show that intron 11 splicing was not altered by hnRNP C1 expression. The results in Fig. 2B and 2C indicate that hnRNP C1 affected intron 10 but not intron 11 splicing. Therefore, we conclude that hnRNP C1 promoted exon 11 inclusion through activation of intron 10 splicing.
Conserved splice-site sequences of exon 11 do not affect hnRNP C1 function
We further asked if the splice sites flanking exon 11 would regulate the effect of hnRNP C1 on exon 11 inclusion of Ron pre-mRNA. To test this possibility, we used two previously described mutant mini-gene constructs (10) in which either the 5' splice site or the 3' splice site was mutated into a conserved sequence (5'-cons or 3'-cons, Fig. 3A). Consistent with our previous reports, more conserved splice-site sequences in exon 11 facilitated exon 11 inclusion significantly (10), leading to the predominant production of exon 11-included isoforms by these two mutants (lane 1 of Fig. 3B and 3C). However, the results in Fig. 3B demonstrate that mutation of the 5' splice site to a conserved sequence did not disrupt hnRNP C1 function in Ron exon 11 splicing, because hnRNP C1 still promoted exon 11 inclusion in the 5'-cons mutant. In the 3'-cons mutant, since the exon 11-excluded isoform was not detectable, a decrease in the exon 11-skipped form was not observable. Taken together, we conclude that conserved splice-site sequences of exon 11 did not affect the role of hnRNP C1 in exon 11 splicing.
The Asp/Glu domain is dispensable for hnRNP C1 function in Ron exon 11 splicing
hnRNP C1 includes RRM, bZLM, and Asp/Glu domains. While the RRM domain is required for RNA binding, the Asp/Glu domain is required for the formation of tetramers of hnRNP C1.
To determine if these two domains were necessary for hnRNP C1-mediated Ron exon 11 splicing, we produced two hnRNP C1 mutants in which either the RRM domain or the Asp/Glu domain was deleted (ΔRRM, ΔAsp/Glu) (Fig. 4A). The results in Fig. 4B show that the ΔRRM mutant protein was not able to promote exon 11 inclusion (lane 4). Thus, the RRM domain was required for the role of hnRNP C1 in exon 11 splicing. This was not unexpected, because the RRM domain is required for binding the target RNA in exon 10. In contrast, we found that the ΔAsp/Glu mutant of hnRNP C1 was still capable of promoting exon 11 inclusion (lane 5). Therefore, we conclude that the Asp/Glu domain was not needed for the role of hnRNP C1 in Ron exon 11 splicing. The results indicate that tetramer formation was not required for the role of hnRNP C1 in Ron exon 11 alternative splicing.
DISCUSSION
We previously demonstrated that SRSF2 regulated Ron exon 11 splicing by contacting exon 11 sequences (10). In this study, by using shRNA-mediated knockdown and overexpression, we showed that hnRNP C1/C2 promoted Ron exon 11 splicing by stimulating intron 10 but not intron 11 splicing. Moreover, hnRNP C1 and C2 played roles in Ron exon 11 splicing independently rather than synergistically, as demonstrated by experiments using cells co-expressing both proteins. Furthermore, the acidic Asp/Glu domain required for tetramer formation was not necessary for the role of hnRNP C1 in Ron splicing. Ron pre-mRNA splicing is regulated by multiple proteins, including SRSF1, hnRNP A1, and SRSF2, through various sequences on Ron pre-mRNA (8-10). Whereas SRSF1 and hnRNP A1 target exon 12 RNA and SRSF2 targets exon 11 RNA, here we demonstrated that hnRNP C1 targeted exon 10 sequences. However, in spite of these studies, it is still unclear whether SRSF1, hnRNP A1, SRSF2, and hnRNP C1 function cooperatively in Ron exon 11 splicing, spliceosome assembly, and/or splice-site selection. Moreover, how and why these different proteins target only their respective sequences on Ron RNA, and not other potential binding sequences at other locations, remains to be answered.
Although the role of hnRNP C1/C2 was demonstrated in an in vitro study (17), their functions in alternative splicing have not yet been elucidated in great detail. In addition, a previous study found that hnRNP C1/C2 was not necessary for viability (18). Thus, it is likely that hnRNP C1/C2 plays redundant roles in splicing in cells. In this study, we presented direct evidence that hnRNP C1/C2 regulated alternative splicing, using cells in which hnRNP C1/C2 was either overexpressed or suppressed. Our results, as well as a few other reports, demonstrated that hnRNP C1/C2 was an essential regulatory protein of alternative splicing (19-21). However, previous studies of hnRNP C1/C2 were primarily based on large-scale sequencing and screening; therefore, mechanistic insight from those studies was limited. The results of our study showed that reduced hnRNP C1/C2 expression induced a much more significant change in exon 11 splicing than hnRNP C1/C2 overexpression. The difference can be explained by the fact that hnRNP C1/C2 is one of the most abundant proteins in cells (11); thus, shRNA treatment induced a significant decrease in hnRNP C1/C2 expression. Although we used various approaches, we were not able to show that endogenous Ron exon 11 splicing was affected by hnRNP C1/C2 overexpression. Nonetheless, transient expression of hnRNP C1/C2, along with the Ron mini-gene, demonstrated that it affected exon 11 splicing, but to a much lesser extent than the effect seen with the knockdown. hnRNP C1 and C2 are able to form homo- or heterotetramers (13). However, it seems that tetramer formation was not required for the role of hnRNP C1/C2 in Ron splicing, based on two pieces of evidence. First, hnRNP C1 and C2 did not function cooperatively in alternative splicing of Ron pre-mRNA, but rather independently. Second, the acidic Asp/Glu domain that is essential for tetramer formation was dispensable for Ron exon 11 splicing. In addition, we showed that the RRM domain was required for the function of hnRNP C1, which was not surprising. However, what was striking is that the long Asp/Glu domain was not necessary for Ron splicing, although it was previously shown to be essential for tetramer formation, and the hnRNP C tetramer was shown to be important for mRNA transport (22). Therefore, a role for the Asp/Glu domain in Ron pre-mRNA splicing cannot be established. However, whether the Asp/Glu domain is required for the splicing of other pre-mRNAs is still unknown. It is also possible that the Asp/Glu domain plays regulatory roles in alternative splicing.
Plasmid construction
The coding regions of hnRNP C1 and C2 were inserted into the pcDNA6/myc-His A (Invitrogen) plasmid. The ΔAsp/Glu and ΔRRM mutants of hnRNP C1 were produced by overlapping PCR using the hnRNP C1 expression plasmid as a template.
RT-PCR
Total RNA was extracted using RiboEx (GeneAll) as previously described (23). Reverse transcription was performed using 0.5 µg of RNA with an oligo(dT) primer and ImProm-II reverse transcriptase (Promega). The reaction mixture (0.5 µl) was amplified by PCR using G-Taq polymerase (Cosmo Genetech).
Purification of hnRNP C1 protein
Total protein was extracted from HEK293 cells transfected with the pcDNA6/myc-His A-hnRNP C1 plasmid by 30 min incubation with lysis buffer (50 mM NaH2PO4, 500 mM NaCl, 5 mM imidazole, 0.5% Tween-20, and 1 mM PMSF). Prewashed Ni-NTA agarose beads (QIAGEN) were added to the lysates and the mixture was incubated overnight at 4 °C in the binding buffer (50 mM NaH2PO4, 500 mM NaCl, 0.5% Tween-20, and 1 mM PMSF). After washing, the hnRNP C1 protein was eluted from the Ni-NTA agarose beads using elution buffer (250 mM imidazole in binding buffer) for 20 min at 4 °C.
Knockdown of hnRNP C1/C2 with shRNA
To generate shRNA lentivirus, 293T cells were transfected with an shRNA-harboring plasmid (Open Biosystems) and the psPAX2 and pMD2.G helper plasmids using PEI reagent. The medium was changed after 12 h and the cells were incubated for another 24 h. The lentivirus-containing supernatants were harvested through a 0.45 µm filter. To knock down hnRNP C1/C2 expression, lentivirus-containing supernatants were added to the cells supplemented with 10 µg/ml polybrene. After 72 h of infection, RNA was extracted for RT-PCR.
Crystal structure of tert-butyl 4-[4-(4-fluorophenyl)-2-methylbut-3-yn-2-yl]piperazine-1-carboxylate
A sterically congested piperazine derivative, tert-butyl 4-[4-(4-fluorophenyl)-2-methylbut-3-yn-2-yl]piperazine-1-carboxylate, was prepared using a modified Bruylants approach. Its novel chemistry with a synthetically useful second nitrogen atom on the N-tert-butyl piperazine substructure generates a pharmacologically useful core.
The title sterically congested piperazine derivative, C20H27FN2O2, was prepared using a modified Bruylants approach. A search of the Cambridge Structural Database identified 51 compounds possessing an N-tert-butyl piperazine substructure. Of these, only 14 were asymmetrically substituted on the piperazine ring and none had a synthetically useful second nitrogen. Given the novel chemistry generating a pharmacologically useful core, determination of the crystal structure for this compound was necessary. The piperazine ring is present in a chair conformation with di-equatorial substitution. Of the two N atoms, one is sp3 hybridized while the other is sp2 hybridized. Intermolecular interactions resulting from the crystal packing patterns were investigated using Hirshfeld surface analysis and fingerprint analysis. Directional weak hydrogen-bond-like interactions (C-H···O) and C-H···π interactions, with dispersion interactions as the major source of attraction, are present in the crystal packing.
Structural commentary
The title compound, prepared from achiral reagents as a racemic mixture, crystallizes in the chiral monoclinic space group P21 with one molecule in the asymmetric unit, as shown in the Scheme and Fig. 2. No heavy atoms are present in the structure and data were collected using Mo Kα radiation. Thus, the absolute structure of the randomly chosen crystals could not be determined reliably (Parsons et al., 2013; Zhou et al., 2015). In the molecule, the N-C(=O)-O group of the carbamate exists in resonance. The bond lengths between carbon and the other atoms are listed in Table 1; the shortest of the bond lengths in the phenyl group is possibly due to the inductive effect of fluorine. The spatial distance between the extreme atoms of the propargylamine group (C7···N1) was observed to be 3.508 (3) Å, which is slightly longer than in other reported propargylamines (3.372-3.478 Å; Marvelli et al., 2004; Sidorov et al., 1999, 2000), possibly due to the open L-shaped structure of the molecule. Also, the piperazine ring adopts its most stable chair conformation, shown in Fig. 3, as evidenced by the bond angles (Table 1).

[Table 1: Selected geometric parameters (Å, °).]
contact distances. These data also suggested the absence of π-π stacking, as C···C contacts contribute 0% of the Hirshfeld surface (Fig. 6d).
Database survey
A search in the Cambridge Structural Database (Version 5.41, update of March 2020; Groom et al., 2016) for compounds possessing an N-tert-butyl piperazine substructure identified 51 compounds. These compounds were several variations of BuckyBall adducts, diketopiperazine derivatives, and ligands. There were only 14 compounds, viz. DIYWAK (McDermott et al., 2008), HEHZOL (Legnani et al., 2012), HICYID, HICYOJ (Sinha et al., 2013b), JIFHEO (Zhong et al., 2018), OFUDAW (Korotaev et al., 2012), PUYNUS (Jin & Liebscher, 2002), RIPWUJ (Bobeck et al., 2007), TILJIJ (Sinha et al., 2013a), UPIBIF, UPIBOL (Wiedner & Vedejs, 2010), UYIHOB (Chen & Cao, 2017), WANTAJ (Golubev & Krasavin, 2017), and WINMAH (Brouant & Giorgi, 1995), that were asymmetrically substituted on the piperazine ring, and none with a synthetically useful second nitrogen. All were effectively 'non-intermediate' compounds that could not reasonably serve for additional substitution at the second nitrogen, and none had alkyne substitutions. The quaternary carbon piperazines were explored by Sinha et al. (2013a,b) using an Ugi reaction; however, the present structure is the only compound containing an α,α-dimethyl carbon attached to an alkyne and an amine. This new methodology required the X-ray studies to confirm the generated structure. In summary, to the best of the authors' knowledge, there is no published crystal structure like the title compound, for a molecule containing asymmetrical substitutions on the piperazine ring, having a synthetically useful second nitrogen, and an α,α-dimethyl carbon attached to an alkyne and an amine.
Note: the aqueous extracts (pH > 10) were collected and the residual cyanide was oxidized to cyanate with sodium hypochlorite (Gerritsen & Margerum, 1990) and absence of a cyanide ion was confirmed with an MQuant 2 Koening Cyanide test indicator from EM sciences.
tert-Butyl 4-[4-(4-fluorophenyl)-2-methylbut-3-yn-2-yl]piperazine-1-carboxylate (1): A 250 mL flame-dried, round-bottom flask was cooled under argon and then charged with 1-ethynyl-4-fluorobenzene 4 (1.98 g, 16.5 mmol) in 50 mL of anhydrous THF. This solution was cooled with an external ice bath. A commercial solution of methyl magnesium bromide (5.25 mL, 16.5 mmol) (Acros, ~3.2 M in THF, assayed against anhydrous diphenylacetic acid with 2 mg of 1,10-phenanthroline as an indicator) was added dropwise over 10 minutes. The mixture was stirred at ice-bath temperature for an additional 20 minutes, which resulted in a pale-yellow solution. A solution of tert-butyl 4-(2-cyanopropan-2-yl)piperazine-1-carboxylate 3 (Firth et al., 2016) (2.33 g, 9.2 mmol) in 25 mL THF was added dropwise to this mixture over 10 minutes; the internal temperature was maintained between 274 and 275.3 K. This deep-yellow solution was allowed to stir while the external ice bath slowly melted and warmed to room temperature, and progress was monitored by TLC (Rf of product 0.6; 1:1 hexane:ethyl acetate; SiO2 plates; short-wave UV and I2 visualization). Following stirring for 12 h at 296 K, the crude reaction mixture was cooled to ice-bath temperature and the reaction was quenched by the addition of 10 mL of ice-cold water at a rate that maintained the internal temperature below 278 K. After quenching the organo-base, an additional 50 mL of water was added. Small aliquots of brine and ethanol were used, as required, to break the emulsion in the following extraction. This mixture was extracted with 3 × 20 mL of ethyl acetate, washed (3 × 10 mL H2O, 3 × 10 mL brine), dried (Na2SO4), decanted, and the solvent removed under reduced pressure to afford 30.6 g of a yellow solid. This was separated on 50 g of SiO2 with hexane/ethyl acetate (1/1) as the eluent to yield tert-butyl 4-[4-(4-fluorophenyl)-2-methylbut-3-yn-2-yl]piperazine-1-carboxylate.

[Table footnote: Computer programs: SMART and SAINT (Bruker, 1998), SHELXS97 (Sheldrick, 2008), SHELXL2018/3 (Sheldrick, 2015) and CrystalMaker (Palmer, 2014).]
Refinement
Crystal data, data collection, and structure refinement details are summarized in Table 3. H atoms were localized in a difference-Fourier map. C-bound H atoms were treated as riding, with C-H = 0.93, 0.96 or 0.97 Å, and with Uiso(H) = 1.2Ueq(C) for aromatic and 1.5Ueq(C) for methyl groups.

[Refinement details (fragment): where P = (Fo2 + 2Fc2)/3; (Δ/σ)max < 0.001; Δρmax = 0.12 e Å-3; Δρmin = -0.11 e Å-3.]

Special details. Geometry: all esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.
Crystal data
Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å2)
Challenges and complexities in designing cluster headache prevention clinical trials: A narrative review
Abstract Objective To provide a review of challenges in clinical trials for the preventive treatment of cluster headache (CH) and highlight considerations for future studies. Background Current guidelines for preventive treatment of CH are largely based on off‐label therapies supported by a limited number of small randomized controlled trials. Guidelines for clinical trial design for CH treatments from the International Headache Society were last issued in 1995. Methods/Results Randomized controlled clinical trials were identified in the European and/or United States clinical trial registries with a search term of “cluster headache,” and manually reviewed. Cumulatively, there were 27 unique placebo‐controlled prevention trials for episodic and/or chronic CH, of which 12 were either ongoing, not yet recruiting, or the status was unknown. Of the remaining 15 trials, 5 were terminated early and 7 of the 10 completed trials enrolled fewer patients than planned or did not report the planned sample size. A systematic search of PubMed was also utilized to identify published manuscripts reporting results from placebo‐controlled preventive trials of CH. This search yielded 16 publications, of which 7 were registered. Through critical review of trial data and published manuscripts, challenges and complexities encountered in clinical trials for the preventive treatment of CH were identified. For example, the excruciating pain associated with CH demands a suitably limited baseline duration, rapid treatment efficacy onset, and poses a specific issue regarding duration of investigational treatment period and length of exposure to placebo. In episodic CH, spontaneous remission as part of natural history, and the unpredictability and irregularity of cluster periods across patients present additional key challenges. Conclusions Optimal CH trial design should balance sound methodology to demonstrate efficacy of a potential treatment with patient needs and the natural history of the disease, including unique outcome measures and endpoint timings for chronic versus episodic CH.
INTRODUCTION
The 1-year cluster headache (CH) prevalence (53 per 100,000) 1 is similar to other major disabling neurological disorders, such as multiple sclerosis (21 per 100,000) 2 and Parkinson's disease (106 per 100,000). 2 Episodic cluster headache (ECH) is characterized by an average of 1 to 2 cluster periods per year with a mean cluster period duration of 4 to 9 weeks. 3-10 A circannual periodicity is delineated by periods of remission 5 ranging from 3 months up to a period of years ( Figure 1). 11,12 Chronic cluster headache (CCH) is characterized by active cluster cycles lasting anywhere between 1 and 10 years 8,11 with brief (<3 months) or no remission periods ( Figure 1). 11 While patients with CCH may not experience remissions, they may report a circannual pattern of lessening and worsening of attack frequency. 5 Cluster headache has a substantial impact on quality of life with high levels of associated disability and frequent suicidal ideation. [13][14][15][16][17][18][19][20][21] Considering both the debilitating clinical symptoms and the burden to quality of life, there remains a large unmet need for additional therapeutic options.
The excruciating pain and cranial autonomic symptoms, often occurring with a circadian and circannual rhythm, have been linked to activation of the trigeminovascular and cranial parasympathetic systems and the hypothalamus. 12,22,23 This activation is associated with a release of neuropeptides: calcitonin gene-related peptide (CGRP), vasoactive intestinal peptide (VIP), and pituitary adenylate cyclase-activating peptide (PACAP38). 12,22,24,25 Intravenous infusion of CGRP, 22 VIP, 25 and PACAP38 25 can induce CH attacks. Interestingly, the attack induction rate after CGRP infusion is lower in CCH patients (50%) compared to ECH patients (89%) suggesting there may be subtle pathophysiological differences between subtypes. 22 Based on retrospective reports of attack frequency in the month prior to CGRP infusion, it was postulated that attack frequency in CCH may signal a susceptibility threshold to CGRP attacks, with higher attack frequency associated with increased susceptibility to CGRP provocation. 22 However, the authors cautioned these data should be interpreted in light of the acknowledged limitations. 22 Additional evidence suggesting subtle pathophysiological differences between patients with ECH and CCH include differences in response to the same treatment, as seen in examples from clinical trials to date with lithium 26,27 (efficacious in CCH but not ECH) and galcanezumab 28,29 (efficacious in ECH but not CCH) in preventive treatment, as well as non-invasive vagus nerve stimulation for acute treatment (efficacious in ECH but not CCH). 30,31 However, some CH treatments, particularly acute treatments such as subcutaneous and intranasal triptans and oxygen are efficacious in both ECH and CCH, [32][33][34][35][36] although some studies have reported differences in the magnitude of response. 32,33,35 Treatments to interrupt cluster periods or reduce the frequency of attacks (i.e., preventive treatment) are generally based on recommendations from treatment guidelines. 37,38 However, these guidelines are based on a small number of randomized controlled trials (RCTs) supplemented with data from uncontrolled trials. 37,38 A lack of RCTs has resulted in a limited selection of medications approved for CH prevention, which has led to off-label prescription of agents with limited efficacy evidence. 39 Table 1 lists a summary of current trial design recommendations in the International Headache Society (IHS) guidelines for controlled trials of preventive drugs in CH. 40 Currently there are no CH preventive treatments approved by the European Medicines Agency; some locally approved preventive treatments vary by country and primarily include lithium and pizotifen. In the United States, only galcanezumab has been approved for the treatment of ECH. 41 With this scenario in mind, we undertook this review to provide an overview of challenges and complexities encountered in clinical trials for the preventive treatment of CH and highlight considerations for future studies.
METHODS
Prevention trials for CH were identified via two methods: (1) a search of the European 42 and/or US clinical trial registries 43; and (2) a PubMed database search. As of September 2021, the search term "cluster headache" returned 27 unique results in the European clinical trial registry, 42 from which 13 randomized trials were identified.

KEYWORDS: chronic cluster headache, clinical trial design, episodic cluster headache

[Figure 1: Depiction of the International Classification of Headache Disorders, 3rd edition (ICHD-3) criteria from the International Headache Society for episodic cluster headache (ECH) and chronic cluster headache (CCH). (A) According to ICHD-3 criteria, ECH is defined as at least 2 cluster periods with a duration of 7 days to 1 year per period, with a remission period of at least 3 months between cluster periods. (B) According to the ICHD-3 criteria, CCH is defined as attacks occurring for at least a year, with no remission period or remission periods of less than 3 months.]
Challenges and complexities in the design of RCTs for prevention of attacks in cluster headache
Guidelines and recommendations
Cluster period characteristics in ECH
The episodic nature of attacks, spontaneous remission, variation in attack frequency, and typical cluster period duration all complicate the design of prevention trials in ECH.

[Table 4 fragment] Considerations for RCT design:
• Limit prospective baseline periods to minimal duration as noted above
• Limit the length of efficacy assessments to the minimal time needed based on the expected onset of action for the investigative treatment
• Enroll patients with a consistent ECH episode duration that is of sufficient length to exceed the key efficacy endpoints and who have a good response to the allowed acute CH treatments

[Table 4 fragment] Justification:
• CCH: to minimize the potential for spontaneous remission (although it is much less common for patients with CCH)
• ECH: to minimize the potential for spontaneous remission during assessment of the primary and key secondary outcomes; to minimize the time patients spend exposed to placebo or an ineffective treatment; and to maximize the number of enrolled patients who will experience an active bout during the clinical trial period

However, it cannot be guaranteed that patients will not use these types of excluded substances, which may not be detectable in a urine drug screen. If a sponsor or investigator feels compelled to allow these substances, consideration should be given to the suggestions outlined in Table 4 for concomitant preventive therapies.
The appropriate comparator for a new investigational treatment in an RCT may be placebo, standard-of-care, or both. Some trials have allowed concomitant preventive therapies (primarily CCH and mixed ECH/CCH studies), the most successful of which included oral or injectable steroids given as add-ons to a concurrent preventive treatment or verapamil. 45,52,55 Other trials permitting the use of non-steroid concomitant preventive treatments have failed to meet their primary endpoint. 28,49,50,60 Whether the concomitant preventive treatment contributed to the failure of these studies to meet their primary endpoint is difficult to ascertain.
Other potential reasons for failure, such as treatment duration, dosage or dosing frequency, or incorrect method of administration, are equally plausible. 28,49,50,60 For ECH, we believe concomitant preventive therapies should not be considered in an RCT; this is possible with an appropriate trial design that allows acute treatments and limits time on placebo.
For CCH, concomitant preventives should be allowed, provided patients have been on a stable dose prior to enrollment and the dose is maintained for the double-blind study period. Corticosteroids or interventional procedures (e.g., occipital or trigeminal nerve blocks) should not be allowed.
Study site selection
Guidelines recommend conducting studies at multiple centers to increase the population size and ensure the study is appropriately powered. 40 Using headache centers as study sites, with headache specialists on staff, ensures study quality and appropriate patient selection. [27][28][29][45][46][47]50,52,54,55 However, exclusively using headache centers may limit the number of eligible patients and challenge the feasibility of completing the trial. If non-headache centers were included, verification of the CH diagnosis (and any other comorbid headache conditions) may be accomplished by implementing third-party confirmation with a headache specialist. Electronic medical records may make it easier to utilize non-headache centers, as they aid in quickly and accurately identifying patients with a documented diagnosis of CH (assuming records have been coded correctly). Therefore, site eligibility should be based on the number of active patients with CH at that site (e.g., seen within ≤2 years), preferably after outreach to patients to determine interest in a clinical trial. This method is currently utilized in many headache clinics. Furthermore, if clinicians work in conjunction with CH support and advocacy groups (e.g., Clusterbusters, OUCH UK, American Migraine Foundation), there is a possibility of increasing patient awareness of available clinical trials and improving recruitment and enrollment, particularly if organized and/or co-chaired by CH support groups or patient advocacy organizations.
[Table 4 (continued)] Category: Statistical considerations (CCH and ECH).

Considerations for RCT design:
• Statistical methods to assess efficacy should be based on the ability to accommodate missing data while still achieving an accurate estimate (e.g., mixed model with repeated measures)
• Reduction in attack counts, reported as the percentage of patients meeting a defined response threshold (≥x%), can be estimated for each treatment using a categorical, pseudo-likelihood-based repeated measures analysis of longitudinal binary outcomes
• Confounding factors must be accounted for in all analyses (e.g., sex, baseline attack frequency, length of current bout, history of treatment responsiveness, concomitant medication use)

Justification:
• Low diary compliance or noncompleters may contribute to a smaller sample size than intended
• These methods can account for a smaller than intended sample size by including partial data
• Confounding factors may have an impact on treatment efficacy and thus should be accounted for in all analyses

Abbreviations: CCH, chronic cluster headache; CH, cluster headache; ECH, episodic cluster headache; IHS, International Headache Society; RCT, randomized controlled trial.
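To make the first consideration above concrete, the following is a minimal, hypothetical sketch of a mixed model for repeated measures applied to weekly attack-count diary data. It is illustrative only: the column names, simulated data, and model formula are assumptions for this example and are not taken from any trial described in this review.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical weekly diary data: one row per patient per week.
rng = np.random.default_rng(0)
n_patients, n_weeks = 60, 3
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_weeks),
    "week": np.tile(np.arange(1, n_weeks + 1), n_patients),
    "treatment": np.repeat(rng.integers(0, 2, n_patients), n_weeks),
    "baseline_attacks": np.repeat(rng.poisson(14, n_patients), n_weeks),
})
df["weekly_attacks"] = rng.poisson(
    np.maximum(df["baseline_attacks"] - 3 * df["treatment"] * df["week"] / n_weeks, 1)
)

# Mixed model for repeated measures: random intercept per patient,
# adjusting for one of the confounders listed above (baseline attack frequency).
model = smf.mixedlm(
    "weekly_attacks ~ treatment * week + baseline_attacks",
    data=df,
    groups=df["patient_id"],
)
result = model.fit()
print(result.summary())
```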
Incorporation of a baseline period

As shown in Tables 2 and 3.
However, this study failed to meet its primary endpoint due to the absence of a significant difference between active and placebo groups when attack frequency during week 3 of treatment was compared to the pseudo-baseline period. 48 Another option is a retrospective baseline.

We suggest a primary efficacy outcome of active cluster period termination or reduced attack frequency for ECH. This outcome should be evaluated within 2 to 3 weeks of treatment. Rapid onset of treatment effect is essential for ECH. Thus, we believe this timing is a compromise between allowing some time for an intervention to be effective and not waiting so long that the utility and value of a treatment for patients with ECH is called into question. Slightly different outcomes will likely be needed in the case of CCH; here we suggest a reduction of attacks over a period of weeks, in association with persistence of the effect over longer periods.
Secondary outcomes
Secondary efficacy outcomes in CH prevention trials include patient or clinician perception of improvement, pain severity and/or duration, acute treatment use, the proportion of patients considered responders (e.g., ≥50% reduction in attack frequency), and remission (Table 3). While patient or clinician perception of improvement (e.g., Patient Global Impression of Improvement) is widely accepted as a useful outcome, there is no consensus on the optimal timing or frequency of its assessment; we would suggest the same timepoint as the primary efficacy parameter, within 2 to 3 weeks of treatment onset. Assessing improvements in pain severity or duration is complicated by the necessity of allowing acute treatments that might reduce pain severity and attack duration, a factor that clearly complicates accurately measuring this outcome in prevention trials.
The restricted 5-point (0-4) scale, 73 commonly used to assess pain severity in CH RCTs, makes it difficult to interpret average reductions from the standpoint of being clinically meaningful. Endpoints related to changes in acute medication use have the potential to be unreliable because of between-patient heterogeneity in attack frequency; however, if assessed within patients, this concern may be alleviated. The reliability of the measures seems higher intraindividually, as CH patients seem to be able to perceive clearly and report when an acute medication is more or less effective on their attacks in routine practice. However, it must be noted that patients with CH often use a variety of acute treatments for pain relief including treatments which may treat a less intense headache (e.g., non-steroidal anti-inflammatory drugs). Targeting medications or treatments used specifically for acute CH treatment, such as subcutaneous or intranasal triptans or oxygen, may provide a better picture of treatment efficacy. If not the designated primary outcome, response rates are an important secondary outcome, and as discussed in the primary outcome section, more than one response rate may be considered.
There is no expert consensus on a standard definition for remission, but we would suggest a 7-day period free of cluster attacks.
Sleep disruption, quality of life, and psychological/psychosocial outcomes are also assessed as secondary outcomes and are appropriate given the high disease burden.
CONCLUSIONS
This report highlights challenges and potential considerations for the design of future clinical trials of preventive treatments for CH.
CONFLICT OF INTEREST
DWD reports the following conflicts within the past 12 months:
Pressed for Space: The Effects of Justification and the Printing Process on Fifteenth-Century Orthography
ABSTRACT There is a long-held belief that, prior to the standardisation of written English, printers altered spellings to justify their type. I investigate this claim through an analysis of spelling changes in William Caxton’s two editions of the Canterbury Tales—by examining text within one book, written by one author and set by one compositor, the only difference between the sections of verse and the sections of prose should be the requirement for justification within the latter. Were the compositors altering spellings to justify their type, we would expect to see a greater number of altered spellings in the prose sections of text. This is not what the results of this study show—instead there is no statistically significant difference between the frequency of spelling changes in justified and non-justified text. However, there is a significantly higher number of abbreviations introduced into the justified text. These results suggest that the compositor of Caxton’s second edition Canterbury Tales did not change spellings to justify his type.
The Printing Process
Caxton was the first to introduce moveable type and the printing press to England in 1476. Throughout the process of preparing a new book, the printing house needed a copy text to work from. The copy text was marked up by the master compositor, who was responsible for preparing the copy text for printing. At this point, any changes to the previous edition were incorporated into the copy text. For this study I used Caxton's two editions of Chaucer's Canterbury Tales 8 and Pynson's edition of Caxton's Reynard the Fox. 9 Caxton's first edition of the Canterbury Tales (Cx1) was used as the copy text for his second edition (Cx2), with alterations made in line with an unidentified manuscript copy. 10 The master compositor would have marked up a copy of Cx1, adding corrections from the manuscript. Most of the text in Cx2 is the same as Cx1, though 277 lines were added and 89 removed. Changes to spelling were not likely to have been made while the copy text was with the master compositor; instead, it is more likely that spellings were altered by the compositor when setting the type. 11 Once the master compositor had made any changes to the text itself, the copy text was marked up to show the end of each page in the new edition-a process known as casting off. Casting off was necessary for two reasons. Firstly, because of the way the paper would be folded after printing, compositors did not set pages in the order that they would appear in the final copy. The compositors would have printed pages 1 and 8 on the same side of a quarto sheet, and 2 and 7 on the other side. The printers did not have enough type to keep eight pages standing, so it was necessary for the compositor to know where page 7 ended, in order that page 8 could be set immediately after page 1. And here I differentiate between type-the small metal blocks that the compositors set out-and text-the ink impression of the type on the page. The second reason to cast off the copy text is that printing houses usually employed more than one compositor. When setting Cx2, Caxton employed at least two compositors, 12 if not three. 13 Compositors needed to know where their sections were due to end and begin in order to work effectively together.
If the text being copied was verse, then casting off was a fairly straightforward process. Caxton's two editions of the Canterbury Tales include sections of verse and prose. Chaucer's lines of verse are not long enough to reach the right-hand margin. Therefore the master compositor could simply count the number of lines in the copy text and mark where each page would end in the new copy. For example, in Cx1 the Nun's Priest's Tale has twenty-nine lines of verse per page; in Cx2 there are thirty-eight lines of verse per page. The master compositor worked out the length of Cx1 divided into pages of thirty-eight lines each, to get the total number of pages needed. Setting prose was more complicated than setting verse. The lineation would change unless the type and page size of the new edition remained the same as that of the copy text. In Caxton's case, the type changed from Type 2 14 in Cx1 to the similarly styled but smaller Type 4 in Cx2. Caxton's master compositor would have had to work out the size of a page's worth of Type 4 and mark this on the copy text.
Once the copy text has been amended and cast off, the compositors began setting out the type. The master compositor would have specified the size of the printing area for the new edition, so the compositor set his composing stick-the thin rule on which he sets the type-to the width of the printing area. The compositor set out the type as it is on the copy text, revisions included. If setting verse, or text where the type is not aligned to both margins, then the compositor's line is filled using spacing-type that takes up space but that does not leave an impression during printing. Printers had spacing of different widths to ensure that each line was tightly filled with type. This process of fitting the type into a rectangle the shape of the printing area is called justification. When setting prose, the process of justification became more difficult, as I shall discuss below.

After the compositor has set out a page of type, this was tightly pinned together in a chase-the frame used to hold the type together for printing. The page of type was moved onto the press. Once a page was set, the compositor printed off a copy for error-checking by the corrector. The corrector read the new copy in front of him while the original copy text was read aloud. The corrector amended the proof accordingly, and any changes were made by the compositor. The corrector rarely made changes to spelling, however. 15 If the corrector was happy with the proof, the type was given to the pressmen, who began printing copies for the new edition. The compositor began to set the next page and the process started again.

10 Hellinga and Painter; Bordalejo, "Caxton's Second Canterbury Tales." 11 Bordalejo, "Caxton's Editing of the Canterbury Tales." 12 Hellinga and Painter. 13 Bordalejo, "Caxton's Second Canterbury Tales." 14 The naming of Caxton's typefaces in this paper follows that used in Blades.
Spelling and the Printing Press
Now we understand the processes involved in printing, we can consider which aspects of this process could introduce spelling changes into the new copy. Within the printing process there are two main factors that could cause compositors to alter spellings when setting type: (1) Not having enough type (2) The requirement for justification The first point covers a range of problems, including the possibility of the printer having too few spaces, too few abbreviations/punctuation marks, or the wrong ratios of letters for the language he is printing in. However, printers were not usually inconvenienced by these problems, and when they did occur they were able to manoeuvre around them. When Caxton began printing, there was no one casting and selling type in England. He was, therefore, obliged to buy his type from the Low Countries-where he learned to print and made connections within the book trade during his time as a mercer and merchant adventurer. Founts of type-a whole set of one typeface, including all the letters, punctuation and abbreviation marks-were made up of numbers of each letter relative to their use in the language they were intended for. 16 Printers intentionally bought founts of type that only just had enough letters to print a couple of pages at once. 17 Therefore, although this practice meant that the printers could not keep many pages in standing type ready to print, they would have had enough type to print two pages at any one time.
The printers, then, were unlikely to have too little type for their needs. But if they did, when the most used letters began to wear out, then they would work with the amount of type that they had, and borrow from other typefaces as necessary. For example, when printing Parliament of Fowls Caxton's compositor was short of capital T in Type 2 and borrowed from Type 3. 18 I have demonstrated that a lack of type is unlikely to cause compositors to change spellings. However, the necessity of justification may have caused compositors to alter the length of words by changing spellings. This requirement that the type fits tightly into a rectangle the shape of the printing area is one of the chief ways in which printing differs from copying a text by hand. Scholars have often suggested that printers altered spellings in order to justify their texts.

15 Hellinga, Texts in Transit. 16 Febvre. 17 Ibid., 59. 18 Painter, 95.
Justification
Justification has a dual definition within book studies, one a physical requirement and the other a visual effect. We have already discussed the physical requirement in Section 1, namely that the type must be made to fit exactly the width of the printed area. The visual effect is what we see in Figure 1, above.
We can see that in Figure 1 19 the text forms a rectangle that reflects the shape of the type used to print it. In Figure 2, below, the text is aligned to the left margin but not to the right. Though the text stops midway across the printed area, the right-hand side of each line will be filled with spaces in order that the type still exactly fills the chase. In this way, a text isn't necessarily visually justified, as in Figure 2, but the type is always physically justified during the printing process. There are several things that can go wrong if the type is not properly justified, most of which involve the type coming apart. The first problem would occur when transferring the type onto the press. After the compositor has set out a page of type, this is tightly pinned together in a chase. At this point, the type needs to be transferred onto the press. If the type is not tightly justified, then it will fall out and the compositor will have to set the page again. Compositors were paid a wage on the basis of their setting a particular number of pages a day, 20 and so it would not be desirable to spend time setting the same page twice. If the compositor did manage to get a loosely justified piece of text onto the press, more problems could occur during printing. The ink used by printers was sticky. Ink was pressed onto the type with inking balls-balls of sheep's wool covered in animal skin. Because of the stickiness of the ink, any loose type could be pulled out during inking. The result is "fallen type": a piece of type that has fallen between the rest of the type and the paper. It causes the paper to tent slightly, leaving an impression of the piece of type surrounded by a loose halo in the final copy. Fallen type is rare in extant copies. Owing to the number of pages that a compositor was expected to set in a day in order to receive payment, most compositors probably made sure that their work was tightly justified. When setting prose, this meant not only the effort of physically justifying the text, but the extra effort involved in visually justifying it, too.

20 Gaskell, 54.
It is unlikely that one line of type would justify visually without some alteration. Printers used spacing of different widths to fill any surplus space, to ensure the type is tightly wedged into the chase. Where one space would not fill the gap, a combination of spaces was used. On the second line of Figure 3, 21 we can see a rising space on the line which shows how extra spacing would be inserted between words. Between the words that and man, two spaces have been inserted. However, the type has not been justified tightly enough and so one of the two spaces between the words has risen up to make an impression on the page. The printer would not choose to use two spaces unless necessary: for each fount of type there would have been cast one size of space that was considered the ideal width between words. This ideal space would be used between words unless the compositor needed to alter the spacing when the line did not justify. In Figure 3, either the original space was doubled, or the original was taken out and replaced by two thinner spaces which together were thicker than the original. This process of swapping the different spaces and combining them so as to create a series of spaces of different widths would have been repeated throughout the text.
What is of particular interest is where this additional spacing is used. In text that aligns only to the left, any extra spacing is inserted to the right of the line (cf. Figure 2, above) and the spacing between words remains the same. The extra spacing ensures that the type as a whole fits perfectly into the rectangle of the chase. However, where the text is visually justified, the spacing that would otherwise have been inserted to the right must be evenly distributed through the line, so that the text appears as a rectangle on the page, as in Figure 1, above. One way to do this is through increasing the spacing between words. This is a delicate operation-the spacing needs to be expanded enough so that the line fills the width of the printed area, but not so much as to create noticeably wide gaps between words. This procedure makes justification harder to achieve in text that aligns both to the right and left, and scholarly speculation suggests that printers altered spelling in order to achieve visual justification:

So long as such spelling variants were acceptable in printing, compositors used them as an aid to justification. Thus our man might set "doe" according to his usual practice and then, finding that his line was … too long for the measure, change "doe" to "do" by discarding the e, rather than go to the greater trouble of throwing out spaces and finding thinner ones. 22

The actual sequence of typesetting dictated the points at which the compositors would have to conform to limits of space … the compositor could change spelling or vocabulary, but he could also add or omit text. 23

There were also numerous ways in which a scribe could reduce or expand his language, and many of these ways were available to the compositor as well. The most common was to use or alternatively to expand abbreviations. … It was possible to vary the spelling of words in many languages so that they become longer or shorter. In English the addition or omission of final -e and the spelling of words ending in a single consonant with a double consonant with e to give the variants ship : shippe are well known. 24

There is, however, no empirical investigation of whether fifteenth-century printers did alter spellings for the sake of justifying their texts. It seems likely that scholarly opinion has been influenced by John Hart's statement in 1569, in which he argued that spellings deviated from the copy text "onely to fill vp the paper in writing : or the Compositors line in printing : to make a garnishing or furnishing therof with superfluous letters". 25 Hart's supposition might not have been correct-instead of changing spellings to fit type on the composing stick, the changes could result from the compositor introducing his own spellings into the copy. Yet, Hart's assertions have been used to argue that compositors altered spellings for the sake of justification in the fifteenth century. 26 In the following section, I explain how I investigated the question of whether compositorial spelling changes were made to justify their lines, or were representative of normal spelling variation in English at this time.
Method of Analysis
The difficulty in this research has been differentiating among types of orthographic change-this study aims to examine any changes the compositors made to make their type fit on their composing stick. Though the standardisation of written English was already well underway by the time Caxton began printing in England, a great deal of variability was still permitted in spelling. These variable spellings needed to be differentiated from those the compositor introduced intentionally to justify type. A further complication is the effect that the copy text can have on the language of the copy. It has long been accepted that when hand-copying a text, scribes were most likely to produce a copy that was a mixture of the spellings of the copy text and the scribe's own forms. 27 So the spellings in Caxton's second edition of the Canterbury Tales (Cx2) are a combination of the spellings in the copy text, the compositor's own spellings (whether introduced intentionally or otherwise) and any spellings the compositor changed to justify the type. Caxton's second edition of the Canterbury Tales is ideal for this study because it enables us to separate these three different sets of spellings from one another, so that we can focus our attention on spellings changed for the sake of justification.
We first need to identify the spellings that have been influenced by the copy text. The original Canterbury Tales was written by one author-Geoffrey Chaucer. It is important to examine texts written by a single author because Caxton had different editing practices depending upon the author of the text in question. Simon Horobin demonstrates that Caxton's attitude when printing Chaucer's work is similar to his approach to Gower-Caxton retains the dialectal features most associated with the author. 28 In addition to the text having been written by one author, we also know that Cx1 was used as the copy text in creating Cx2. This is important because the copy text could have a great impact on the language of the new copy. By looking at Cx2 in conjunction with its copy text, I analysed only spellings that were changed in the second edition, that is, spellings that were not the same as in Cx1. These spellings, if not influenced by the copy text, must have been changed either because they are the compositor's own spellings, or because the compositor needed to change them in order to justify the type.

23 Hellinga, "Hands of Printers," 5. 24 Blake, "Manuscript to Print," 409. 25 Hart, 15r. 26 Salmon, 19. 27 Benskin and Laing, 56.
The copy text for Cx2 has been identified as Cx1 with corrections from a manuscript that is no longer extant. 29 Caxton claims in the prologue for Cx2 that he is issuing a new edition because a "gentylman" told him that his first edition was faulty, and that he could provide a better edition for Caxton to reprint. 30 Though it has been suggested that this was just an excuse to print a new edition complete with new woodcuts, 31 Caxton does appear to have inserted alterations from an unknown manuscript source onto Cx1, which was used as the base text. 32 Though the copy text contains additions from the manuscript, this does not cause problems for this study. I compared the spellings in Cx1 with their exact counterparts in Cx2. Any additions made to Cx2 from the manuscript could not have been examined, because they did not have a counterpart in Cx1. Furthermore, the spellings from Cx1 are unlikely to have been changed to match the manuscript; Barbara Bordalejo explains that, though many significant changes have been made to all the Tales in Cx2, spellings changed in Cx2 are likely to have been introduced by the compositor. 33 Now that we have found a way to exclude spellings that were influenced by the copy text, we need to separate the compositor's preferred spellings from those he changed to justify the type. To separate these two types of spellings, we need to compare sections of text that have been visually justified with text that has not, and be satisfied that both sections have been set by the same compositor. To do this, I compared sections of verse with sections of prose. In the verse Tales, the text never comes close to the right-hand margin, and so cannot be visually justified. This is because Chaucer's verse lines are short, at least relative to the width of Caxton's page. The prose sections are entirely visually justified, however. Therefore, any differences between the prose and verse, such as a change in the frequency of spelling changes or an increase in abbreviation rates, should be a result of the requirement of justification in the prose. In this way, the verse acts as a baseline measure for the amount of variation we would expect to find in Caxton's prints at this time. By comparing the types of spelling changes that occur in the prose with those that occur in the verse, I aim to isolate the changes that the compositor made to justify his type.

28 Horobin. 29 Bordalejo, "Caxton's Editing"; Bordalejo, "Caxton's Second Canterbury Tales"; Blake, "Caxton and Chaucer."

Finally, the Tales selected for analysis were set by the same compositor: the Parson's Tale and the Tale of Melibee-both prose-and the Nun's Priest's Tale and the Manciple's
Finally, the Tales selected for analysis were set by the same compositor: the Parson's Tale and the Tale of Melibee-both prose-and the Nun's Priest's Tale and the Manciple's Prologue and Tale-all verse. Lotte Hellinga and Bordalejo claim independently that these Tales were set by one compositor. 34 The sample size from each tale is about twenty-five hundred words-the length of the Manciple's Prologue and Manciple's Tale combined. Because the prose and verse sections in Cx2 were both set by the same compositor, we would expect any habitual changes to spellings to occur in both prose and verse. For example, the compositor hardly ever changed y to i but frequently changed i to y (697 examples out of a corpus of 1,637 total changes). The y variant appears to be the compositor's preferred spelling, and of the 697 changes, 376 of them appear in the verse and 321 in the prose. This split is quite even (53.9% to 46.1%), and suggests that because the changes were made in the verse as well as the prose-that is, these changes were made regardless of the requirement for justification-the changes were made because they were the compositor's usual spellings. We would expect changes that were made for the sake of justification to appear mainly in the prose. Now that we are looking at text within one book, written by one author and set by one compositor, the only difference between the sections of verse and the sections of prose should be the requirement for justification within the latter. When compiling the record of spelling changes between the two editions, I recorded whether the new spelling would have taken up more or less space on the compositor's line. This is an important distinction. In order for a printer to justify type by changing spellings, it only makes sense to change the spellings to ones that take up more or less space on the compositor's stick, and therefore wedge the type more tightly into the chase. Any changes that did not alter the space taken up by the type would not aid in justification, and were either one of the compositor's own spellings, or a mistake. It is important, then, to record whether the new word took up more or less space than the original. 28 Horobin. 29 Bordalejo, "Caxton's Editing"; Bordalejo, "Caxton's Second Canterbury Tales"; Blake, "Caxton and Chaucer."
The result of this investigation is a database of spelling changes that occurred between the first and second editions of Caxton's Canterbury Tales, broken down into text type-whether prose or poetry-and the type of spelling change-whether the new spellings took up more, less or the same amount of space on the compositor's line as the originals. It was this database that I analysed to investigate whether printers changed spellings to justify their type.
Pynson's Reynard
Justifying type is difficult because extra space must be redistributed between words. When this process is complete, the line needs to look neat without the spacing being either noticeably wide or narrow. In the prose of Cx2 this is not an unduly difficult task-the lines have an average length of fifteen words, so there are fourteen gaps in which to redistribute spacing. However, for other texts, justification was made harder by far shorter lines. Richard Pynson's 1494 print of Reynard the Fox 35 is one such example. The text is printed in two columns with an average line length of only six words. To justify Pynson's lines, any excess spacing can be spread only across the five spaces between words, potentially leaving wide gaps in the line. Compared with the average fifteen words a line in Cx2, this makes Pynson's task of justifying the text far harder. An excess of spacing would be very noticeable on a shorter line, making justification more difficult (see Figure 4, below, compared with Figure 5).
We would expect that if Caxton's compositor changed spellings in Cx2 in order to justify his type, then Pynson's compositor would alter spellings with greater frequency because of the increased difficulty in justifying Reynard the Fox. To investigate whether this is the case, I compared Pynson's Reynard with its copy text-Caxton's translation and first edition of Reynard 36 -identified by Norman Blake. 37 Adding Pynson's Reynard to the analysis fulfils several roles. Firstly, because of the shorter lines on Pynson's two-column layout, we can see that use of justification methods increases when the compositors have less space. Secondly, it enables us to see that the compositors of at least two printing houses-Caxton and Pynson-used the same methods of justification. Finally, Pynson's Reynard was printed later than Caxton's Canterbury Tales text-1494 to Caxton's 1477 and 1483-therefore covering the majority of the printing period in England in the fifteenth century. 38 This shows us that printers did not change to different justification methods within the fifteenth century.
Spellings were not Changed to Justify Type
My results suggest that Caxton's compositor did not change spellings in order to justify his type. This finding was contrary to my expectations on setting out. I had expected that the printers would alter the length of the line by changing spellings for longer or shorter variants. Vivian Salmon states that printers were more likely to use shorter spellings when altering a text: avoiding doubling letters and final -e. 39 Gaskell also says that printers preferred to remove letters, rather than add them. 40 We would have expected a higher proportion of uses of had over hadde, for example, in the printed texts, in comparison with a scribal copy from the same time. However, there is an almost equal distribution of spelling changes that shorten and lengthen the words in question (245 words lengthened by adding letters; 249 words shortened). Within these numbers, of the 245 lengthened words, 172 are lengthened through adding final -e; of the 249 shortened words, 142 are shortened by deleting final -e. The compositor did not appear to move towards actively adding or removing final -e from the text, as Salmon suggested he might.
Frequency of Changes
The frequencies of change support the conclusion that the compositors under examination did not change spellings to justify type. The frequencies of change do not differ notably between justified and non-justified text. In order for a printer to justify type by changing spellings, it only makes sense to change the spellings to ones that take up more or less space on the compositor's stick. Therefore, it was necessary to classify spelling change by the spacing that the new spelling took up. I classified three types of spelling change: addition, replacement, reduction. Addition is when the word in the second edition was respelled with more letters than in the first edition and therefore took up more space on the compositor's line; replacement comprises the set of words whose spelling is altered, but the length of the word remains the same; reduction is when the second edition features a shorter word than the first. We can see the distribution of these types of change in Table 1, below. The frequencies, whether considered together or broken down into these three categories, do not suggest that there was any great difference between the prose and the verse texts. In fact, a greater number of spellings was changed in the verse than in the prose. As the verse acts as a baseline measure for the variation without justification, the fact that the prose has a similar distribution of locations and similar frequencies of change supports my hypothesis that the prose is not altered for the purposes of justification.

Additionally, there was no patterning as to where the spelling changes occurred on the page. Philip Gaskell stated that compositors were more likely to change spellings towards the ends of lines, or towards the ends of pages when they realised that they were running out of space. 41 This tendency would make sense: it is easier for the compositor to change words that are near the rightmost side of the line, purely because of the difficulty in getting letters in and out that are already sandwiched between type. Therefore, words which have been changed at the end of a line are also perhaps more likely to be candidates for change for justification. This is not what I have found in this study, however. Instead, I found that in both the prose and the verse, there was no statistically significant result as to where on the page the spelling changes occurred. Figure 6 represents the cumulative distribution of the location on the page of spelling changes in the prose samples of Cx2. Here, the x-axis represents the distance across the page that spelling changes occurred, and the y-axis shows the line number of the page-each page I analysed in Cx2 had thirty-eight lines of type. The area of the graph represents the printable area on Caxton's page. Each circle represents one spelling change within the prose, located by its line number and its distance across the width of the page. These are the locations of all the spelling changes I observed in Cx2 superimposed onto one graph. We can see that the spelling changes are distributed across all areas of the page. Had the compositors been altering spellings towards the ends of the lines, as Salmon suggested, we would see a far stronger clustering of changes along the far right of the page, or in this case, the graph. As it is, the spelling changes are distributed evenly across the page both in terms of how far across they occur, and how far down the page they occur. 38 Caxton opened shop in Westminster in 1476, and died in 1491. 39 Salmon, 19. 40 Gaskell, 349.
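The study's criterion was the physical space a respelling occupies on the compositor's line; the short sketch below is hypothetical and not part of the original study, and it approximates that criterion with simple character counts, which ignores the differing widths of individual sorts. It merely illustrates how paired spellings from Cx1 and Cx2 could be sorted into the three categories used above.

```python
# A minimal, illustrative sketch (not the study's tooling) for classifying a
# spelling change as addition, replacement, or reduction by word length.

def classify_change(cx1_spelling: str, cx2_spelling: str) -> str:
    """Classify a spelling change by the space the new form would occupy."""
    if len(cx2_spelling) > len(cx1_spelling):
        return "addition"      # new spelling takes up more room on the line
    if len(cx2_spelling) < len(cx1_spelling):
        return "reduction"     # new spelling takes up less room
    return "replacement"       # same length, so no help with justification

# Hypothetical aligned word pairs for illustration only.
pairs = [("had", "hadde"), ("kynge", "king"), ("sonne", "sunne")]
counts = {"addition": 0, "replacement": 0, "reduction": 0}
for old, new in pairs:
    if old != new:                       # only record actual changes
        counts[classify_change(old, new)] += 1
print(counts)
```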
The spelling changes are similarly distributed in Pynson's Reynard. Figure 7 shows the distribution of spelling changes in Pynson's Reynard. The graph shows clearly the gap between the two columns. Additionally, several pages of the sample that I examined had an extra thirty-ninth line in the rightmost column, and this can be seen here through the six spelling changes that occurred on that line throughout the text. As with the distribution in Cx2, there is no significant patterning to the spelling changes within Pynson's Reynard. Justification does not, then, seem to have had an impact on where on the page the spelling changes were made.
My research suggests that there was no difference between the way that these compositors changed spellings in the prose and verse sections of text. We find similar frequencies of spelling changes in justified and non-justified text, and the compositor is not more likely to use longer or shorter spelling variants in justified prose. If the compositor does not change more spellings when setting prose then it is likely that he is not introducing changes to justify his type.
How did Printers Justify their Texts?
If the compositors did not justify the texts used in this study through altering spellings, then how did they justify their texts? The results of my investigation suggest that Caxton's compositor justified his text through three main methods:
(1) Breaking words over lines
(2) Abbreviation
(3) Altering spaces between words
Breaking Words over Lines
In both Cx2 and Pynson's Reynard, some lines have not been fully justified. Instead, the last word in the line is hyphenated and completed on the next line, or in some cases the line breaks mid-word without hyphenation. We can see this in Figure 8, below. In the sample in Figure 8, most of the line-final words are incomplete. Many of the words are hyphenated at the end of the line, that is, the ends of lines 3, 4, 7 and 8. However, on line 1 penthecoste is split after pen, on line 6 smellynge is split after smel and on the final line kynge is split after kyn.
Breaking words over lines is one of the most frequent methods used by compositors to justify their texts. Within Caxton's prose, 12.8% of lines are not fully justified. The number is greater for Pynson's Reynard, in which 29.33% are not fully justified. It seems likely that the greater spatial pressure on Pynson's compositor has caused an increased use of line breaks to justify the text. On these hyphenated lines, abbreviations are particularly unlikely to occur: only on 4.65% of hyphenated lines in Pynson's Reynard is there an abbreviation. The use of only one of these methods on any one line suggests that the compositor chooses to use either abbreviation or word hyphenation as an active attempt to justify the text. 42
Abbreviation
The study showed that printers added abbreviations twice as often in Caxton's prose as in his verse. The rate of abbreviation in the prose is double that in the verse: 13.47 43 tokens per 1,000 words in justified text, compared with 6.39 per 1,000 words in text that has not been visually justified. The only difference between the verse and the prose is the requirement for justification. It follows that the difference in abbreviation rates is a result of the compositor using abbreviation to justify his type.
Supporting evidence is provided through analysis of Pynson's Reynard. Here the rate of abbreviation is far higher, where the columns are thin and the spacing is tight. Pynson uses 25.49 abbreviations per 1,000 words of text. The most common abbreviation was the replacement of and with an ampersand. However, other abbreviations, such as that > þ t and the > þ e are also commonly used in all texts. Figure 9 shows the graphical representation of the number of abbreviations per 1,000 words for each of the types of text I have examined. The lowest rate of abbreviation occurs in the verse, where justification is not required. This may be taken as the amount of abbreviation we would expect to be added to any printed text without the requirement for justification. The rate of abbreviation for Caxton's prose is twice that of Caxton's verse. This means that, although a large number of abbreviations may have already existed in Cx1, the compositor added more at a rate of 13.47 per 1,000 words. The highest rate of abbreviation occurs in Pynson's Reynard, where a greater number of abbreviations is required in order to justify the type. This is contrary to what I had expected: Blake tells us that printers preferred not to use abbreviations. 44 The distribution of abbreviations is also worth examining, as we can see in Figure 10, below. As already discussed, the locations of spelling changes on the page appear to be random; altered spellings are not more likely to appear at the ends of lines-a hypothesis put forward to support the use of spelling change as a method of justification. However, the distribution of abbreviations is not random. Abbreviations introduced to Cx2 by the compositor are more likely to occur at the ends of lines. Figure 10 shows the number of abbreviations introduced in each tenth of the width of the page. We can see that the greatest number of abbreviations occurs in the area closest to the right-hand margin-that is, the bar on the graph furthest to the right. We would expect to see this when printers realised that they were running out of space and had to use abbreviations to justify their type. We see the same distribution in Pynson's Reynard. In Figure 11, again we can see the gap between the two columns. We can also see that at the rightmost side of each column, abbreviations are far more likely to occur. Again, this finding supports the hypothesis that compositors did use justification methods when they realised that they were running out of space, and so abbreviations are more likely to occur near the end of the compositor's line. However, spelling changes on the whole were not more likely to occur near the ends of lines. So while the compositors of Cx2 and Pynson's Reynard used abbreviations to justify their type, they did not change spellings to this end. 42 Breaking words over lines was never a method utilised in the poetry. This is due to the line lengths; on no occasions were the lines long enough in the Nun's Priest's Tale or the Manciple's Tale to come close to the right margin. The lines never run onto the next line, and so it would not be possible to break the final word over two lines. 43 Frequencies have been shortened to two decimal places. 44 Blake, History of English, 205. However, see Norman Blake's earlier paper for completely the opposite point of view: "The most common [way to justify the text] was to use or alternatively to expand abbreviations" ("Manuscript to Print," 409).
Altering Spaces between Words
Altering spacing between words was the most frequent method by which Caxton and Pynson justified the lines of their texts. It is difficult to determine the distance between the left/rightmost edge of the letter itself and the edge of the type it sits on. This then makes it difficult to discern the amount of spacing between words. The design of the type could add more spacing between words depending upon the letters involved. That said, in every justified line that I examined, the size of the spacing between words differed, even on lines where abbreviations were used, as can be observed in Figure 5 (see above). For example, on line 1 we can see the difference in spacing between called and was, and between was and mellebeus; on line 2 we can see the difference in spacing between vpon and his, his and wyf and so forth. We do not see the same variability of spacing in the verse, demonstrated in Figure 12, below. In the poetry, one width of spacing appears to be used between words, and the line is justified by moving spacing to the right side of the type.
There is one clear reason that compositors chose to respace their lines in order to justify the text: speed. Speed is important-compositors had a set number of pages that they needed to set each day in order to get paid, 45 so to make a decent living they would need to be both fast and accurate. Altering the spacing between words is a quick and easy way to justify type because it involves few processes. The compositor sets out the type as demonstrated in the copy text until it becomes clear that the last word on his line will not fit. At this point he goes back over the line and adds more spacing between the words, or replaces spacing with two thinner spaces that together are slightly wider than the original-as we saw in Figure 3. This is the fastest way to justify type because there are few processes involved: if the type does not fit, the compositor adds more spaces. If this does not work, he uses a combination of spaces that are collectively the width required.
Abbreviating words involves more processes, and this may be why they were used as a justification method with relative infrequency compared with respacing. To abbreviate, the compositor must remember all the words that can be abbreviated, such as and > ampersand and so forth. Then the compositor has to look back over the line, examining each word (written left to right, but appearing to him upside down and each letter a mirror image). It takes time to read back what he has done, and to assess whether any of the words on his line can be replaced with a valid abbreviation. It takes more time still to select the word to be abbreviated, remove the letters on the line and replace them with the corresponding abbreviation. Even then the line may need respacing if it has not justified perfectly. This procedure is clearly not as quick as padding gaps out with extra spacing.
It is unlikely then that the compositors would have used respelling as an alternative to the processes that they already used. It would take more time and involve more processes to change spelling in order to justify type than it would for abbreviation, which I have already suggested was time consuming. 46 Changing spelling is not too dissimilar to abbreviation: instead of remembering which words can be abbreviated (a small and limited list), the printer instead has to analyse every word to determine which could be spelt differently, which variant spellings were generally accepted and which ones would add (or remove) the length he needed in order to justify the line. The key difference here is that abbreviation involves just a small number of words, but respelling could involve far more; a high proportion of words could be spelt variably during the fifteenth century. In line with this observation, I found that the compositors of the texts under examination, at least, did not alter spellings to justify their texts. When spellings were changed between editions, these were introduced by the compositor, either because they were his preferred spellings, or because they were representative of the natural variation in spelling at that time. 45 Gaskell, 132. 46 There is, however, the possibility that compositors would be able to guess accurately the space that words would take up on their composing stick. However, the number of words would vary slightly line to line, so it seems likely that this would be an inaccurate skill and lines would still require justification.
Concluding Remarks
This study has shown that compositors working for Caxton and Pynson were not changing spellings in order to justify their texts, even when justification was particularly difficult to achieve, as was the case in Pynson's Reynard. The frequencies of altered spellings in justified and non-justified text are not significantly different; had the compositors been changing spellings to justify their type we would expect higher frequencies of spellings changed in the justified text. Nor is there any statistical significance behind the distribution of spelling changes on the page. It seems most likely that printers did not utilise spelling as a method of justification because of the time it would take, relative to that of the other options at their disposal.
Instead, compositors justified their type by using a far greater number of abbreviations, altering the spacing between words and breaking whole words over two lines. Spelling changes were evenly distributed throughout the page in both prose and verse, and changes were not more likely to occur at the rightmost side of lines or towards the end of the page, as has been suggested. 47 However, in justified text we see both a higher frequency and a significant placement of abbreviations. It appears possible that printers used abbreviation to justify their type. The idea that the printer altered spellings to fit type to the page is a prevalent one in narratives of the history of English. This paper suggests that at least two compositors in the fifteenth century did not utilise this practice in their work. Instead, spelling change is introduced by the individual setting the type. 47 Salmon.
Funding
This work was supported by the AHRC [grant number AH/L503848/1] through the White Rose College of the Arts and Humanities doctoral training centre.
Disclosure statement
No potential conflict of interest was reported by the author.
PHARMACY DEDUCTIBLES CAN COMPLICATE THE RELATIONSHIP BETWEEN MEASURES OF PATIENT COST SHARING AND MEDICATION ADHERENCE
Following the publication of our article "Predictors of adherence to oral anticancer medications: An analysis of 2010-2018 US nationwide claims" in the August 2022 issue of JMCP, 1 we received a personal communication from a peer researcher about classification bias resulting from our method for calculating mean monthly out-of-pocket (OOP) cost and its relationship with medication adherence.
We determined the mean patient OOP cost for oral anticancer medications (OAMs) by summing copays and deductible payments for each prescription claim during a 6-month period. Average OOP costs per 30-day supply of an OAM were calculated by dividing the total OOP costs by the total days of medication supply during the 6-month follow-up period and then multiplying by 30. This approach was informed by other research published in JMCP. 2 For the multivariable modeling, we dichotomized OOP spending at the highest quartile.
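As an illustration of the calculation just described, the following sketch computes average OOP cost per 30-day supply and dichotomizes it at the highest quartile. The column names and figures are hypothetical and do not come from the study's claims database; the single patient shown also illustrates how a deductible concentrated in the first fill inflates the mean monthly OOP measure.

```python
import pandas as pd

# Hypothetical claims for one patient over a 6-month follow-up period.
claims = pd.DataFrame({
    "patient_id":  [1, 1, 1],
    "copay":       [50.0, 25.0, 25.0],
    "deductible":  [400.0, 0.0, 0.0],   # deductible paid mostly on the first fill
    "days_supply": [30, 30, 30],
})

per_patient = claims.groupby("patient_id").agg(
    total_copay=("copay", "sum"),
    total_deductible=("deductible", "sum"),
    total_days=("days_supply", "sum"),
)
per_patient["total_oop"] = per_patient["total_copay"] + per_patient["total_deductible"]

# Average OOP per 30-day supply: total OOP / total days of supply * 30.
per_patient["oop_per_30d"] = per_patient["total_oop"] / per_patient["total_days"] * 30

# Dichotomize at the highest quartile of spending across the cohort.
threshold = per_patient["oop_per_30d"].quantile(0.75)
per_patient["high_oop"] = per_patient["oop_per_30d"] >= threshold
print(per_patient[["total_oop", "oop_per_30d", "high_oop"]])
```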
The bias of concern relates to deductible payments being higher for initial dispensings and lower after the deductible was met. This dynamic is problematic when assessing the relationship between mean monthly OOP costs and medication adherence because some patients with fewer dispensings will have higher OOP costs due to the role of deductibles.
To examine this issue, we performed a sensitivity analysis for a subgroup of patients with blood cancers. This cohort had the highest percentage of patients with a pharmacy deductible among all cancer types assessed in our study: 24.8% (1620/6523). After excluding cases with a deductible, the adjusted odds ratio for the relationship between mean monthly OOP costs and nonadherence was 3.15 (95% CI = 2.62-3.80), compared with an adjusted odds ratio of 2.89 (95% CI = 2.48-3.37) for the original analysis. Both the results of the sensitivity analysis and the original analysis found that those in the highest quartile of OOP spending were roughly 3 times more likely to be nonadherent, controlling for covariates.
In our study, the bias resulting from declining deductible payments over time may have been limited by 3 factors. First, for the database that we analyzed, most patients with blood cancer (75.2%) did not have a pharmacy deductible, and 83% of the patients with a deductible were enrollees of Medicare Advantage, for which deductibles were modest. Only 4.2% were patients in a commercial plan with a higher deductible. Second, the high cost of oral anticancer medications may have overpowered the bias resulting from declining deductible amounts over time. Third, dichotomizing mean monthly OOP costs at the fourth quartile of spending may have attenuated the effect of deductible payments.
Although the sensitivity analysis reinforced our original findings, we nevertheless thought it important to update our limitations to note that the approach we used to calculate average monthly OOP costs should be applied with caution. Future research on this topic should be mindful of the role of deductible payments, which may be more influential in other studies, particularly given the increased use of pharmacy deductibles and benefit designs that eliminate cost sharing after the deductible is met. 3
DISCLOSURES
This letter pertains to our recent publication in JMCP, which describes a study that was jointly funded by the Pharmacy Quality Alliance and the National Pharmaceutical Council.
Harnessing the power of longitudinal medical imaging for eye disease prognosis using Transformer-based sequence modeling
Deep learning has enabled breakthroughs in automated diagnosis from medical imaging, with many successful applications in ophthalmology. However, standard medical image classification approaches only assess disease presence at the time of acquisition, neglecting the common clinical setting of longitudinal imaging. For slow, progressive eye diseases like age-related macular degeneration (AMD) and primary open-angle glaucoma (POAG), patients undergo repeated imaging over time to track disease progression and forecasting the future risk of developing disease is critical to properly plan treatment. Our proposed Longitudinal Transformer for Survival Analysis (LTSA) enables dynamic disease prognosis from longitudinal medical imaging, modeling the time to disease from sequences of fundus photography images captured over long, irregular time periods. Using longitudinal imaging data from the Age-Related Eye Disease Study (AREDS) and Ocular Hypertension Treatment Study (OHTS), LTSA significantly outperformed a single-image baseline in 19/20 head-to-head comparisons on late AMD prognosis and 18/20 comparisons on POAG prognosis. A temporal attention analysis also suggested that, while the most recent image is typically the most influential, prior imaging still provides additional prognostic value.
Introduction
Deep learning has enabled breakthroughs in automated disease diagnosis from medical imaging, with many successful applications in ophthalmology.[4][5][6] However, standard image classification techniques for disease diagnosis bear some major limitations: (i) they can only accommodate a single image of a patient, and (ii) they can only assess if the patient presents with the disease at the time of image acquisition. For slowly progressive eye diseases like late-stage age-related macular degeneration (late AMD) and primary open-angle glaucoma (POAG), it is common for patients to undergo repeated imaging over long periods of time to track disease progression.[9][10][11] In addition, for patients who do not present with the disease at the time of acquisition but may be at increased risk of developing it in the next few years, it is critical to identify this risk as early as possible to plan management. Further, patients in different subphenotypes might have varying eye disease progression speeds in earlier and later stages; long-term epidemiological studies have shown that many factors "dynamically" influence AMD or POAG progression. 12,13 For these reasons, we aim to develop a method for disease prognosis, forecasting future risk of developing disease based on longitudinal imaging. Prior work has used color fundus photography imaging for AMD [7][8][9][10] and POAG 14,15 prognosis, some incorporating prior imaging. For example, Peng et al. 7 adopted a two-stage approach where a convolutional neural network (CNN) was pretrained on AMD-related tasks to be used as a fundus image feature extractor. Next, these embeddings were used alongside patient demographics and genomic data in a Cox proportional hazards model for survival analysis modeling of "time to late AMD" from the last available image. Yan et al. 10 developed an end-to-end deep learning approach for late AMD progression by training a CNN to predict the risk of developing late AMD over k years, where k = 2, 3, . . ., 7, again based on individual fundus images. More recently, Lee et al. 8 and Ghahramani et al. 9 began to incorporate longitudinal fundus imaging for automated late AMD progression with a CNN feature extractor and separate long short-term memory (LSTM) module to model temporal patterns in the imaging. Lee et al. classified whether the eye would develop late AMD in fixed windows of 2 or 5 years from the last image, similar to Yan et al., while Ghahramani et al. employed a two-stage approach with survival modeling based on deep fundus image features, much like Peng et al. Similarly, for POAG prognosis, Li et al. 14 adopted a two-stage approach that extracts deep features from a baseline fundus image and performs survival modeling. Later, Lin et al. 15 used a siamese CNN architecture to model changes between a baseline and follow-up fundus image, roughly modeling progression by classifying whether the eye would develop POAG within 2 and 5 years from the last visit.
Overall, these prior efforts often formulate automated prognosis as a binary classification task, for example, predicting whether a patient will develop the disease within fixed durations from the last visit (e.g., 2-year or 5-year prognosis). As a result, these approaches are limited to the time horizon of choice. Converting a 5-year risk classifier to a 2-year risk classifier would potentially entail creating a new patient cohort and retraining the model from scratch on this new classification task. Additionally, this model can only provide a single scalar describing the probability of developing disease at any time within the specified time window, unable to produce a time-varying risk assessment. Finally, many of these methods are only capable of accommodating a single fundus image, failing to model temporal changes in the eye's presentation that may be crucial for assessing the rate of progression and, thus, proper prognosis. To avoid these pitfalls, we adopt a survival analysis approach to disease prognosis from longitudinal imaging data, aiming to model a time-to-event outcome (e.g., years until death or developing a disease) based on time-varying patient measurements. Such an approach is far more flexible and clinically valuable than prior efforts toward eye disease prognosis, as it incorporates longitudinal patient imaging and produces dynamic and long-term risk assessments. For example, this approach would be particularly informative for a patient who has already "survived" several years with no current signs of disease or an early stage of eye disease. Further, our method is end-to-end, meaning it directly accepts longitudinal fundus images and outputs time-varying probabilities describing the risk of developing the disease of interest.
In this work, we propose a Longitudinal Transformer for Survival Analysis (LTSA), a Transformer-based method for end-to-end survival analysis based on longitudinal imaging. Like words in a sentence, we represent the collection of longitudinal images over time as a sequence fit for modeling with Transformers. 16 However, unlike words in a sentence, consecutive images are not "equally spaced," potentially with months or years between visits. To account for this, a temporal positional encoder embeds the acquisition time (time elapsed since the first visit) of each image and fuses this information with the learned image embedding. A Transformer encoder then performs repeated "causal" masked self-attention operations, learning associations between the image from each visit and all prior imaging. The model is optimized to directly predict the discrete hazard distribution, a fundamental object of interest in survival analysis, from which we can construct eye-specific survival curves. Despite advances in deep learning for survival analysis, [17][18][19][20][21][22][23] existing methods either accommodate non-longitudinal imaging or non-imaging longitudinal (time series) data. LTSA is unique in its ability to perform end-to-end time-varying image representation learning and survival modeling from longitudinal imaging data using Transformer-based sequential modeling.
We validate LTSA on the prognosis of two eye diseases, late AMD and POAG. AMD is the leading cause of legal blindness in developed countries, 24,25 and the number of people with AMD worldwide is projected to reach 288 million by 2040. 26 The disease is broadly classified into early, intermediate, and late stages. While early and even intermediate AMD are typically asymptomatic, late AMD is often associated with central vision loss, occurring in two forms: geographic atrophy and neovascular ("wet") AMD (Fig. 1). 27 To improve management plans for patients, it is important to understand the individualized risk of AMD progression. Patients with low risk may adopt a management plan that will minimize costs and burden of care on the healthcare system. In contrast, high-risk patients may receive a more aggressive management plan earlier in the disease progression in order to maintain vision as long as possible. POAG, too, is one of the leading causes of blindness in the United States and worldwide, 28 as well as the leading cause among African Americans and Latinos. 29,30 The disease is projected to affect nearly 112 million people by 2040, over 5 million of whom may become bilaterally blind. 31 Similar to AMD, POAG is asymptomatic until it reaches an advanced stage when visual field loss occurs. However, early detection and treatment can avoid most blindness caused by POAG. 32 For both late AMD and POAG, accurately identifying high-risk patients as early as possible is critical to clinical decision-making, helping inform management, treatment planning, or patient monitoring.
To evaluate LTSA, we performed extensive experiments comparing our proposed method to a single-image baseline, which only uses the single last available image. This study leveraged two large, longitudinal imaging datasets: 49,452 images from 4,274 participants from the Age-Related Eye Disease Study (AREDS) for late AMD prognosis and 30,932 images from 1,597 participants from the Ocular Hypertension Treatment Study (OHTS) for POAG prognosis. As measured by the time-dependent concordance index, LTSA demonstrates consistently superior discrimination of disease risk on both late AMD and POAG prognosis. LTSA significantly outperforms the single-image baseline on 37 out of 40 head-to-head comparisons across a wide variety of prediction, or "landmark" times, and time horizons. Our results also strongly suggest the benefit of longitudinal image modeling for prognosis, where incorporating prior imaging enhances disease prognosis. Further, since LTSA leverages a temporal attention mechanism over the sequence of images, analysis of the learned attention weights uncovers which visits contribute most to prognosis. Indeed, the most recent visit is usually the single most important; however, we are able to characterize the relationship between time since the final examination and the relative influence of each exam on the predicted prognosis, which may have real-world, real-time implications for ophthalmologists making assessments of risk and prognosis.
Beyond the improved discriminative performance of our proposed method, this study offers a potential answer to the growing demand for dynamic and explainable prognoses for eye diseases. LTSA can enhance our understanding of temporal image patterns contributing to eye disease progression, serving to demystify "black-box" deep learning models. Such clarity could potentially promote greater utilization and trust of deep survival analysis models among ophthalmologists, bridging the gap between technical innovation and clinical practice.

Figure caption (panels a-c): In longitudinal medical imaging, patients undergo repeated imaging over long periods of time at irregular intervals (a). Rather than predict the presence of disease at the time of imaging, our method leverages a patient's longitudinal imaging history to forecast the future risk of developing disease through a survival analysis framework (b). Our approach represents the collection of fundus images for an eye over time as a sequence fit for modeling with Transformers. To accommodate large, irregular intervals between consecutive visits, a temporal positional encoder fuses this information with the image embeddings from each visit. A Transformer encoder then employs causal temporal attention over the sequence, only attending to prior visits. The entire model is optimized end-to-end to predict the time-varying hazard function for each unique sequence of consecutive visits. From the hazard function, we compute eye-specific survival curves, allowing for dynamic eye disease risk prognosis evaluated through the framework of longitudinal survival analysis (c).
Development and evaluation of LTSA
Our proposed LTSA is trained on sequences of consecutive fundus images to directly predict the time-varying hazard distribution, allowing for disease prognosis through a survival analysis framework (Fig. 2). Instead of standard positional encoding, a temporal positional encoder accounts for irregularly spaced imaging by embedding the time of each visit and fusing this information with the associated image embedding. Then, a Transformer encoder performs causal, temporal attention over the sequence of longitudinal images before a survival layer predicts the sequence-specific hazard function.
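For readers who prefer code, the following is a deliberately simplified sketch of an LTSA-style pipeline in PyTorch. It is not the authors' implementation; the backbone (ResNet-18), embedding size, and other design details are illustrative assumptions. It only shows how per-visit image embeddings, a temporal positional encoding, causal self-attention, and a discrete hazard head could fit together.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class LTSASketch(nn.Module):
    """Illustrative sketch of an LTSA-style model (not the authors' code)."""
    def __init__(self, d_model=256, n_steps=27, n_heads=4, n_layers=2):
        super().__init__()
        cnn = resnet18(weights=None)
        cnn.fc = nn.Linear(cnn.fc.in_features, d_model)
        self.image_encoder = cnn                       # per-visit fundus image embedding
        self.time_encoder = nn.Linear(1, d_model)      # embeds visit time (e.g., months)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.hazard_head = nn.Linear(d_model, n_steps)   # discrete hazards per visit

    def forward(self, images, visit_times, pad_mask):
        # images: (B, L, 3, H, W); visit_times: (B, L); pad_mask: (B, L), True = padding
        B, L = images.shape[:2]
        x = self.image_encoder(images.flatten(0, 1)).view(B, L, -1)
        x = x + self.time_encoder(visit_times.unsqueeze(-1))    # temporal positional encoding
        causal = torch.triu(torch.ones(L, L), diagonal=1).bool()  # block attention to future visits
        h = self.encoder(x, mask=causal, src_key_padding_mask=pad_mask)
        return torch.sigmoid(self.hazard_head(h))               # (B, L, n_steps) hazards

model = LTSASketch()
imgs = torch.randn(2, 5, 3, 224, 224)
times = torch.arange(5).float().repeat(2, 1)
pad = torch.zeros(2, 5, dtype=torch.bool)
hazards = model(imgs, times, pad)   # hazard distribution predicted after each visit
```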
LTSA was trained and evaluated using longitudinal fundus imaging and time-to-event data from the Age-Related Eye Disease Study (AREDS) and Ocular Hypertension Treatment Study (OHTS). The AREDS data consisted of 49,592 images from 4,274 unique patients and 7,818 unique eyes, and the OHTS data consisted of 30,932 images from 1,597 patients and 3,188 eyes (see "Methods" for further details). In the AREDS dataset, eyes underwent an average of 6.34 visits over the course of 6.47 years, with a minimum of 6 months between visits. Approximately 12.2% of eyes developed late AMD (87.8% censoring rate) with a mean time to event of 5.27 years. For OHTS, eyes were examined an average of 9.70 times over 9.20 years with a minimum of 1 year between visits. Approximately 11.9% of eyes developed glaucoma (88.1% censored) in an average of 7.22 years. For model development and evaluation, each dataset was then randomly partitioned into training (70%), validation (10%), and test (20%) sets at the patient level.
To account for censoring and time-varying inputs and outputs, we assess the prognostic ability of our models with a time-dependent concordance, denoted C(t, ∆t). This metric measures the ability to accurately rank pairs of eyes by risk (e.g., predicting higher risk for eyes that will develop disease sooner) for a given prediction time t (time of last visit) and evaluation time ∆t (time horizon into the future over which we assess risk). We compare the performance of LTSA with a single-image baseline that only uses the most recent available image. Models are evaluated by C(t, ∆t) across 20 combinations of prediction times t ∈ {1, 2, 3, 5, 8} and evaluation times ∆t ∈ {1, 2, 5, 8}, where time is measured in years. Finally, evaluation is performed at the eye, not patient, level since each eye can have its own unique disease status.
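The exact estimator used in the paper is not reproduced here, but a simplified sketch of a time-dependent concordance C(t, ∆t) consistent with the description above might look as follows. It checks, among eyes that develop disease within the evaluation window, whether the model assigns them higher risk than eyes known to survive longer; refinements such as inverse-probability-of-censoring weighting are ignored.

```python
import numpy as np

def td_concordance(risk, event_time, observed, t, dt):
    """Simplified C(t, dt): compare each eye developing disease in (t, t + dt]
    against every eye with a longer observed time; `risk` is the model's risk
    score at prediction time t (higher = worse prognosis)."""
    risk, event_time, observed = map(np.asarray, (risk, event_time, observed))
    concordant, comparable = 0, 0
    for i in range(len(risk)):
        if observed[i] and t < event_time[i] <= t + dt:   # case in the window
            for j in range(len(risk)):
                if event_time[j] > event_time[i]:          # eye surviving longer
                    comparable += 1
                    concordant += int(risk[i] > risk[j])
    return concordant / comparable if comparable else np.nan

# Toy example: three eyes evaluated at t = 2 years with a 5-year horizon.
print(td_concordance(risk=[0.9, 0.4, 0.2],
                     event_time=[4.0, 8.0, 10.0],
                     observed=[1, 1, 0],
                     t=2, dt=5))
```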
Validation of LTSA on POAG risk prognosis
Fig. 4 shows that LTSA significantly outperformed the baseline on POAG prognosis for 18 out of 20 combinations of t and ∆t, as measured by the time-dependent concordance index. While the baseline reached an overall mean time-dependent concordance index of 0.866 (95% CI: [0.795, 0.925]), LTSA reached 0.911 (95% CI: [0.869, 0.948]). Full numerical results for POAG prognosis can be found in Supplementary Table 2. Once again, the performance gap between LTSA and the single-image baseline widened as more prior images became available; as prediction time t increased, the number of significant improvements (and the magnitude of these improvements) of LTSA over the baseline was nondecreasing. LTSA also demonstrated an advantage in long-term POAG prognosis, even with early prediction times. For example, LTSA significantly outperformed the baseline for 5- and 8-year prognosis across all prediction times (P ≤ 0.0001 for all 10 comparisons). While LTSA provided a small but significant boost over the baseline for 5-year prognosis from year 1 - C(1, 5) of 0.852 (95% CI: [0.785, 0.910]) for the baseline vs. 0.861 (95% CI: [0.800, 0.920], P ≤ 0.001) for LTSA - this gap only widened with increasing prediction time: C(3, 5) was 0.801 (95% CI: [0.732, 0.865]) for the baseline vs. 0.885 (95% CI: [0.846, 0.920]) for LTSA, and C(8, 5) was 0.899 (95% CI: [0.863, 0.928]) for the baseline vs. 0.950 (95% CI: [0.936, 0.962]) for LTSA. The same pattern could be observed for an 8-year POAG risk prognosis across the range of longitudinal prediction times. Auxiliary POAG prognosis results by time-dependent Brier score can be found in Supplementary Figure 2.
Effect of longitudinal modeling on prognosis
Based on the predicted time-varying hazard probabilities, LTSA can be used to dynamically construct eye-specific survival curves beginning from any time of interest. Fig. 5a depicts survival curves for two unique eyes in the AREDS test set, comparing the predicted survival trajectories of the single-image baseline (dashed line) to those of LTSA (solid line). Eye #1 (blue) and eye #2 (orange) both last underwent imaging 4 years from enrollment, though eye #1 developed late AMD 2 years later and eye #2 developed late AMD 6 years later. For both eyes, LTSA correctly predicts lower survival (higher risk) than the baseline, consistent with the fact that both eyes would go on to develop the disease. Additionally, LTSA correctly ranks the eyes with respect to risk, while the single-image baseline does not - that is, LTSA assigning lower survival probabilities to eye #1 than eye #2 is consistent with the fact that eye #1 will go on to develop late AMD 4 years sooner than eye #2. Similarly, Fig. 5b depicts an analogous pair of eyes from the OHTS test set, with predicted survival curves from LTSA and the single-image baseline. Eye #1 (blue) was last observed during year 9, developing POAG 4 years later, while eye #2 (orange) was last examined during year 4, developing POAG 6 years later. Here, the same pattern can be observed, where LTSA delivers a more accurate risk assessment than the baseline for both eyes. Notably, LTSA also properly ranks the two eyes according to glaucoma risk, a feat that the baseline does not achieve. Conditional survival plots for these cases can be found in Supplementary Figure 3.
Temporal attention analysis
Since LTSA leverages a causal attention mechanism to process longitudinal imaging, the learned attention weights can reveal which visits are most influential to the model's disease prognosis. In support of common clinical practice and knowledge, temporal attention analysis suggests that, in the aggregate, more recent imaging is more important for late AMD risk prognosis (Fig. 6a). Across all unique eyes in the AREDS test set, the last available image was given the highest attention score in nearly 96% of cases. However, we find that LTSA still attends to prior imaging in a monotonically time-decaying fashion; the median normalized attention score - relative to the maximum attention score in each sequence - was 1 (most important) for the last visit, 0.864 (86.4% as important) for the second-to-last visit, 0.812 (81.2% as important) for the third-to-last visit, etc., with a strong negative linear association (r = −0.912). Alongside the quantitative results demonstrating the benefit of longitudinal modeling, this attention analysis suggests that, while the most recent imaging is often the most important, prior imaging can still provide additional prognostic value.

Figure 6 caption (panels b and c): Attention scores and AMD severity scores for a sequence of visits from a healthy eye with typical attention patterns; the more recent visits are more influential than the earlier visits (b). Attention scores and AMD severity scores for a sequence of visits from an eye that developed late AMD with atypical attention scores; here, the second-to-last image received the greatest attention weight, corresponding with a jump in AMD severity from 5 to 8 (c). Attention scores are normalized such that the maximum score in each eye's sequence of images is 1 to account for variable sequence length. Images from 10 or more visits before the final visit are binned together to aid visualization. AMD = age-related macular degeneration.
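A small sketch of the normalization and binning described in the caption above; the attention values below are made up for illustration and the helper is not the authors' code.

```python
import numpy as np

def normalize_attention(scores):
    """Scale one eye's per-visit attention scores so the maximum is 1,
    accounting for variable sequence length (a sketch, not the paper's code)."""
    scores = np.asarray(scores, dtype=float)
    return scores / scores.max()

# Hypothetical attention scores over four visits, ordered oldest to newest.
raw = [0.05, 0.10, 0.15, 0.55]
norm = normalize_attention(raw)

# Index visits relative to the final visit: 0 = last, 1 = second-to-last, ...
visits_before_last = np.arange(len(norm))[::-1]
# Bin anything 10 or more visits before the final visit together, as in the figure.
bins = np.minimum(visits_before_last, 10)
for b, score in zip(bins.tolist(), norm.tolist()):
    print(f"{b} visit(s) before last: normalized attention = {score:.2f}")
```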
While, on average, more recent imaging is more pertinent for risk prognosis, analysis of the learned attention weights for individual sequences of eyes can illuminate abnormal cases worth further study. Fig. 6b shows the raw attention weights and ground-truth, ophthalmologist-determined AMD severity scores for a typical, healthy eye adhering to the overall pattern of attention scores - the more recent the imaging, the higher the attention weight. However, Fig. 6c shows an atypical case, where the eye went on to develop late AMD and, more importantly, the second-to-last visit received the greatest attention weight. For this eye, the second-to-last image was most influential to LTSA's prognosis, consistent with a jump in AMD severity from 5 to 8, potentially suggesting rapid progression of AMD.
Discussion
In this work, we presented a novel method for survival analysis from longitudinal imaging, LTSA, and validated our approach on two eye disease prognosis tasks. Both quantitative and qualitative analysis demonstrated clear superiority of LTSA over a single-image baseline - representing standard clinical practice - for both late AMD and glaucoma prognosis. LTSA outperformed the baseline on 19 out of 20 head-to-head comparisons for late AMD prognosis and 18 out of 20 comparisons for POAG prognosis when evaluated by the time-dependent concordance index. Given the longitudinal survival analysis setting with time-varying inputs and outputs, evaluation was performed across a wide range of prediction times t and evaluation times ∆t. LTSA particularly shined over the single-image baseline for t ≥ 2, at which point multiple longitudinal images are often available for a given eye; in such cases, LTSA's causal attention mechanism allows for rich representation learning of changes apparent in the longitudinal image measurements of an eye over time. Qualitative analysis of predicted survival trajectories also showed that LTSA can produce more accurate risk assessments than the baseline for a given eye and accurately rank eyes with respect to risk when the single-image baseline cannot.
Our results suggest that longitudinal modeling can improve eye disease risk prognosis, providing evidence that prior imaging can provide added prognostic value. While longitudinal analysis is known to be clinically valuable, particularly for glaucoma prognosis, it can be time-consuming to perform such comparative image-based analysis in clinical practice. A temporal attention analysis of the learned attention weights by LTSA revealed that the last visit is almost always (∼96% of the time) the most influential for prognosis. However, in the aggregate, we find that LTSA consistently attends to prior imaging in order to make risk predictions, with more distant imaging becoming less important in a linearly decreasing manner. The unique sequence-based representation of longitudinal medical imaging and repeated temporal attention operations of LTSA enable us to study and uncover the importance of prior imaging in the context of eye disease prognosis. Meanwhile, existing medical image classification techniques are neither able to process sequences of images over time nor quantify their relative contribution to the predicted outcome.
Several deep learning methods have been developed for survival analysis from medical imaging.[18][19][20][21] However, these methods all operate on single images, neglecting the common clinical scenario of longitudinal imaging, where patients undergo repeated imaging measurements over time in order to monitor changes in disease status. In recent years, several deep learning methods have also been proposed for modeling time-to-event outcomes from longitudinal data.[34][35] Unlike the select few related methods capable of survival analysis from high-dimensional longitudinal medical imaging data, 36,37 LTSA critically (i) is an end-to-end method (i.e., no multi-stage training), (ii) uses a Transformer encoder with causal attention to process sequences of medical images, and (iii) leverages an auxiliary "step-ahead" prediction task, whereby the model is tasked to predict the image-derived features from the next visit (see "Methods" for full description).
This study has certain limitations. First, LTSA was developed for discrete-time survival modeling, when certain applications may call for more fine-grained continuous-time survival estimates. While discretizing time can often serve as a simplifying assumption, we adopted a discrete-time model because the AREDS and OHTS datasets followed up with patients at discrete 6-month (AREDS) or 1-year (OHTS) minimum intervals. Second, the validation of LTSA was limited to the prognosis of two eye diseases. However, the method can be readily applied to any disease prognosis task for which one has longitudinal imaging and time-to-event data. For example, LTSA could be used to predict Alzheimer's conversion from longitudinal neuroimaging 38,39 or survival of cancer patients based on various medical imaging modalities. 40,41 Third, while late AMD and POAG progression is unique to each eye, future work may incorporate binocular imaging to predict patient-level risk of disease progression. Also, incorporating tabular demographic and risk factor information may further improve prognostic performance. While this study used a fixed set of hyperparameters to enable fair comparison across methods, further hyperparameter optimization could be employed, particularly to analyze the impact of the regularization term β on downstream prognosis performance. Finally, this study represents a retrospective analysis of clinical trial data that, even with broad eligibility criteria, may not generalize well to real-world populations. To bridge the gap toward clinical translation, real-world clinical assessment is needed to determine whether patients can practically benefit from these predictions, and if the approach is clinically safe and efficient. This will also critically allow us to refine benchtop-derived artificial intelligence according to bedside clinical demands.
Study cohorts
In this study, we included two independent datasets (Table 1). These datasets are derived from large, population-based studies, and the research adhered to the principles outlined in the Declaration of Helsinki.
In addition, all participants provided informed consent upon entry into the original studies.The study protocol was approved by the Institutional Review Board (IRB) at Weill Cornell Medicine.
The Age-Related Eye Disease Study (AREDS) for late AMD prognosis. AREDS was a clinical trial conducted from 1992-2001 across 11 retinal specialty clinics throughout the U.S. to study the risk factors for AMD and cataracts and the effect of certain dietary supplements on AMD progression. 42 The study followed 4,757 participants aged 55-80 at the time of enrollment for a median of 6.5 years; the inclusion criteria were broad, ranging from no AMD in either eye to late AMD in one eye. Certified technicians captured color fundus photography images at baseline and in 6-month-to-1-year follow-up periods using a standardized imaging protocol; however, adherence to this protocol was imperfect, meaning visits could occur at any year or half-year mark after enrollment. AMD severity grades from each visit were then determined by human expert graders at the University of Wisconsin Fundus Photograph Reading Center.
While AREDS involved the collection of many different types of patient information, the data used in the present study included 66,060 fundus images, the time (with 6-month temporal resolution) that each image was acquired, and the ophthalmologist-determined AMD severity score using the 9-step severity scale. 43 Late AMD was defined as the presence of one or more neovascular AMD abnormalities or atrophic AMD with geographic atrophy; otherwise, the eye was deemed to be censored, since the true late AMD status could not be known. All images acquired during the final visit for a given eye were removed, as this visit was solely used to determine the time-to-event outcome. Removing the final visit from the study cohort also ensured that no images were presented with late AMD at the time of acquisition, forcing the model to truly forecast the future risk of developing disease. The remaining images from the remaining 4,274 patients were then randomly split into training (70%), validation (10%), and test (20%) sets at the patient level to prevent any potential data leakage. All eligible images were included, regardless of image quality, to maximize the size of the training set and to ensure robustness to variations in image quality at test time.
The Ocular Hypertension Treatment Study (OHTS) to predict the onset of POAG. OHTS was one of the largest longitudinal clinical trials for POAG, spanning 22 centers across 16 U.S. states. 44 The study followed 1,636 participants aged 40-80 with other inclusion criteria, such as intraocular pressure between 24-32 mmHg in one eye and 21-32 mmHg in the other eye. Color fundus photography images were captured annually, and POAG status was determined at the Optic Disc Reading Center. Much like the AREDS dataset, though visits were scheduled annually, adherence to this protocol was not exact, meaning visits could occur at any year mark after patient enrollment. In brief, two masked, certified readers were tasked to independently detect optic disc deterioration. If the two readers disagreed, a third senior reader reviewed it in a masked fashion. In a quality control sample of 86 eyes, POAG diagnosis showed a test-retest agreement of κ = 0.70 (95% CI: [0.55, 0.85]); more details of the reading center workflow have been described in Gordon et al. 45 For the present study, 37,399 fundus images, their acquisition times (with 1-year temporal resolution), and POAG diagnoses were used from OHTS. As outlined above for AREDS, we also removed all images from the final visit for each eye in the OHTS data. Additionally, in rare cases where there were multiple images acquired during a visit, we only kept the first listed image in our final OHTS cohort for simplicity. The 30,932 images from 1,597 patients were then randomly partitioned into training (70%), validation (10%), and test (20%) sets at the patient level. Like the AREDS data, no images were removed for image quality reasons to encourage robustness to variations in quality.
Longitudinal survival analysis
We approach disease prognosis through the lens of survival analysis, which aims to model a "time-to-event" outcome from potentially time-varying input features. We adopted a discrete-time survival model, given that imaging measurements were either acquired at intervals as short as 6 months (AREDS) or 1 year (OHTS), and we assumed uninformative right-censoring. The collection of longitudinal images for eye i can be written

$$X_i(t) = \{\, x_i(t_{i,j}) : 0 \le t_{i,j} \le t,\; j = 1, \dots, J_i \,\},$$

where J_i is the number of longitudinal images acquired for eye i, t_{i,j} is the time (in years) of the j-th image measurement for eye i, and x_i(t_{i,j}) ∈ R^{H×W} is the fundus image (of height H and width W) acquired at time t_{i,j}. Similar to the formulation of DynamicDeepHit,33 we distinguish between discrete time steps j and actual elapsed times t, since images are acquired at irregular intervals and the number of images per eye, J_i, is variable. In other words, X_i(t) represents the collection of longitudinal images of eye i acquired up until time t; for shorthand, we use X_i to denote the full available sequence of longitudinal images for eye i (i.e., X_i = X_i(t_{J_i})). For each X_i, we also have a time to event τ_i, which is either the time at which the event occurred (e.g., the eye developed late AMD, denoted c_i = 0) or the censoring time (e.g., the patient was lost to follow-up or the study ended, denoted c_i = 1).
The goal of deep survival analysis in longitudinal imaging is to approximate a function that links the time to event to our time-varying image measurements. A typical way to reason about the time to event is through the hazard function

$$h_i(j \mid X_i) = P\big(\tau_i = j \mid \tau_i \ge j,\, X_i\big),$$

the conditional probability that eye i develops the disease at a discrete time step j, based on longitudinal measurements X_i, given that the true event time step is greater than or equal to j. From the hazard function, we can readily compute the survival function

$$S_i(j \mid X_i) = P\big(\tau_i > j \mid X_i\big) = \prod_{s=1}^{j} \big(1 - h_i(s \mid X_i)\big),$$

the probability that the eye i does not develop the disease ("survives") past the time step j.
Specifically, in this study, we train a neural network f(·) to directly map from a longitudinal imaging sequence to the discrete hazard distribution

$$f(X_i) = \big\{\hat{h}_i(s \mid X_i)\big\}_{s=1}^{J_{\max}},$$

where J_max is the total number of discrete time steps, typically chosen based on properties of the dataset, time-to-event task, and computational constraints. While hazards are computed for all time steps, including those that have already occurred before the time step t, our primary interest lies in the future hazards (i.e., s = J_i + 1, ..., J_max). As explained below, we mask out prior time steps to properly optimize and evaluate models. For models trained on AREDS data, captured in 6-month intervals with a maximum observed follow-up time of 13 years, we have set J_max = 27. For models trained on OHTS data, acquired in 1-year intervals with a maximum follow-up of 14 years, we have set J_max = 15.
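To make the relationship between the predicted discrete hazards and the resulting survival curve concrete, the following is a minimal PyTorch sketch (not taken from the original implementation; names and values are illustrative) of converting a hazard vector into survival probabilities via the product formula above.

```python
import torch

def survival_from_hazard(hazard: torch.Tensor) -> torch.Tensor:
    """Convert discrete-time hazards into survival probabilities.

    hazard: tensor of shape (batch, J_max), where hazard[:, j] is the predicted
        probability of developing disease at step j, conditional on having
        survived up to step j.
    Returns a tensor of the same shape with S(j) = prod_{s <= j} (1 - h(s)).
    """
    return torch.cumprod(1.0 - hazard, dim=-1)

# Example: a rising hazard produces a monotonically decreasing survival curve.
h = torch.tensor([[0.01, 0.02, 0.05, 0.10, 0.20]])
print(survival_from_hazard(h))  # tensor([[0.9900, 0.9702, 0.9217, 0.8295, 0.6636]])
```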
LTSA model
Input representation. The input to LTSA consists of a collection of longitudinal images X_i = {x_i(t_{i,j}), j = 1, ..., J_i} and their "visit times" {v_{i,j}, j = 1, ..., J_i}, denoting the time (in months since study enrollment) that image j of eye i was acquired. To handle variable-length sequences, we right-pad the sequence with zeros to the maximum observed sequence length l in the dataset (l = 14 for both AREDS and OHTS) when necessary, producing a padded sequence X*_i ∈ R^{l×3×H×W}. Critically, these padded inputs will be masked during modeling, optimization, and evaluation as described in the following sections. Much like how sentences are represented as sequences of words in deep natural language processing (NLP),16 we represent the longitudinal imaging of an eye as a time-varying sequence fit for modeling with Transformers. However, unlike words in a sentence, the longitudinal images in each sequence are not "equally spaced": potentially years may have passed between consecutive visits.
Temporal positional encoder. Transformers typically use positional encoding (PE)16 to inform the model as to the order of elements in an input sequence. This can be accomplished using a fixed sinusoidal PE that maps the position of an element in a sequence to a higher-dimensional representation fit for deep neural network modeling:

$$\mathrm{PE}(k)_{2i} = \sin\!\left(\frac{k}{10000^{2i/d}}\right), \qquad \mathrm{PE}(k)_{2i+1} = \cos\!\left(\frac{k}{10000^{2i/d}}\right)$$

for i = 0, ..., d/2, where k ∈ Z_{≥0} represents the position of a given element in the input sequence. To account for long, irregular time periods between consecutive longitudinal images, we adapt traditional PE to directly embed the visit time v (measured in months) via

$$\mathrm{PE}(v)_{2i} = \sin\!\left(\frac{v}{10000^{2i/d}}\right), \qquad \mathrm{PE}(v)_{2i+1} = \cos\!\left(\frac{v}{10000^{2i/d}}\right)$$

for i = 0, ..., d/2. After computing the time step encoding for the entire sequence, this produces a temporal positional embedding e_time ∈ R^{l×d}. This approach is similar to continuous positional encoding in Sriram et al.,46 except that we use the absolute visit time rather than visit time relative to the final visit.
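As an illustration, here is a minimal PyTorch sketch of a temporal positional encoder of this kind; it follows the standard sinusoidal formulation but indexes the encoding by visit time in months rather than by sequence position. The module name and tensor shapes are our own illustrative choices, not the authors' released code.

```python
import math
import torch
import torch.nn as nn

class TemporalPositionalEncoder(nn.Module):
    """Sinusoidal encoding of (possibly irregular) visit times in months."""

    def __init__(self, d_model: int = 512):
        super().__init__()
        self.d_model = d_model
        # Precompute the frequency term 1 / 10000^(2i/d) for i = 0 .. d/2 - 1.
        div_term = torch.exp(
            torch.arange(0, d_model, 2, dtype=torch.float32)
            * (-math.log(10000.0) / d_model)
        )
        self.register_buffer("div_term", div_term)

    def forward(self, visit_times: torch.Tensor) -> torch.Tensor:
        # visit_times: (batch, seq_len), months since study enrollment.
        angles = visit_times.unsqueeze(-1) * self.div_term        # (batch, seq_len, d/2)
        e_time = torch.zeros(*visit_times.shape, self.d_model, device=visit_times.device)
        e_time[..., 0::2] = torch.sin(angles)
        e_time[..., 1::2] = torch.cos(angles)
        return e_time                                             # (batch, seq_len, d_model)

# Usage: visits at enrollment, 6 months, and 30 months for one eye.
enc = TemporalPositionalEncoder(d_model=512)
print(enc(torch.tensor([[0.0, 6.0, 30.0]])).shape)  # torch.Size([1, 3, 512])
```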
Image encoder. While the timestep encoder produces our time step embeddings e_time, an image encoder is separately used to learn visit-level image embeddings. To do so, the padded sequence of images X*_i is flattened along the batch dimension and fed into a 2D image encoder f_img(·) to produce image embeddings

$$e_{\mathrm{img}} = f_{\mathrm{img}}(X_i^{*}) \in \mathbb{R}^{l \times d},$$

where d is the dimensionality of the image embedding. In LTSA, f_img(·) is parameterized by a ResNet18 convolutional neural network,47 which maps each image to a d = 512-dimensional feature vector. However, in principle, f_img(·) can be parameterized with any 2D image encoder.
Transformer modeling. Following the practice of many Transformer networks,16,48 we inject knowledge of visit time via elementwise addition of the time step embeddings and image embeddings: e = e_time + e_img. Our time-infused embeddings e ∈ R^{l×d} are then fed into a Transformer encoder that employs repeated self-attention operations to learn temporal associations across the sequence of longitudinal images for each eye. Unlike a typical Transformer encoder for NLP, where the model may learn associations between all words in a passage of text, a clinician can only rely on current and prior imaging to form a diagnostic decision.
For this reason, we adopt "decoder-style" causal attention masking, where a diagonal mask is applied to the attention weight matrix, enforcing that the model only attends to current and prior visits in adherence to clinical reality. Additionally, a padding mask is applied, where features resulting from the zero-padded inputs do not contribute to the attention computations. The output of this Transformer T(·) is then

$$\tilde{e} = T(e) \in \mathbb{R}^{l \times d}.$$

Survival prediction. After Transformer modeling, our embedding features ẽ are then used to directly predict the discrete-time hazard function. To achieve this, we use a simple fully-connected layer with J_max output neurons, followed by a sigmoid activation:

$$\hat{h}(\tilde{e}) = \sigma\big(W\,\mathrm{Dropout}(\tilde{e}) + b\big) \in [0, 1]^{l \times J_{\max}},$$

where σ(·) is the sigmoid function and Dropout(·) is the regularization technique that randomly zeroes out a specified fraction of its inputs.49 Since the fully-connected layer is applied in parallel to all l elements of the sequence, the final output ĥ(ẽ) represents the discrete hazard distributions for all l subsequences of consecutive visits in the original sequence. That is, we obtain a full survival prediction based on the longitudinal history of every visit. However, we are often only interested in the prediction based on the full longitudinal history for eye i, ĥ(ẽ)_{J_i} ∈ R^{J_max}.
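The following simplified, self-contained PyTorch sketch shows how the pieces described above (image encoder, temporal positional encoding, causally masked Transformer encoder, and survival head) could fit together; layer sizes follow the text, the TemporalPositionalEncoder is the one sketched earlier, and all class and variable names are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class LTSASketch(nn.Module):
    def __init__(self, d_model: int = 512, j_max: int = 27, n_layers: int = 4, n_heads: int = 8):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()                      # 512-d feature per image
        self.image_encoder = backbone
        self.temporal_encoder = TemporalPositionalEncoder(d_model)  # from the sketch above
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dropout=0.25,
                                           activation="relu", batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.survival_head = nn.Sequential(nn.Dropout(0.25), nn.Linear(d_model, j_max), nn.Sigmoid())

    def forward(self, images, visit_times, padding_mask):
        # images: (B, L, 3, H, W); visit_times: (B, L) in months; padding_mask: (B, L), True = padded.
        B, L = images.shape[:2]
        e_img = self.image_encoder(images.flatten(0, 1)).view(B, L, -1)
        e = e_img + self.temporal_encoder(visit_times)
        # Causal mask: True entries above the diagonal are disallowed (no attending to future visits).
        causal = torch.triu(torch.ones(L, L, dtype=torch.bool, device=images.device), diagonal=1)
        e_tilde = self.transformer(e, mask=causal, src_key_padding_mask=padding_mask)
        return self.survival_head(e_tilde)               # (B, L, J_max) hazards per visit subsequence
```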
Step-ahead feature prediction. In addition to the primary task of predicting the hazard function, we also leverage an auxiliary prediction task, whereby we use the features from each subsequence of consecutive visits to directly predict the learned image embedding from the next visit. Alongside survival modeling, this encourages the model to learn features from longitudinal imaging measurements that are predictive of future imaging. This approach has been shown to improve discriminative performance in related methods such as DynamicDeepHit33 and TransformerJM.34
Since a future visit can occur any time after the most recent visit, this becomes a time-varying prediction problem. To inform the model as to the time period over which to predict future imaging features, we adopt a version of the temporal positional encoding explained above. Rather than embedding the visit time v, we embed the relative time elapsed between the current and subsequent visit,

$$r_j = v_{i,j+1} - v_{i,j}.$$

This enables the model to flexibly control feature prediction in a time-dependent manner. Since the final discrete difference r_l does not exist, we set it to 0 and mask it out as explained below.
Formally, the predicted features are computed by a step-ahead prediction layer applied to the Transformer features together with the relative-time encoding. Similar to the survival prediction outlined above, these "step-ahead" predictions are computed for every subsequence of consecutive visits in the original sequence. However, since a future image only exists for the first J_i − 1 subsequences, we mask out all other step-ahead predictions.
Loss functions. Models were trained to predict the discrete-time hazard distribution by optimizing a cross-entropy survival loss from Chen et al.,20 which combines a main cross-entropy-based survival term with a regularization term, weighted by β, that provides additional weight to uncensored cases. We use β = 0.15 following the default value in the implementation of Chen et al.4 The model was additionally trained to predict the image features corresponding to the next visit in a longitudinal sequence by minimizing the mean squared error L_pred between the predicted step-ahead features x̂(ẽ) and the corresponding image embeddings e_img during the same forward pass. As explained above, this loss is only computed for the first J_i − 1 valid subsequences, for which there exists a subsequent longitudinal image of eye i.
Finally, LTSA was trained by optimizing the sum of these two loss terms,

$$\mathcal{L} = \mathcal{L}_{\mathrm{surv}} + \mathcal{L}_{\mathrm{pred}}.$$
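To illustrate what a censoring-aware discrete-time survival objective of this general kind can look like, here is a hedged PyTorch sketch of a negative log-likelihood over predicted hazards; the exact terms and β-weighting in the paper follow Chen et al., so this should be read only as a generic sketch with illustrative names, not the authors' loss.

```python
import torch

def discrete_survival_nll(hazard, event_step, censored, eps=1e-7):
    """Generic censoring-aware NLL for discrete-time hazards.

    hazard:     (B, J_max) predicted hazards in [0, 1].
    event_step: (B,) long tensor, index of the event or censoring time step.
    censored:   (B,) float tensor, 1 if censored, 0 if the event was observed.
    """
    survival = torch.cumprod(1.0 - hazard, dim=-1)               # S(j) for every step
    s_at = survival.gather(1, event_step.unsqueeze(1)).squeeze(1)
    h_at = hazard.gather(1, event_step.unsqueeze(1)).squeeze(1)
    s_before = torch.where(
        event_step > 0,
        survival.gather(1, (event_step - 1).clamp(min=0).unsqueeze(1)).squeeze(1),
        torch.ones_like(s_at),
    )
    # Censored eyes: reward surviving at least to their last observed step.
    # Uncensored eyes: reward surviving up to the step before, then experiencing the event.
    nll = -(censored * torch.log(s_at + eps)
            + (1.0 - censored) * (torch.log(s_before + eps) + torch.log(h_at + eps)))
    return nll.mean()
```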
Single-image baseline
The problem formulation for our single-image baseline, which only uses the last available image for modeling, is obtained by simply modifying the input: X_i(t) is no longer a collection of images, but rather the single most recent available image for eye i up until time t. Here, the shorthand X_i would simply refer to the last image of eye i.
The baseline model consisted of an image encoder f_img(·), also parameterized by a ResNet18, trained and evaluated on the last available image for each eye. The model utilized the same survival output layer and was trained with the survival loss L_surv only. In other words, this baseline lacked all longitudinal modeling components: visit times, sequence representation, Transformer modeling, and step-ahead prediction.
Model evaluation
To evaluate the prognostic ability of our models, we use a time-dependent concordance index C(t, ∆t) for a given prediction time t (when the prediction is made) and evaluation time ∆t (the period into the future over which we are assessing risk). This measures the proportion of "concordant pairs" of eyes, where the model predicts higher risk over the time window (t, t + ∆t] for the eye that develops the disease earlier (or lower risk for the eye that develops the disease later). Here, R(t + ∆t | X_i(t)) is a risk score representing the predicted probability of experiencing the event within (t, t + ∆t] based on longitudinal measurements of eye i up until time t. Specifically, this risk score is calculated from the predicted survival probabilities restricted to the evaluation window; since we are only interested in risk assessment from the prediction time over the specified evaluation time, this is equivalent to "masking" out risk predictions from irrelevant time steps. Compared to the original concordance index,50 a standard measure of discriminative ability in survival analysis, this metric allows for dynamic, time-varying risk predictions over arbitrary time horizons of interest. This metric is very similar to the time-dependent concordance index used in Lee et al.,33 except that we assess our model based on the predicted hazards (rather than "hitting times"). We use this metric for a single-risk outcome.
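For readers who want a concrete reference point, the following is a simple, unweighted O(n²) sketch of a time-dependent concordance computation over a fixed window; it is an assumption-laden simplification of the metric described above, not the authors' exact implementation, and the risk vector it takes as input can be derived from the survival curve in whichever way the evaluation protocol specifies.

```python
import numpy as np

def time_dependent_concordance(risk, event_time, censored, t, dt):
    """Fraction of correctly ranked comparable pairs for the window (t, t + dt].

    risk:       (n,) predicted risk of the event within (t, t + dt].
    event_time: (n,) observed event or censoring times.
    censored:   (n,) 1 if censored, 0 if the event was observed.
    """
    concordant, comparable = 0.0, 0
    for i in range(len(risk)):
        # Eye i must actually develop the disease within the window to anchor a pair.
        if censored[i] or not (t < event_time[i] <= t + dt):
            continue
        for j in range(len(risk)):
            if event_time[j] > event_time[i]:          # eye j remains event-free longer
                comparable += 1
                concordant += float(risk[i] > risk[j]) + 0.5 * float(risk[i] == risk[j])
    return concordant / comparable if comparable else np.nan
```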
Additional evaluation was performed with a time-dependent Brier score B(t, ∆t) to assess model calibration, computed as the mean squared difference between the indicator I(·) of whether an eye develops the disease within the evaluation window and the corresponding predicted risk; this definition follows that of Lee et al.33 Models were evaluated across a range of prediction times t ∈ {1, 2, 3, 5, 8} and evaluation times ∆t ∈ {1, 2, 5, 8}. While it is common to consider 2- and 5-year risk for late AMD7,8,51 and POAG,14,15 we also include an 8-year risk assessment to showcase the long-range prognostic capabilities of LTSA.
Implementation details
All models were implemented and trained with PyTorch v2.0.1.52 Before training, all AREDS and OHTS images were downsampled to 224 x 224 resolution with bilinear interpolation to accelerate data loading. After loading each image, the following data augmentations were applied, each with probability 0.5: random rotation, color jitter, Gaussian blur, and a random resized crop back to 224 x 224. Each image was then standardized with the channel-wise mean and standard deviation across all training set images. The image encoder f_img(·) was an ImageNet-pretrained ResNet18, with weights made available through torchvision v0.15.2 (https://pytorch.org/vision/stable/index.html). This architecture was chosen because it is lightweight and demonstrated sufficient performance compared to more sophisticated and memory-intensive architectures in preliminary experiments. The Transformer encoder of LTSA contained four Transformer layers, each with eight attention heads, a feature dimensionality of d = 512, ReLU activation, and dropout of 0.25. The Transformer was trained from scratch with a diagonal causal attention mask prohibiting the model from attending to future elements of a sequence.
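A plausible torchvision transform pipeline matching this description is sketched below; the augmentation types and the probability of 0.5 come from the text, but the rotation range, jitter strengths, blur kernel, crop scale, and normalization statistics are illustrative assumptions.

```python
from torchvision import transforms

# Hypothetical parameter values; only the augmentation types and p = 0.5 follow the text.
train_transform = transforms.Compose([
    transforms.Resize((224, 224), interpolation=transforms.InterpolationMode.BILINEAR),
    transforms.RandomApply([transforms.RandomRotation(degrees=15)], p=0.5),
    transforms.RandomApply([transforms.ColorJitter(brightness=0.2, contrast=0.2)], p=0.5),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=5)], p=0.5),
    transforms.RandomApply([transforms.RandomResizedCrop(224, scale=(0.8, 1.0))], p=0.5),
    transforms.ToTensor(),
    # Channel-wise statistics computed over the training set (placeholder values).
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.25, 0.25, 0.25]),
])
```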
All classification heads (survival layer and step-ahead prediction layer) used a dropout of 0.25 on the incoming feature vectors. Both the baseline and LTSA were trained for a maximum of 50 epochs using early stopping with a "patience" of 10 epochs, based on a validation metric of the mean time-dependent concordance index across all 20 combinations of t and ∆t; specifically, if the validation metric did not improve for 10 consecutive epochs, training was terminated and weights from the best-performing epoch were used for evaluation to prevent overfitting. Both models were trained with the Adam optimizer53 and an initial learning rate of 1×10−4 with a "reduce on plateau" scheduler that halved the learning rate whenever the validation metric did not improve for 3 consecutive epochs. Since LTSA was trained with a batch size of 32 (sequences of length 14 each), the single-image baseline used a batch size of 448 (images) to match the number of examples seen per minibatch for a fair comparison.
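A hedged sketch of this optimization setup (Adam, reduce-on-plateau halving, patience-based early stopping on the validation concordance) is given below; the `model`, `train_one_epoch`, and `validate` objects are placeholders, and the exact loop structure is an assumption rather than the authors' training script.

```python
import torch

# `model`, `train_one_epoch`, and `validate` (returning mean time-dependent
# concordance on the validation set) are placeholders for this sketch.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=3)

best_metric, epochs_without_improvement = -float("inf"), 0
for epoch in range(50):
    train_one_epoch(model, optimizer)          # one pass over the training set
    metric = validate(model)                   # mean C(t, dt) over the 20 (t, dt) pairs
    scheduler.step(metric)
    if metric > best_metric:
        best_metric, epochs_without_improvement = metric, 0
        torch.save(model.state_dict(), "best.pt")
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= 10:   # early stopping patience
            break
```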
Statistical analysis
All performance metrics in this study are represented by the mean and 95% confidence interval obtained by bootstrapping the test set at the eye level. Specifically, 1,000 samples with replacement of the same size as the original test set were drawn, and nonparametric confidence intervals were obtained through the percentile method. All P-values were obtained by a one-sided Welch's t-test with the null hypothesis that the mean of the bootstrapped time-dependent concordance indices for LTSA did not exceed that of the baseline. To control the family-wise error rate and account for multiple comparisons, we apply the Bonferroni correction54 to all P-values by multiplying each raw P-value by 40, the number of performance comparisons made in this study. Significance levels were determined by the adjusted P-values as follows: **** = P ≤ 0.0001, *** = P ≤ 0.001, ** = P ≤ 0.01, * = P ≤ 0.05, ns = no significant difference. The enhanced box plot, or "letter-value plot," seen in Fig. 6, adapts the traditional box plot more appropriately for long-tailed distributions.55
Figure 1: Stages of AMD progression. Color fundus photography images illustrating the various stages of AMD, a progressive eye disease affecting the macula. Images come from the AREDS dataset accompanied by a 9-step AMD severity score; a score over 9 indicates late-stage AMD, which can cause blurring and loss of central vision. Green boxes highlight "drusen", yellowish deposits of protein under the retina, which can be an early sign of AMD. There are two forms of late AMD: "dry", or atrophic, AMD (also called geographic atrophy) and "wet", or neovascular, AMD. AMD = age-related macular degeneration; AREDS = Age-Related Eye Disease Study.
Figure 2: Overview of proposed longitudinal survival analysis approach. In longitudinal medical imaging, patients undergo repeated imaging over long periods of time at irregular intervals (a). Rather than predict the presence of disease at the time of imaging, our method leverages a patient's longitudinal imaging history to forecast the future risk of developing disease through a survival analysis framework (b). Our approach represents the collection of fundus images for an eye over time as a sequence fit for modeling with Transformers. To accommodate large, irregular intervals between consecutive visits, a temporal positional encoder fuses this information with the image embeddings from each visit. A Transformer encoder then employs causal temporal attention over the sequence, only attending to prior visits. The entire model is optimized end-to-end to predict the time-varying hazard function for each unique sequence of consecutive visits. From the hazard function, we compute eye-specific survival curves, allowing for dynamic eye disease risk prognosis evaluated through the framework of longitudinal survival analysis (c).
Figure 3: Late AMD prognosis results. Time-dependent concordance index C(t, ∆t) for various values of prediction time t and evaluation time ∆t comparing the single-image baseline model (blue) to LTSA, which incorporates all prior visits (orange). Box plots depict the values computed from 1,000 bootstrap samples of the test set (center line = median, box = IQR, whiskers = 1.5x the IQR from the box). Significance levels are determined from Bonferroni-adjusted P-values as follows: **** = P ≤ 0.0001, *** = P ≤ 0.001, ** = P ≤ 0.01, * = P ≤ 0.05, ns = no significant difference. AMD = age-related macular degeneration; IQR = interquartile range.
Figure 5: Longitudinal modeling better captures eye disease risk. Predicted survival curves comparing the baseline model (only using the last available visit) and our longitudinal model (using all available visits) prognoses for two unique eyes in the AREDS test set (a) and two unique eyes in the OHTS test set (b). Visualizations below each panel depict the longitudinal visit times, event times, and prognosis horizons for each eye in panels a and b, respectively. The longitudinal model not only correctly predicts higher risk (lower survival) than the baseline for each eye, but also correctly ranks the two eyes in terms of risk in accordance with how rapidly the eye will develop the disease. AMD = age-related macular degeneration; AREDS = Age-Related Eye Disease Study; OHTS = Ocular Hypertension Treatment Study.
Figure 6: Temporal attention analysis reveals which images contribute most to late AMD risk prognosis. Enhanced box plot of normalized attention scores for each AREDS test set image grouped by visit number from least to most recent (a). Attention scores and AMD severity scores for a sequence of visits from a healthy eye with typical attention patterns; the more recent visits are more influential than the earlier visits (b). Attention scores and AMD severity scores for a sequence of visits from an eye that developed late AMD with atypical attention scores; here, the second-to-last image received the greatest attention weight, corresponding with a jump in AMD severity from 5 to 8 (c). Attention scores are normalized such that the maximum score in each eye's sequence of images is 1 to account for variable sequence length. Images from 10 or more visits before the final visit are binned together to aid visualization. AMD = age-related macular degeneration.
Supplementary Figure 1: Auxiliary late AMD prognosis results. Time-dependent Brier score B(t, ∆t) for various values of prediction time t and evaluation time ∆t comparing the single-image baseline model (blue) to LTSA, which incorporates all prior visits (orange). Box plots depict the values computed from 1,000 bootstrap samples of the test set (center line = median, box = IQR, whiskers = 1.5x the IQR from the box). Significance levels are determined from Bonferroni-adjusted P-values as follows: **** = P ≤ 0.0001, *** = P ≤ 0.001, ** = P ≤ 0.01, * = P ≤ 0.05, ns = no significant difference. AMD = age-related macular degeneration; IQR = interquartile range.

Supplementary Figure 3: Longitudinal modeling better captures eye disease risk. Predicted conditional survival curves comparing the baseline model (only using the last available visit) and our longitudinal model (using all available visits) prognoses for two unique eyes in the AREDS test set (a) and two unique eyes in the OHTS test set (b). Visualizations below each panel depict the longitudinal visit times, event times, and prognosis horizons for each eye in panels a and b, respectively. AMD = age-related macular degeneration; AREDS = Age-Related Eye Disease Study; OHTS = Ocular Hypertension Treatment Study; POAG = primary open-angle glaucoma.
Table 1: Description of longitudinal eye disease datasets. Characteristics of AREDS and OHTS longitudinal imaging datasets for late AMD and POAG prognosis, respectively. The number of visits and years observed are reported per eye since each eye can possess a distinct disease status. The censoring rate is reported as a percentage of the number of unique eyes in the dataset, and the number of years to develop the disease is computed only from uncensored eyes. AMD = Age-related Macular Degeneration; AREDS = Age-Related Eye Disease Study; OHTS = Ocular Hypertension Treatment Study; POAG = Primary Open-Angle Glaucoma; sd = standard deviation.
Going home for tea and medals: How members of the flood risk management authorities in England construct flooding and flood risk management

The construction of flooding and flood risk management are complex and there is potential for dissonance between individual and institutional understanding and experience of both. In this article, we start by investigating how flooding is managed and the change in paradigm from flood defence to more adaptive approaches, which embed resilience into flood risk management. Using analysis of semi-structured interviews with members of the flood authorities in England, we explore how flood management authorities construct 'flooding' and establish that it is often defined by in-the-moment impacts. Whilst these in-the-moment impacts are understood to be devastating, there is less appreciation of the long-term human impacts of living at risk of flooding. We uncover how the construction of 'flood risk management' by the flood authorities is complicated by factors such as the construction of resilience, availability of funding, technical expertise and the responsibility fragmentation that the Floods and Water Management Act (2010) has created. We conclude that the differing constructions of flooding and flood risk management between flood management authorities in England hinder how flooding is managed. Therefore, we propose that a more nuanced understanding of flooding and flood risk management is essential for effective partnership working between flood risk management authorities and communities.
The construction of flooding and flood risk management are complex and there is potential for dissonance between individual and institutional understanding and experience of both. In this article, we start by investigating how flooding is managed and the change in paradigm from flood defence to more adaptive approaches, which embed resilience into flood risk management. Using analysis of semi‐structured interviews with members of the flood authorities in England, we explore how flood management authorities construct ‘flooding’ and establish that it is often defined by in‐the‐moment impacts. Whilst these in‐the‐moment impacts are understood to be devastating, there is less appreciation of long‐term human impacts of living at risk of flooding. We uncover how the construction of ‘flood risk management’ by the flood authorities is complicated by factors, such as the construction of resilience, availability of funding, technical expertise and responsibility fragmentation that the Floods and Water Management Act (2010) has created. We conclude that the differing constructions of flooding and flood risk management between flood management authorities in England hinder how flooding is managed. Therefore, we propose that a more nuanced understanding of flooding and flood risk management is essential for effective partnership working between flood risk management authorities and communities.
| INTRODUCTION
The constructions of 'flooding' and 'flood risk management' are complex and yet a collective understanding of these terms is fundamental to ensuring that the Phiala Mehring: Thames Regional Flood and Coastal Committee. organisations and institutions who are involved in managing flooding can communicate and work together effectively, particularly with flood communities.
Our earlier work (Mehring et al., 2018) uncovered differences in how key words associated with floods and flooding were constructed. For example, the dissonance in how 'partnership working' is constructed by flood management authorities often leads to flood communities feeling that their voices are not being heard because they do not see their knowledge or experience of flooding reflected in how flooding is managed. As observed by Roth et al. (2017), partnership working appears to frame participation as being upon a level playing field, yet in practice there are hidden inequalities of resources and power.
This raises questions around how far these differing constructions of flooding and flood risk management reach and whether over-arching words like 'flooding' are constructed in the same way by those who experience it, namely communities living at risk of flooding, and those whose role it is to manage it, namely flood risk management authorities. This article seeks to fill the gap in our knowledge by understanding how members of the organisations who manage flooding in England construct flooding and subsequently flood risk management.
What a 'flood' is may seem obvious to the casual observer; within this article, we aim to complicate the idea of flooding to reveal how it is constructed differently amongst the flood risk management authority members. We will take you through the development of a thematic understanding of how members of the flood authorities construct and experience flooding and flood risk management. Before we delve into the constructions of flooding and flood risk management and understanding how they are framed, it is important to first understand how flooding is managed in England and how flood management authorities are organised.
| The paradigm of flood risk management
Approaches to flood risk management and the authorities involved in managing flooding vary from country to country, with the histories, policies and flood risk strategies of each country undoubtedly impacting how members of the flood management authorities approach and understand flooding, and what flood risk management is and should be. We demonstrate this by outlining the approaches in England, The Netherlands, Germany and the USA, drawing out the similarities and differences between contexts, as well as making links to their own context from our work.
In England, the current paradigm of flood risk management has developed from an initial stance of flood defence, namely defending productive land from water (Scrase & Sheate, 2005; Werritty, 2006). The substantial flood events in the 1940s and 1950s, in which a large number of people sadly died (Lumbroso & Vinet, 2011; Scrase & Sheate, 2005), and the subsequent government reviews, shifted the paradigm from flood defence to one of flood risk management with the emphasis on keeping people safe (Donaldson et al., 2013; Nye et al., 2011).
This new approach to managing flooding has, over time, moved to an understanding that flooding cannot be stopped, it can only be managed and mitigated (Brown & Damery, 2002;Scott et al., 2013). Hence resilience has become a key feature of managing and mitigating flood risk and it now plays a dominant role in policy in England (Bottazzi et al., 2018;EA, 2020;Gov.UK, 2016). However, the concept of flood resilience is complex, in particular as 'resilience' is framed in many ways with many definitions (Bertilsson et al., 2019;Campbell et al., 2019). In addition, resilience is complicated by geography, finance, type of flooding and changes in patterns of flooding (Bubeck et al., 2017).
This policy shift towards resilience is moving flood risk management to focus more on anticipating, absorbing, and adapting to flood disasters (Bottazzi et al., 2018) where the aim of policy and protocols is for damage prevention, speedy recovery, and preservation of community functionality (Bertilsson et al., 2019;Ritzema & Loon-Steensma, 2018).
It is not just in England where flood risk management is adapting over time, shifting to a paradigm more centred on resilience. For centuries, The Netherlands has relied on protection as a means of managing flooding (Bubeck et al., 2017; Doberstein et al., 2018; Van Loon-Steensma & Vellinga, 2019). Flooding plays a dominant role in the Netherlands, with 26% of land located below sea level and a further 29% sensitive to flooding (Roth et al., 2017). Flood management is predominantly a state responsibility (Wiering & Winnubst, 2017) set at two levels: nationally (Rijkswaterstaat) and through more regional water authorities or boards.
The impacts of development, climate change (Roth et al., 2017) and increased flood risk have led the Netherlands to review how it approached flood risk management, such that at the turn of the century the concept of "room for the rivers" was developed (Doberstein et al., 2018;Hegger et al., 2016) which reframed flood risk management around approaches of avoid, accommodate and retreat. The subsequent Delta Programme further builds on this (Hegger et al., 2016;Zevenbergen et al., 2018) taking a longer view of the potential impacts of climate change, further shifting the flood risk management to more flexible and adaptive approaches.
In Germany flood risk management is the responsibility of the federal states (Länder) (Bubeck et al., 2017) and, in addition to structural flood protection measures, other non-structural approaches are utilised, for example, spatial planning policies. There is also an increasing responsibility on flood-prone residents and businesses to contribute to damage prevention. Germany takes this approach of resilience a stage further by requiring by law that private adaptation and resilience measures are taken by owners of flood-prone properties (Kuhlicke, 2010).
Likewise, USA policy strongly emphasises individual responsibility (Bubeck et al., 2017). In addition, an important feature of flood risk management in the USA is the federal National Flood Insurance Program (Bubeck et al., 2017).
This journey of policy from defence to flood risk management, utilising concepts of resilience and adaptation, has resulted in changes in who is involved in managing flooding. Flood defence was a very technocratic (Penning-Rowsell et al., 2006) and top-down paradigm, predicated on the role of the flood risk management authorities who carry out work on behalf of flood communities. It excluded the participation of flood communities from its ways of working (Donaldson et al., 2013). By contrast, modern flood risk management aspires to achieve a more integrated approach, one that aims for flood authorities to engage with flood communities.
After the 2007 floods in the UK, the Government conducted a(nother) review of flooding (Bubeck et al., 2017); the Pitt Review (Pitt, 2008) guided the development and enactment of the Floods and Water Management Act 2010 (Gov.UK, 2010) that shapes the current role of flood authorities today. The aim of this Act was to create a simpler and more effective means of managing the risk of flood and coastal erosion.
To simplify flood risk management, the Act sets out which bodies are responsible for different elements of flooding, effectively laying out who the flood authorities are within England (Figure 1).
The apparent simplicity of the above structure belies the complications which arise from water knowing no political nor administrative boundaries. Rainwater flows across catchments unhindered by these human-constructed boundaries. It makes no deference to being pluvial, fluvial or groundwater in source.
One of the key themes which came out of the Pitt Review (Pitt, 2008) was the need for the Flood Risk Management authorities to work in partnership to deliver more effective flood risk management, which seeks greater benefits through co-operation. This was duly embedded (section 13: Co-operation and arrangements) into the Floods and Water Management Act 2010 (Gov.UK, 2010). It is worthy of note that although the Pitt Review did discuss the involvement of flood communities in flood risk management, for example, in identifying the importance of engaging communities and how this can develop connectivity to flooding (Pitt, 2008), and the importance of community knowledge (Pitt, 2008), these elements did not make their way into the final Act. There is no onus or duty for flood authorities to engage, involve or work with the people and communities impacted by flooding.

FIGURE 1 The organisations involved in flood risk management according to the Flood and Water Management Act (Gov.UK, 2010)
Nonetheless, over the last few decades there has been an increased acceptance of the importance of involving flood communities in flood risk management (Challies et al., 2016; EA, 2020; Evers et al., 2016; Mehring et al., 2018). There is a clear perception that involvement can increase flood communities' connectivity to flooding through developing understanding about the sources and pathways of flooding (Ntontis et al., 2019), which can lead to increases in community resilience (Bark & Sutherland, 2019) and preparedness. This more integrated approach to managing flood risk is gaining increasing importance as climate change impacts the frequency and intensity of storms (Bark & Sutherland, 2019; CCC, 2016; Gov.UK, 2016; Thorne, 2014).
| The complexity in the construction of flooding
Flooding has many constructions: a physical construction, water where it should not be; an experiential construction, living through water entering a person's home; an emotional construction, the fear of rain, anxiety when leaving your home alone; a financial construction, no money to repair domestic damage; a climate change construction, the risk of increased flooding; amongst many, many others. An individual's construction of flooding is going to be defined through their experience of it.
Some of the above constructions are framed around 'in-the-moment' events, whilst others are related to long-term human impacts of flooding, as illustrated in Figure 2. Buildings dry out and can be recovered, whilst the more human impacts of flooding, the psychological, the emotional, the financial, the impact on relationships, can and do go on for years (Walker-Springett et al., 2017). If flooding is constructed differently within and between the authorities who manage flooding, this could readily hinder the communication and engagement that is required for effective flood risk management, potentially leading to conflict between the very groups, organisations and people that should be working closely together to manage flooding in an integrated manner (Roth et al., 2021). It is therefore important to understand how flooding is constructed by members of the authorities who manage flooding in England and identify any differences between the overarching authorities. This article aims to fill this knowledge gap.

FIGURE 2 Some of the long-term human impacts of flooding
| Data collection
The data for this research was gathered through 30 semi-structured interviews with members of Flood Risk Management Authorities from across England: 13 from the Environment Agency; 9 from Lead Local Flood Authorities (LLFAs), including interviewees from Highways departments or Internal Drainage Boards (IDBs); and 8 from Water Companies. All interviewees had professional roles which required engaging and working with flood communities. For reasons of confidentiality, the geographical location of the interviewees will not be disclosed. The interviews were conducted from December 2018 running through to June 2019, which covered the flooding from Storms Ciara and Dennis (Figure 3).
The interviews were set against the backdrop of a substantial consultation period for the Flood and Coastal Erosion Risk Management (FCERM) National Strategy, which led to the publication of the National Flood and Coastal Erosion Risk Management Strategy (EA, 2020) for England on 14th July 2020. This consultation will undoubtedly have had an influence on some of the interviewees, with a number of them making direct observations about the consultation.
| Interview questions
The interviews were framed around questions designed to access experiences and constructions of flooding and flood risk management, such as:
• What is your experience of flooding?
• How would you describe flooding to someone who does not work in flood risk management/live at risk of flooding?
• What does flood risk management mean to you?
• What is working well in flood risk management?
Thematic analysis was used to understand and interpret the information gathered in the interviews. This approach enabled the layering of meaning to understand sense and themes within the sections/sentences and within the information as a whole (Kitchin & Tate, 2000). Care was taken to avoid identifying themes purely based on frequency of use by an individual interviewee, as this runs the risk of biasing the full analysis towards themes that, although frequently mentioned, might only be relevant to a few individuals.
| Thematic framework
The entire thematic framework consisted of 142 subthemes which sat under 17 over-arching themes (Table 1).
FIGURE 3 Flood events timeline during interviews
The individual sub-themes added greater detail to the meaning of the over-arching themes (Bark & Sutherland, 2019). For example, the 'impact of flooding' had 19 sub-themes, of which examples are listed in Table 2.
| WHAT IS FLOODING AND FLOOD RISK MANAGEMENT TO MEMBERS OF FLOOD RISK MANAGEMENT AUTHORITIES?
In this section, we start by deconstructing flooding to appreciate the balance between the comprehension of flooding as an 'in-the-moment' impact versus the longterm human impacts. We gain an understanding of the role that personally experiencing flooding plays in the construction of flooding for members of flood management authorities, before moving on to flood risk management and gaining an understanding of how expertise, funding and responsibility fragmentation (Hegger et al., 2016) impact the construction of flood risk management. Finally, we unpack the differing constructions of flooding and flood risk management amongst the differing flood authorities.
| Flood authority perceptions of what flooding means to those who flood
The interviews conducted for this research highlighted that the in-the-moment impacts of flooding are understood to be devastating by the flood authorities.
So, it's devastating. I think that word really, sums it up, you really need to get under the skin of devastating [Interviewee-FA13].
Including the impacts immediately following a flood.
XXXX (names individual) describes about how the family had to go to Sainsbury's, to get a shower, to wash, to use the loo [Interviewee-FA11].
Bound up in this understanding of the 'in-the-moment' impacts of flooding is the knowledge that many of these impacts can only be fully understood by witnessing them. That is, the lived experience of flooding or being involved in a flood response is critical to understanding the emotional impact of a member of the public's home being flooded.
I'm probably one of the only people, well myself and [named individual] who's involved and he's a highways drainage person……. We're some of the only people who've got that direct experience of going out to people's properties after a large scale flood event or even a small scale flood event………. that's no fault of any of the other officers and no fault of their own, we just haven't that than major flood event like 2007 and 2012, two pretty bad years [Interviewee-FA21].
The comprehension of the devastation of flooding extends to some understanding of the stresses and strains of the long-drawn-out process of recovering from flooding. Yet this understanding is rather one-dimensional, framed around the physical elements of recovery and less so the emotional and psychological components.
It's awful, its devasting, it takes months to dry out and the damage is immense, that's quite worrying. [Interviewee-FA16].
These physical in-the-moment discussions dominated much of the understanding of what flooding is from a flood authority perspective. However, within some flood authority interviews there was some appreciation of the mental health impacts of flooding, the lived experience of flooding. These were understood as ranging from the fear of leaving your home to the constant need to track the weather. These are all symptoms of an individual feeling the stress of facing something over which they have no control (Gutteling et al., 2017). Whilst to the outsider, these behaviours may seem mal-adaptive and at worst pathological, they provide a means for the individual to take back some control, to increase their coping capacity (Wamsler & Brink, 2014).
The need for these types of coping behaviour is not well understood by those who do not flood, and the 'irrationality' of these behaviours is often perceived as not being helpful, that they cannot stop flooding and potentially only act to intensify feelings of stress.
I am not criticising the guy I am just sort of saying how much time it takes you to go through 23 apps. …. it is almost addictive and there are people that are, how can I put this, that are so obsessed with the flood action group that I am not sure that it is that healthy for them because they are just obsessing on the subject [Interviewee-FA15].
Other flood authority interviewees understood a little more. They appreciated that the stresses and strains of living at risk of flooding are hard to comprehend without fully understanding the situation. I think we are too quick to judge and say, you know, that, that's irrational and that's it. But you don't, none of us know the backstory what they've been through to get them to that state where that it is normal for them [Interviewee-FA8].
| Moving to understanding long-term impacts
There is evidence from our research that personal experiences/witnessing of flooding can alter and morph an individual's construction of flooding leading to a better more nuanced understanding of the more long-term impacts. For example, one of the interviewees had their construction of flooding altered by visiting a flooded home: I came to a place, a remote farm, a house next to the river XXXX and hung up on the washing line were individual photos, each one of them with an individual peg on it, drying [Interviewee-FA22].
Being faced with such a visceral depiction of what flooding is led the interviewee to reframe their construction of flooding.
When the blue lights turn off and the river levels dropped off to a benign level, it's very easy to think that the flood has finished. It hasn't, has it? Hydraulically it has finished, you know, on, you know, on our data screens it might have finished. But it's only just beginning for flood victims [Interviewee-FA22].
Here the construction of flooding moves from a simple framing of hydraulics to a much more complex understanding that flooding has very human impacts which linger long after the flood waters have receded. This change in the construction of flooding was only achieved through a lived experience of flooding.
This demonstrates how a more nuanced understanding of what flooding is, is possible when members of flood risk management authorities have direct experience of flooding, for example, by visiting flooded homes, talking to people whose homes have flooded or being directly involved in the recovery process.
| Beyond 'In-the-moment' impacts
Some flood authority interviewees understood that flooding extended beyond the immediate immersion of the home in water and that the stresses, strains, emotional and psychological impacts of flooding went through the recovery process and continued into life after flooding. For example, that some people are so terrified of leaving their homes that they do not go on holiday, or they very closely monitor the 'home' situation whilst on holiday.
I mean it does affect people, people are afraid to leave their home as consequence…. it does affect people's enjoyment of their holidays, and in fact whether they'll go far on that holiday and things like that. You know, they are, they keep watching, they're on holiday, but they're watching the weather back home [Interviewee-FA19].
| What is flooding to the flood authorities?
For members of flood management authorities in England, flooding is regularly constructed as an in-the-moment event, which often excludes the potential for long-term human impacts such as the stress of having to live with the possibility of flooding again. This is despite the fact that it is now recognised that Post Traumatic Stress Disorder (PTSD) is quite common after flooding (Waite et al., 2017). Through the researcher's own experience of having friends whose homes have flooded and who have PTSD, they have witnessed people whose lives become dominated by behaviours which appear irrational to those who have not experienced their home flooding.
| What is flood risk management?
The interviewees' construction of flood risk management was more individualised, shaped not only by the flood authority that the interviewee worked for, but also through the personal experiences of those interviewed and linked to the individual's work life in the days or weeks running up to the interview. Here their work biographies and the emotions of past experience readily influence their construction of flood risk management (North & Nurse, 2014).
In some instances, there were events that stuck in the memory of the interviewee. For example, several interviewees talked about engagement that had gone terribly wrong. One interviewee referred to a public meeting where members of the local community demanded a very confrontational approach to the meeting with a top table of flood authorities and an audience of locals who had flooded. Here, flood authorities felt a sense of confrontation and anger from flood communities, and a sense of being attacked or the need to defend themselves.
it's a lynching, they want to give you a lynching, you know, and they want it to be very public [Interviewee-FA8].
This can have the effect of altering approaches to community engagement and hence the part that community engagement plays in flood risk management. Many public meetings are now structured as drop-ins to reduce the risk of confrontation. The danger here is that the construction of flood risk management is moving away from the important holistic approach required to manage a systemic complex risk like flooding (Castaños & Lomnitz, 2009; Renn, 2015) to more individual approaches.
| The role of 'expertise'
Flood risk management is also frequently constructed around technical expertise (Wiering & Winnubst, 2017), where the expertise of the flood authorities takes precedence. The old technocratic approaches to managing flooding can still retain a firm grip on ways of working. Some interviewees understood this was not a positive stance to take, with the humorous sarcasm in this comment leading to the title of this article.
we are the experts. This is what we'll do for you. That's brilliant isn't it. Yeah. Are you happy with that, thank you very much? We'll go home for tea and medals……

Defining oneself as 'the expert' can lead to flood risk management projects being designed without any engagement with the local community (Barnes & Schmitz, 2016). Walker et al. (2006) liken this to a 'rubber-stamping' approach by government agencies, in that they determine what is required and what is 'good' and 'necessary'.
Thus, consulting the local community becomes more about informing, telling people what you, as the flood authority, as an expert, are going to do for them (the community) on their behalf. I think we often use the word, for example, consult when we mean inform [Interviewee-FA18].
These technocratic ways of working drive knowledge and power hierarchies (Mehring et al., 2018), and there are risks associated with this approach, not least the exclusion of local knowledge which could be vital to the success of the project: for example, that dropped kerbs direct water in a direction that models do not predict, something which is well understood by the local community but not captured in the models. In addition, the exclusion of communities from decisions that will affect and impact their lives (Yamamoto, 2012) presents a risk to effective flood risk management. Here flood risk management is framed around technocratic ways of working which exclude community knowledge and experience.
| The role of funding
How flood risk management is funded is clearly important, never more so than during a period of national financial constraints. Without funding, infrastructure cannot be built, community groups cannot be supported, and modelling cannot be run. Funding is not just about who pays for what, it is also an important element in the construction of flood risk management. No matter what the flood authority interviewees thought flood risk management should ideally be, almost everyone felt that it was shaped, if not hindered, by a lack of funding. I think it just always comes down to funding at the end of the day. Yeah. That, that's always always a really, really big challenge.
This in turn can lead to the 'fobbing off' of problems to avoid paying for the resolution of them.
It's people again, organizations not wanting to put our hands up and admit to the issue because there's a budgetary impact of it [Interviewee-FA21].
The lack of funding is perceived to be a real problem in implementing flood risk management schemes and projects. It is experienced as the flood authorities being stymied in how, and if, they can get funding for projects and funding to engage with flood communities, which can lead to very guarded approaches to communities. This creates conflict amongst the various flood actors (Thaler & Priest, 2014) and it can result in the flood authorities being very cautious about flood risk management and feeling the need to manage the expectations of flood communities. Many of the flood authority interviewees felt that it is hard to talk to a community about reducing their flood risk when they are worrying about getting funding. Better to manage expectations that nothing may happen than promise a solution only to find no one will pay for it.
And there's a scheme that just doesn't stack up financially, which is really difficult to tell people just doesn't fit, square pegs in round holes and all of that [interviewee-FA13].
Here, the construction of flood risk management is very much shaped by the lack of funding.
We've hit a point I think where it's now become, um, it's not cost effective to deliver, to fix flooding anymore [Interviewee-FA28].
| Responsibility fragmentation
One of the objectives of the Pitt Review was to simplify flood risk management. Yet many interviewees verbalised a concern that current flood risk management policy creates additional complexity through the fragmentation of responsibility. With a systemic risk like flooding there is rarely a single problem to be solved. Splitting responsibility amongst the flood authorities creates silos leaving elements of flooding not clearly owned by a flood authority, resulting in situations where there is no apparent ownership of them.
And uh, it being so fragmented sometimes, no one actually, um, um, grabs it as an issue and says, we're going to take the, um, you know, we're going to, we're actually going to run with this. So, I can appreciate that completely. And, um, as a, um, somebody who's worked in flood risk management it also is a frustration .
Rather than having one organisation or government department responsible for flooding, the Floods Act broke down responsibility according to the source of the flooding. From the interviews we heard that far from increasing the focus on responsibility it diluted it. One of the impacts of this dilution is that flood problems can be 'fobbed off' between organisations.
When you see some flooding out in your road and you're a normal resident who has never experienced flooding before. You have absolutely no idea who is responsible. And the agencies can justifiably fob you off on each other for years and years and years and years because the water company will say 'the road is flooding' we are a sewage company we don't have a connection. The Highways agency are saying the road is flooding but it (rain) can't get into the sewer. And the environment agency will say, well, just because the sewer is full doesn't mean you shouldn't have an alternative way to discharge. Yeah. It's a little bit rubbish, I think because essentially no one's responsible. I think it's a highly fragmented legislated framework [Interviewee-FA24].
Fragmentation of responsibility leads to problems 'slipping through the net' and hence poor flood risk management (Challies et al., 2016). A systemic risk like flooding requires a holistic approach (Renn et al., 2011), one that considers all facets of flooding from source, pathway through to impact (and the very human impacts of flooding). Here flood risk management is framed around the complexity of navigating who is responsible for what.
| What about resilience?
Given that resilience has become a key theme of current flood risk management policy within and beyond the UK, one would therefore have expected that a holistic construction of resilience would feature heavily in the semi-structured interviews. However, this is not the case. Whilst 20 of the 30 interviewees did use the word 'resilience' at least once, these mentions were often associated with a singular dimension of resilience, for example, property level resilience, being part of a resilience team at work or simply used in such a manner where it is hard to pin down exactly what the individual meant by the term resilience.
From all the mentions of the word resilience, over half were from members of the Environment Agency. There is a clear sense that the word 'resilience' is now embedded into the language of the Environment Agency, but that it might not always hold practical understanding. It is worth noting here that these interviews were carried out before the launch of the Environment Agency's new Flood Risk Management Strategy (2020), which represents another shift in focus towards resilience.
Resilience is complex, with a range of definitions and meanings; this is clearly reflected in the way that the interviewees talked about resilience. Resilience was used in a somewhat arbitrary and very diverse manner in many of the interviews.
We look after the flood warning service. So, we look after changing triggers, changing extents, making sure that they are at the right level, trying to get new telemetry gauges to create new flood warning areas and making that the best it can be. And then we deliver, um, resilience as well [Interviewee-FA1].
And for some interviewees, the only mention of resilience in their interview was in relation to a team name. At a time where climate change impacts require a unified response to flood risk management, it is disconcerting, although understandable given the global lack of consensus about what resilience is, to find that the construction of resilience is so fluid, meaning different things to different people. This was summed up well by one of the interviewees.
I think when, you know, when we talk about resilience as an organization and we're very, we're very sort of verbal, talk about, you know, can we improve resilience for this. Actually, it doesn't mean anything to people is, it's just another word really at the end of the day [Interviewee-FA12].
| The differing constructions of flooding and flood risk management between the flood risk management authorities
Our research interviews identified themes around the construction of flood risk management that run across and through the flood authorities, with the emphasis on various elements changing for different organisations. The Environment Agency responses to the interview questions put more of an emphasis on themes like the practicalities of flood risk management; the need for a holistic approach; the need for people to be aware of their flood risk; and the impact that the lack of funding has on flood risk management. The human impacts of flooding had lesser prevalence in these interviews and, when they were discussed, the discussion was often framed around the emotional response to rain and to flooding, that is, in-the-moment rather than lifelong impacts. It is not true to say that this is the case for every Environment Agency individual interviewed. A number of individuals understood very well the long-term human impacts of living at risk of flooding and articulated this well.
These perspectives challenge the outputs of the Pitt Review and the resultant Flood and Water Management Act (2010) and question whether the Environment Agency's work, and its desire to engage with flood communities, is stymied by national policy and by how the Environment Agency is funded.
For the LLFAs, the human impacts of flooding had as much importance to the interviewees as the lack of funding. Other important themes are framed around engagement, working together, communication and partnership working. This reflects the differences in the way that the Environment Agency and LLFAs work. LLFAs are, by their very role, more embedded into local communities. They have 'constituents' with whom they work, and their role is the effective management of the borough/region and the people living within it. It is possible that it is this proximity to the communities that engenders a more community and partnership focus, with an emphasis on engaging with their local communities. This, of course, is not a panacea for the perfect flood risk management approach: LLFAs themselves encounter and create flood risk management problems. However, it does offer some opportunity for understanding what does work and which ways of working could be emulated elsewhere.
For the water companies the picture was more complex. This complexity could, of course, come from the fact that, as companies with shareholders, it is their boards that set the dynamic for the company and its business goals. Ultimately, the board, along with OFWAT determinations, determines what can and cannot happen and what can and cannot be funded, all set within the context of business interests. This will inevitably impact ways of working.
Nonetheless, there were still common water company themes. Every person interviewed from a water company spoke about engagement as a means of getting a message across, for example, 'bin it don't flush it', educating customers not to flush items down their toilet which should not be put down the toilet. This came across in interviews often as customers causing self-inflicted flooding.
80% of that 98% (flooding incidents) are due to self-inflicted, so putting in rags or wipes or that sort of thing which creates a blockage .
The use of the word customer was also interesting, if not obvious. Occasionally in Environment Agency and LLFA interviews communities were referred to as customers, but because of the business nature of the water companies, they are dealing with customers and duly call them so. The best example in this research is the concept that the customer is 'King', where individual customer complaints often result in the company having to respond quickly to ad-hoc events, and this can result in 'knee jerk' responses. Providing excellent customer service is one of the four key themes of OFWAT's 2019 price review (PR19) (OFWAT, 2019). Therefore, if a customer makes a complaint an urgent response is required, and this can take precedence over more long-term flood risk management plans.
I can say a lot of challenges internally from many directors about this customer is King and where we want all that PR when we're trying to keep under the radar, uh, not get bad press [interviewee-FA28].
Because at the moment we're rated based on when a customers got a problem with us, how do we react basically [Interviewee-FA28].
Another interesting theme which came through a number of Water Company interviews was that Water Companies can feel that they are being portrayed as the 'bad guys' of flooding.
…… I'm sure we're still seen as a bit of a bad guy [Interviewee-FA27].
Sometimes there was a perception that other flood authorities think water companies have lots of money available and therefore should be footing the bill.
uh, and they also don't understand what companies can do and what they can't do in terms of what they spend the money on. You know, if we spend water bill payers money, we have to spend it in a way that benefits customers in terms of the use of our assets. We can't just get someone money cause it feels like a jolly nice thing to do. They don't even associate what we do with capped limits .
Water Companies also perceive that their proximity with customers leaves them exposed to complaints which may not be their fault simply because they are working face to face with customers.
For Water Companies, flood risk management is constructed around business requirements and the need to keep their customer happy and not making complaints.
| Differing constructions
These differing constructions of flood risk management amongst the flood authorities can readily add to the complication of responsibility fragmentation by making communication more complex. Consider the situation where a flood risk management partnership between flood authorities is being set up to manage flooding effectively. If one of those partner flood authorities constructs flood risk management through the development of engineering schemes utilising its own expertise, and another partner authority constructs flood risk management as working together to pool knowledge, looking at all options including natural flood management and developing a collaborative proposal, differences will inevitably occur. This could readily lead to a situation where each authority, and other flood actors, feels that the others are not 'doing' good flood risk management because it does not match their own construction of what flood risk management is.
| HOW IS FLOODING AND FLOOD RISK MANAGEMENT CONSTRUCTED BY MEMBERS OF FLOOD MANAGEMENT AUTHORITIES IN ENGLAND?
The construction of flooding and flood risk management by members of the English flood authorities is complex and heavily framed around in-the-moment issues and impacts. Although these constructions contain some understanding of the temporal elements of flooding, the long-term human impacts of living at risk of flooding are not yet fully understood and therefore play a limited role in the construction of flooding and flood risk management. Whilst the fear of rain and the associated behaviour to monitor rainfall might be recognised as a symptom of living at risk of flooding, the reasoning behind why this behaviour occurs is not well understood by the flood authorities. This understanding is troubled by a perception that these anxiety-inducing reactions are making the situation worse and that, ideally, the individuals at risk of flooding should stop doing it.
If the flood authorities were to gain a better understanding of the visceral, emotional and psychological causes and impacts of these behaviours and build these concepts into their construction of flooding, it would enable them to better communicate with flood communities and each other, relate to the flood reality communities are living through and facilitate partnership working. All of which could assist in better management both of flooding and the long-term human impacts of it.
It is important to observe that whilst national flood policy is objective in its aims, what actually happens 'on the ground' is framed through the experiences and interpretation of the individual members of the flood authorities. This juxtaposition of the institutional function of flood risk management compared to the personal experience raises questions about how these two influences could (or should?) be balanced to optimise effective flood risk management or whether this undermines the practice of effective flood risk management.
If the Flood and Water Act sought to simplify flood risk management by making flood authorities responsible for managing specific elements of flooding, it has failed to do this. The flood risk management structure it imposes on the flood authorities fragments responsibility, creates disconnects between the flood authorities, and fails to recognise that flooding and flood risk management are constructed in completely different ways amongst the various flood actors.
The results of this research identify the need to acknowledge that flooding and flood risk management are complex in their construction and mean different things to different people, that is, there is dissonance in their constructions. From this acknowledgement and understanding can come equitable and effective partnership working. Flood authorities and other flood actors need to work together and with flood communities to develop constructions of flooding and flood risk management that are meaningful and accessible to all involved. Top-down approaches to community engagement need to be addressed and converted to more bottom-up ways of working that connect and resonate more with communities. Our research suggests that this is a priority and something that requires further research.
In addition, the complication around 'what is resilience' from a flood authority perspective risks rendering this important concept arbitrary, again meaning totally different things to different people. As already acknowledged, diluting resilience through a lack of clear construction and definition is dangerous in a world where climate change is seriously impacting flood risk.
This research also highlights the importance of the flood authorities and other flood actors understanding that flooding is not purely a single 'event' and that it has long-term human impacts; water in someone's home is only the start of flooding for the flood community. Without this understanding, managing an increasing and systemic risk like flooding will be challenging, with flood actors all working along different trajectories.
ACKNOWLEDGEMENTS
The authors thank Josie for sending invites to the individuals the authors wished to speak to; without Josie, making contact with the interviewees would have been a tortuous task. Cloke was supported through the NERC EVOFLOOD project (Evaluation of Global Flood Risk; NE/SO15590/1). Clark and Cloke were supported through the NERC Understanding the Effectiveness of Natural Flood Management programme's NERC LANDWISE project (NE/R004668/1), and Clark was supported through the EPSRC Twenty65 project (EP/N010124/1). All photos authors' own.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
A Multi-Scale Wavelet 3D-CNN for Hyperspectral Image Super-Resolution
Super-resolution (SR) is significant for hyperspectral image (HSI) applications. In single-frame HSI SR, how to reconstruct detailed image structures in high resolution (HR) HSI is challenging since there is no auxiliary image (e.g., HR multispectral image) providing structural information. Wavelet could capture image structures in different orientations, and emphasis on predicting high-frequency wavelet sub-bands is helpful for recovering the detailed structures in HSI SR. In this study, we propose a multi-scale wavelet 3D convolutional neural network (MW-3D-CNN) for HSI SR, which predicts the wavelet coefficients of HR HSI rather than directly reconstructing the HR HSI. To exploit the correlation in the spectral and spatial domains, the MW-3D-CNN is built with 3D convolutional layers. An embedding subnet and a predicting subnet constitute the MW-3D-CNN; the embedding subnet extracts deep spatial-spectral features from the low resolution (LR) HSI and represents the LR HSI as a set of feature cubes. The feature cubes are then fed to the predicting subnet. There are multiple output branches in the predicting subnet, each of which corresponds to one wavelet sub-band and predicts the wavelet coefficients of HR HSI. The HR HSI can be obtained by applying inverse wavelet transform to the predicted wavelet coefficients. In the training stage, we propose to train the MW-3D-CNN with L1 norm loss, which is more suitable than the conventional L2 norm loss for penalizing the errors in different wavelet sub-bands. Experiments on both simulated and real spaceborne HSI demonstrate that the proposed algorithm is competitive with other state-of-the-art HSI SR methods.
Introduction
Hyperspectral image (HSI) is collected in contiguous bands over a certain electromagnetic spectrum, and the spectral and spatial information in HSI is helpful for identifying and discriminating different materials in the scene. HSI has been applied to many fields, including target detection [1], environment monitoring [2], and land-cover classification [3]. However, the spatial resolution of HSI is often limited due to the trade-off between the spatial and spectral resolutions. Some Earth Observation applications, such as urban mapping [4] and fine mineral mapping [5], require high resolution (HR) HSI. Therefore, enhancing the spatial resolution of HSI is of significance for the application of HSI.
There are several ways to enhance the spatial resolution of HSI. Some auxiliary images, e.g., panchromatic image and multispectral image (MSI), often have higher spatial resolution [6]. Hyperspectral pan-sharpening reconstructs HR HSI by fusing the low resolution (LR) HSI with a HR panchromatic image taken over the same area at the same time (or a similar period). Pan-sharpening could be implemented by a variety of fusion strategies.
The main contributions of this work are summarized as follows:
• Unlike the previous deep learning models that reconstruct HR HSI directly [29–32], the proposed network predicts the wavelet coefficients of the latent HR HSI, which is beneficial for reconstructing detailed textures in HSI.
• In the predicting subnet, different branches corresponding to different wavelet sub-bands are trained jointly in a unified network, and the inter sub-band correlation can be utilized.
• The network is built based on 3D convolutional layers, which could exploit the correlation in both spectral and spatial domains of HSI.
• Instead of the conventional L2 norm, we propose to train the network with the L1 norm loss, which is fit for both low- and high-frequency wavelet sub-bands.
The remainder of the paper is organized as follows. In Section 2, we introduce some related works. In Section 3, we present the proposed HSI SR method, including the architecture and training of the network. The experimental results are given in Section 4. In Section 5, we present some analyses and discussion on the experiment. In Section 6, we conclude with observations specific to the potential of our approach to single-frame HSI SR.
CNN Based Single Image SR
CNN could extract features from the local neighborhood of image by convolving with trainable kernels, which makes it easy to exploit spatial correlation in an image. CNN has become the most popular deep learning model in many image processing tasks, particularly in image SR [40][41][42][43][44][45][46].
In [38], Dong et al. proposed to learn the mapping between the LR and HR images using a CNN. The HR image can be inferred from its LR version with the trained network. Inspired by this idea, several CNN based single image SR methods have been proposed [41][42][43][44][45][46]. In [41], a very deep CNN for SR was proposed and trained with a residual learning strategy. Trainable parameters would drastically increase in very deep CNN, and a recursive CNN was proposed to address this issue by sharing the parameters of different layers in [42]. Most CNN SR methods employed the high-level features for reconstruction and neglected the low- and mid-level features. In [43,44], the authors proposed a residual dense network for SR, in which layers were densely connected to make full use of the hierarchical features. To address the challenge of super-resolving an image by large factors, the authors in [45] proposed progressive deep learning models to upscale the image gradually. Similarly, a Laplacian Pyramid SR CNN (LapSRN) was proposed in [46], which could progressively reconstruct the high-frequency details of different sub-bands of the latent HR image.
Application of Wavelet in SR
Wavelet describes image structures in different orientations. Employing wavelet in image SR, particularly the high-frequency wavelet sub-bands, is beneficial for preserving the detailed image structures. Many wavelet based SR methods have been proposed [47][48][49][50]. In [47], the LR image was decomposed into different wavelet sub-bands, the high-frequency sub-bands were interpolated and then combined with the LR image to generate HR image via inverse wavelet transformation. Similarly, the LR image was decomposed by two types of wavelets, and the high-frequency sub-bands of the two wavelets were then combined and followed by inverse wavelet transformation [48]. In [49,50], edge prior was utilized in the high-frequency sub-bands estimation to make the SR result sharper. Wavelet could also be used in CNN to better infer image details and enhance the sparsity of the network. For example, in [34,35], the mapping between the LR and HR images was learned by a CNN in wavelet domain for single image SR. However, these SR methods were designed for a single image, therefore applying these methods to HSI in band-by-band fashion would neglect the spectral correlation in HSI and lead to high spectral distortion.
Multi-Scale Wavelet 3D CNN For HSI SR
In this study, we transform the HSI SR problem into predicting the wavelet coefficients of HSI. In this section, we first introduce some basics on wavelet package analysis and 3D CNN, then we propose the MW-3D-CNN for HSI SR, including the architecture and the loss function.
Wavelet Package Analysis
Wavelet package transformation (WPT) could transform an image into a series of wavelet coefficient sub-bands with the same size. An example of WPT with the Haar wavelet function is given in Figure 1. The one-level decomposition is shown in Figure 1b. It can be found that the low-frequency sub-band (i.e., the top-left patch) describes the global topology. The detailed structures in vertical, horizontal, and diagonal orientations can be captured by different high-frequency sub-bands (i.e., the rest of the patches). By repeating the decomposition on each sub-band recursively, we can obtain higher-level WPT results, such as the two-level decomposition in Figure 1c. It is noted that the decomposition is applied to both the low- and high-frequency sub-bands, so the sub-bands of higher-level decomposition are of the same size. The original image can be reconstructed from these sub-bands via inverse WPT.
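To make the decomposition concrete, the following minimal NumPy sketch implements a recursive Haar wavelet package decomposition of a single band and the inverse of its one-level step; it is an illustration of the idea under our own function names, not the implementation used in the paper.

```python
import numpy as np

def haar_wpt_level(img):
    """One-level 2D Haar wavelet package decomposition of an even-sized band."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-frequency sub-band (global topology)
    lh = (a - b + c - d) / 2.0   # detail sub-band
    hl = (a + b - c - d) / 2.0   # detail sub-band
    hh = (a - b - c + d) / 2.0   # detail sub-band
    return [ll, lh, hl, hh]

def haar_iwpt_level(ll, lh, hl, hh):
    """Exact inverse of haar_wpt_level."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    img = np.empty((2 * ll.shape[0], 2 * ll.shape[1]), dtype=ll.dtype)
    img[0::2, 0::2] = a
    img[0::2, 1::2] = b
    img[1::2, 0::2] = c
    img[1::2, 1::2] = d
    return img

def haar_wpt(img, levels=2):
    """Recursive WPT: every sub-band (low and high) is decomposed again."""
    bands = [img.astype(np.float64)]
    for _ in range(levels):
        bands = [sb for band in bands for sb in haar_wpt_level(band)]
    return bands  # 4**levels sub-bands, all of the same size

band = np.random.rand(32, 32)
subbands = haar_wpt(band, levels=2)               # 16 sub-bands of size 8 x 8
ll, lh, hl, hh = haar_wpt_level(band)
assert np.allclose(haar_iwpt_level(ll, lh, hl, hh), band)   # perfect reconstruction
```

For a two-level decomposition of a 32 × 32 band this yields 4² = 16 sub-bands of size 8 × 8, and applying the one-level inverse recursively recovers the original band.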
3D CNN
For HSI, both spatial and spectral domains should be exploited in feature extraction. By convolving with 3D kernels, 3D CNN could extract features from different domains of volumetric data. The activity of the k-th feature cube in the d-th layer, following the formulation in [51], can be written as

F_{d,k}(x, y, z) = g\left( \sum_{c} \sum_{u=0}^{U-1} \sum_{v=0}^{V-1} \sum_{w=0}^{W-1} K_{d,k,c}(u, v, w)\, F_{d-1,c}(x+u,\, y+v,\, z+w) \right),

where c runs over the set of feature cubes in the (d-1)-th layer connected to the k-th feature cube in the d-th layer, K_{d,k,c}(u, v, w) is the value at position (u, v, w) of the 3D kernel associated with the k-th feature cube, and the size of the 3D kernel is U × V × W. F_{d,k}(x, y, z) is the value at position (x, y, z) of the k-th feature cube in the d-th layer, and g(·) is a non-linear activation function such as the Rectified Linear Unit (ReLU) or the Sigmoid function. By convolving with different kernels, several 3D feature cubes can be extracted in each layer of a 3D CNN, as shown in Figure 2b. Pixels of the spatial neighborhood and adjacent bands are involved in 3D convolution, and the spectral-spatial correlation in HSI can be jointly exploited in feature extraction [52,53].
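As a plain NumPy illustration of this formulation (a sketch with hypothetical helper names; 'valid' convolution is shown here, whereas the network itself uses zero padding to keep the cube size), one output feature cube can be computed as follows.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv3d_feature_cube(prev_cubes, kernels, activation=relu):
    """Compute one output feature cube of a 3D convolutional layer.

    prev_cubes: array of shape (C, X, Y, Z) -- feature cubes of layer d-1
    kernels:    array of shape (C, U, V, W) -- one 3D kernel per connected input cube
    Returns an activated cube of shape (X-U+1, Y-V+1, Z-W+1).
    """
    C, X, Y, Z = prev_cubes.shape
    _, U, V, W = kernels.shape
    out = np.zeros((X - U + 1, Y - V + 1, Z - W + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            for z in range(out.shape[2]):
                # sum over connected cubes c and kernel offsets (u, v, w)
                patch = prev_cubes[:, x:x + U, y:y + V, z:z + W]
                out[x, y, z] = np.sum(patch * kernels)
    return activation(out)

prev = np.random.rand(2, 8, 8, 8)      # two feature cubes from the previous layer
k = np.random.rand(2, 3, 3, 3)         # one 3x3x3 kernel per connected cube
cube = conv3d_feature_cube(prev, k)    # shape (6, 6, 6)
```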
Network Architecture of MW-3D-CNN
The correlation exists not only in the spatial and spectral domains, but also among the wavelet package sub-bands of HSI. Considering the inter wavelet package sub-bands correlation, an embedding subnet is designed to learn shared features for different wavelet package sub-bands. These shared features are then fed to a predicting subnet to infer the wavelet package coefficients. Both of the embedding and predicting subnets are built based on 3D convolutional layers, which could naturally exploit the spectral-spatial correlation in HSI. The overall architecture of MW-3D-CNN is shown in Figure 3.
Embedding Subnet
The embedding subnet projects the LR HSI into deep feature space and represents it as a set of feature cubes that are shared by different wavelet package sub-bands. 3D convolutional layers and non-linear activation layers are alternatively stacked in the embedding subnet. The embedding subnet extracts feature cubes from the LR HSI X ∈ R m×n×L , where m, n, and L are the number of rows, columns, and spectral bands, respectively. Both spectral and spatial information of HSI can be encoded by 3D convolution during the feature extraction, after several 3D convolutional layers, the LR HSI X could be represented by a serial of spectral-spatial feature cubes, which are expressed as ψ(X) ∈ R m×n×L×S , where S is the number of feature cubes, ψ : R m×n×L → R m×n×L×S denotes the function of embedding subnet. It is noted that zero padding is adopted in each convolutional layer to make the feature cubes the same size with the LR HSI.
Predicting Subnet
The embedding subnet is followed by a predicting subnet, which infers the wavelet package coefficients. There are multiple output branches in the predicting subnet, each of which corresponds to one wavelet package sub-band. The predicting subnet takes the feature cubes extracted by the embedding subnet as input, and each branch of the predicting subnet is trained to infer the wavelet coefficients at its sub-band. Similar to the embedding subnet, each branch in the predicting subnet is also stacked by 3D convolutional layers and non-linear activation layers with the zero padding strategy adopted, and the predicted wavelet coefficients have the same spatial size as the LR HSI. The desired HR HSI is obtained by applying inverse WPT to the predicted wavelet coefficients, so the upscaling factor of SR depends on the number of WPT levels. Specifically, suppose the number of WPT levels is l; there would be N_w = 4^l wavelet package sub-bands, and the number of output branches in the predicting subnet is also 4^l. Taking the shared feature cubes ψ(X) as input, the i-th branch ϕ_i predicts the i-th wavelet package sub-band as ϕ_i(ψ(X)) ∈ R^{m×n×L}, where ϕ_i : R^{m×n×L×S} → R^{m×n×L}, i = 1, 2, ..., N_w, denotes the function of the i-th branch. The output of MW-3D-CNN can be denoted as a set of wavelet package coefficients {ϕ_1(ψ(X)), ϕ_2(ψ(X)), ..., ϕ_i(ψ(X)), ..., ϕ_{N_w}(ψ(X))}, i = 1, 2, ..., N_w. In the training stage, the MW-3D-CNN learns the mapping between the LR HSI and the wavelet package coefficients of the latent HR HSI. In the testing stage, given the LR HSI, the MW-3D-CNN would infer the wavelet package coefficients at each sub-band. Applying inverse WPT to the predicted wavelet package coefficients, the HR HSI can be obtained as Ŷ = φ({ϕ_i(ψ(X))}_{i=1}^{N_w}), where φ denotes the inverse WPT, Ŷ ∈ R^{(r×m)×(r×n)×L} is the estimated HR HSI, and r = 2^l is the upscaling factor of SR. Different wavelet sub-bands share the common deep layers in the embedding subnet due to the inter wavelet sub-bands correlation. The embedding subnet learns the shared feature cubes and the predicting subnet optimizes with respect to each wavelet package sub-band. The embedding subnet connects different branches into a unified predicting subnet and allows them to be jointly optimized. Specifically, the errors in each wavelet package sub-band can be jointly back-propagated to the embedding subnet to learn the shared features, and the embedding subnet will refine different branches in the predicting subnet. Compared with training each branch independently, such joint training could make different branches facilitate each other and implicitly capture the correlation among different wavelet sub-bands.
Figure 3. The architecture of the proposed multi-scale wavelet (MW)-3D-CNN; the number and the size of convolutional kernels are denoted at each layer, and the embedding subnet and predicting subnet have three and four layers respectively.
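A minimal Keras sketch of this layout is given below; the layer depths (three and four) and the kernel counts (32 and 16) follow Figure 3 and the sensitivity analysis reported later, while the single-channel volumetric input, the final one-filter convolution in each branch, and the function name are our own assumptions rather than the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_mw_3d_cnn(m=16, n=16, L=16, wpt_levels=1):
    """Embedding subnet (3 Conv3D layers, 32 kernels) followed by a predicting
    subnet with 4**wpt_levels branches (each 4 Conv3D layers, 16 kernels)."""
    n_subbands = 4 ** wpt_levels
    lr_hsi = layers.Input(shape=(m, n, L, 1))   # LR HSI as a single-channel volume

    # Embedding subnet: shared spectral-spatial feature cubes
    x = lr_hsi
    for _ in range(3):
        x = layers.Conv3D(32, (3, 3, 3), padding='same', activation='relu')(x)

    # Predicting subnet: one branch per wavelet package sub-band
    outputs = []
    for _ in range(n_subbands):
        b = x
        for _ in range(3):
            b = layers.Conv3D(16, (3, 3, 3), padding='same', activation='relu')(b)
        # final layer outputs one cube of wavelet coefficients, same size as the LR HSI
        b = layers.Conv3D(1, (3, 3, 3), padding='same')(b)
        outputs.append(b)

    return Model(inputs=lr_hsi, outputs=outputs)

model = build_mw_3d_cnn()   # 4 output sub-bands, i.e., an upscaling factor of 2
```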
Our MW-3D-CNN focuses on predicting the wavelet package coefficients of HR HSI. Compared with predicting the HR HSI directly, we consider three advantages. Firstly, wavelet coefficients describe the detailed textural information in HSI. Training the MW-3D-CNN to predict the wavelet coefficients is beneficial for recovering the detailed structures in HSI [33,36]. Secondly, a network with sparse activations is easier to train [34,35]. Wavelet coefficients have sparsity characteristics in the high-frequency sub-bands, and predicting wavelet coefficients promotes the sparsity of the MW-3D-CNN, makes the training easier, and makes the trained network more robust. Finally, the MW-3D-CNN extracts features from the LR HSI directly. Compared with extracting features from the interpolated LR HSI, such as in [40,41], information in a larger receptive field can be exploited.
Training of MW-3D-CNN
All the convolutional kernels and bias in the embedding and predicting subnets are trained in an end-to-end manner. The L2 norm, which measures the mean square error, is often used in the loss function of conventional CNN based image SR methods. However, the output of our network is the wavelet coefficients, which have larger values in the low-frequency sub-band and smaller values in the high-frequency sub-bands, as shown in the histograms in Figure 4. The L2 norm loss penalizes heavily on larger errors and is less sensitive to smaller errors [37]. On the contrary, the L1 norm loss penalizes equally on both larger and smaller errors, and it is more suitable than the L2 norm loss for wavelet coefficients prediction. In addition, compared with the L2 norm loss, the L1 norm loss is helpful for recovering sharper image structures with faster convergence [38]. Therefore, we propose to train the MW-3D-CNN with the L1 norm loss; the loss function is written as

L = \sum_{j=1}^{N} \sum_{i=1}^{N_w} \lambda_i \, \| C_i^j - \hat{C}_i^j \|_1,

where C_i^j and \hat{C}_i^j = ϕ_i(ψ(X_j)) are the ground truth and the predicted wavelet package coefficients of the i-th sub-band respectively, j = 1, 2, ..., N, N is the number of training samples, i = 1, 2, ..., N_w, and N_w = 4^l is the number of sub-bands. X_j is the LR HSI of the j-th training sample. λ_i is the weight balancing the trade-off between different wavelet sub-bands, which is set to 1 for simplicity in the experiment. The loss function is optimized using the adaptive moment estimation (ADAM) method with standard back propagation. The trainable convolutional kernels and bias are updated according to the following rule [54]:

θ^{(t+1)} = θ^{(t)} - α \, \hat{m}^{(t)} / (\sqrt{\hat{v}^{(t)}} + ε),

where θ^{(t)} denotes the trainable parameters (i.e., convolutional kernels and bias) at the t-th iteration, α is the learning rate, and ε is a constant to stabilize the updating, which is set to 10^{-6}. \hat{m}^{(t)} and \hat{v}^{(t)} are the bias-corrected first and second moment estimates respectively:

m^{(t)} = β_1 m^{(t-1)} + (1 - β_1) \, ∂L/∂θ,   \hat{m}^{(t)} = m^{(t)} / (1 - β_1^t),
v^{(t)} = β_2 v^{(t-1)} + (1 - β_2) \, (∂L/∂θ)^2,   \hat{v}^{(t)} = v^{(t)} / (1 - β_2^t),

where ∂L/∂θ is the gradient of the loss with respect to the trainable parameters θ. β_1 and β_2 are two exponential decay rates for the moment estimation. In our implementation, the learning rate α is initially set to 0.001 and decreased by half for every 50 training epochs. The exponential decay rates β_1 and β_2 are set to 0.9 and 0.999 respectively. The batch size is set to 64. The number of training epochs is 200.
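A hedged TensorFlow sketch of this training setup is shown below, with the L1 objective summed over sub-bands and the ADAM settings quoted above; all names are illustrative, and the halving of the learning rate every 50 epochs is omitted for brevity.

```python
import tensorflow as tf

def l1_subband_loss(true_subbands, pred_subbands, weights=None):
    """Sum of L1 errors over all wavelet package sub-bands (lambda_i = 1 by default)."""
    loss = 0.0
    for i, (c_true, c_pred) in enumerate(zip(true_subbands, pred_subbands)):
        w = 1.0 if weights is None else weights[i]
        loss += w * tf.reduce_mean(tf.abs(c_true - c_pred))
    return loss

# ADAM with the hyper-parameters stated in the text
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001,
                                     beta_1=0.9, beta_2=0.999, epsilon=1e-6)

@tf.function
def train_step(model, lr_batch, true_subbands):
    with tf.GradientTape() as tape:
        pred_subbands = model(lr_batch, training=True)   # list of predicted sub-bands
        loss = l1_subband_loss(true_subbands, pred_subbands)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```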
Experimental Results
In this section, we compare the MW-3D-CNN with other state-of-the-art HSI SR methods on several simulated HSI datasets. In order to demonstrate the applicability of MW-3D-CNN, we also validate it on real spaceborne Hyperion HSI. Since there is no reference HSI for SR assessment in real data case, we use the no-reference HSI assessment method in [55] to evaluate the SR performance.
Experiment Setting
Three datasets were used in the experiment. The first one is the Reflective Optics System Imaging Spectrometer (ROSIS) dataset, which contains two images taken over Pavia University and Pavia Center with sizes of 610 × 340 and 1096 × 715, respectively. The spatial resolution is 1.3 m. After discarding the noisy bands, 100 bands remain in the spectral range of 430~860 nm. The second dataset was collected by the Headwall Hyperspec-VNIR-C imaging sensor over Chikusei, Japan, on July 29, 2014 [56]. The size is 2517 × 2335 with a spatial resolution of 2.5 m. There are 128 bands in the spectral range of 363~1018 nm. The third dataset is the 2018 IEEE GRSS Data Fusion Contest data (denoted as "grss_dfc_2018"), which was acquired by the National Center for Airborne Laser Mapping (NCALM) over Houston University, on February 16, 2017 [57]. The size of this data is 1202 × 4172. The spatial resolution is 1 m. It has 48 bands in the spectral range of 380~1050 nm.
The above data was treated as original HR HSI, the LR HSI was simulated via Gaussian down-sampling, which is a process of simulating LR HSI via applying a Gaussian filter to HR HSI and then down-sampling it in both vertical and horizontal directions. The Gaussian down-sampling was implemented using the "Hyperspectral and Multispectral Data Fusion Toolbox" [16]. For down-sampling by a factor of two, the Gaussian filter was of size 2 × 2 with zero mean and standard deviation 0.8493; for down-sampling by a factor of four, the Gaussian filter was of size 4 × 4 with zero mean and standard deviation 1.6986. All these parameters in down-sampling are suggested in [16,17].
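As an illustration of this simulation protocol, the following NumPy/SciPy sketch blurs each band with a small Gaussian kernel and then decimates it; the kernel construction and function names are ours, and the toolbox implementation in [16] may differ in detail.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size, sigma):
    """Square Gaussian kernel normalised to unit sum."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def simulate_lr_hsi(hr_hsi, factor=2, sigma=0.8493):
    """Blur each band with a Gaussian filter, then decimate in both directions."""
    kernel = gaussian_kernel(factor, sigma)
    lr_bands = []
    for b in range(hr_hsi.shape[2]):
        blurred = convolve(hr_hsi[:, :, b], kernel, mode='reflect')
        lr_bands.append(blurred[::factor, ::factor])
    return np.stack(lr_bands, axis=2)

hr = np.random.rand(64, 64, 16)
lr = simulate_lr_hsi(hr, factor=2)   # shape (32, 32, 16)
```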
We cropped three sub-images with rich textures from the original HSI as testing data, and the remainder was used as training data. About 100,000 LR-HR pairs were extracted as training samples to train the MW-3D-CNN. Each LR HSI sample was of size 16 × 16 × 16. For training the MW-3D-CNN by an upscaling factor of two, there were four branches in the predicting subnet, the output wavelet coefficients in each branch were of size 16 × 16 × 16, and the corresponding HR HSI sample was of size 32 × 32 × 16. For training the MW-3D-CNN by an upscaling factor of four, there were 16 branches in the predicting subnet, the output wavelet coefficients in each branch were also of size 16 × 16 × 16, and the corresponding HR HSI sample was of size 64 × 64 × 16. It is noted that there was no overlapping between the training and testing regions. The network parameters of MW-3D-CNN were set according to network parameters in Figure 3. Haar wavelet function was used in WPT.
Comparison with State-of-the-Art SR Methods
In this sub-section, we compare the proposed method with other state-of-the-art HSI SR methods. The spectral-spatial group sparse representation HSI SR method (denoted as SSG) [27], and two CNN based SR algorithms, i.e., SRCNN [40] and 3D-CNN [32], were used for comparison. As an often-used benchmark, bicubic interpolation was also compared. All the parameters of SSG, SRCNN, and 3D-CNN followed the default settings as described in [27,40], and [32]. The training samples and training epochs of SRCNN and 3D-CNN were the same as those of MW-3D-CNN, which guarantees the fairness of the comparison.
The SR performance was assessed using the peak signal-to-noise ratio (PSNR, dB), structural similarity index measurement (SSIM) [58], feature similarity index measurement (FSIM) [59], and spectral angle mean (SAM). We computed the PSNR, SSIM, and FSIM indices on each band and then calculated the mean values over all the spectral bands.
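For reference, a compact NumPy sketch of the band-wise PSNR average and the SAM index as described here (a simplified illustration, not the exact evaluation code used in the experiments):

```python
import numpy as np

def mean_psnr(ref, est, peak=1.0):
    """Average PSNR over spectral bands (images assumed scaled to [0, peak])."""
    psnrs = []
    for b in range(ref.shape[2]):
        mse = np.mean((ref[:, :, b] - est[:, :, b]) ** 2)
        psnrs.append(10.0 * np.log10(peak ** 2 / mse))
    return float(np.mean(psnrs))

def sam(ref, est, eps=1e-12):
    """Spectral angle mean (in degrees) between reference and estimated spectra."""
    r = ref.reshape(-1, ref.shape[2])
    e = est.reshape(-1, est.shape[2])
    cos = np.sum(r * e, axis=1) / (np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + eps)
    angles = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return float(np.mean(angles))
```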
The assessment indices of different SR methods are given in Tables 1 and 2. The scores of our method are better than the compared methods in most cases. The 3D-CNN in [32] could extract spectral-spatial features from HSI and jointly reconstruct different spectral bands, so it could lead to less spectral distortion than the SRCNN, as shown in Tables 1 and 2. Both 3D-CNN and MW-3D-CNN are in the framework of 3D CNN, and the MW-3D-CNN predicts the wavelet coefficients of the HR HSI, rather than directly predicting the HR HSI. Focusing on the wavelet coefficients makes the MW-3D-CNN more effective in preserving structures in HR HSI, so the results of MW-3D-CNN have higher PSNR values. In order to test the robustness of MW-3D-CNN over a larger upscaling factor, we also implemented the SR by a factor of four and reported the indices in Table 2. It can be found that the MW-3D-CNN can also achieve competitive results in most cases by an upscaling factor of four. In Figure 5, we plot the PSNR indices of different SR methods on each band. It is clear that the proposed method outperforms other methods on most spectral bands.
In Figures 9 and 11, we also give the residual maps of the SR results, in which the reconstruction error at each pixel can be reflected. In Figure 6, it is clear that the result of MW-3D-CNN is closer to the reference image, and the results of other compared methods are much brighter than the original HR image, which means that the spectral distortion is heavier. We also display some small areas by enlarging them to highlight the details of the SR results. In Figures 6 and 10, both the SSG and SRCNN results suffer from artifacts with stripe-like patterns. By comparing the details in Figure 10, it can be found that our MW-3D-CNN SR results are sharper than the 3D-CNN results.
In the residual maps, it can be observed that all the SR results contain errors in the edges and details. Compared with other methods, our MW-3D-CNN method generates fewer errors. For example, in Figure 11, the error values in the MW-3D-CNN residual map are much sparser, which also demonstrates that predicting the wavelet coefficients is helpful for recovering the edges and detailed structures in the HR HSI.
We also present a running time comparison of different SR methods in Tables 3 and 4. Most of the SR methods could infer the HR HSI quickly. In the SSG method, dictionary learning and sparse coding are time-consuming, so SSG takes the longest time to reconstruct the HR HSI. The running time of MW-3D-CNN is comparable to that of 3D-CNN, as both of them could super-resolve HSI within 2 s. The running time comparison in Tables 3 and 4 indicates that our proposed method could achieve competitive performance in both SR accuracy and running time.
Application on Real Spaceborne HSI
In this sub-section, we also apply the MW-3D-CNN to real spaceborne HSI SR to demonstrate its applicability. Earth Observing-1 (EO-1)/Hyperion HSI was used as testing data. The spatial resolution of Hyperion HSI is 30 m. There are 242 spectral bands in the spectral range of 400~2500 nm. The Hyperion HSI suffers from noise, and after removing the noisy bands and water absorption bands, 83 bands remain. The Hyperion HSI in this experiment was taken over Lafayette, LA, USA in October 2015. We cropped a sub-image with size 341 × 365 from it as the study area.
As there is no HR HSI in real application, we used the Wald protocol to train the networks [24]. The original 30 m HSI was regarded as HR HSI, and LR HSI with resolution 60 m was simulated via down-sampling. The LR-HR HSI pairs were used to train the MW-3D-CNN that could super-resolve HSI by a factor of two. The trained MW-3D-CNN was then applied to the 30 m Hyperion HSI, and HR HSI with 15 m resolution could be obtained. The super-resolved Hyperion HSIs are shown in Figure 12. In Figures 13 and 14, we show, in zoom, the results of the compared methods. The resolution of Hyperion HSI is enhanced significantly through SR. Compared with other methods, the proposed MW-3D-CNN generates HSI with sharper edges and clearer structures, as indicated by the area highlighted in the dashed boxes.
Since there is no reference image for assessment, the traditional evaluation indices such as PSNR cannot be used here. We used the no-reference HSI quality assessment method in [55], which measures the deviation of reconstructed HSI from pristine HSI, to evaluate the super-resolved Hyperion HSIs. The original Hyperion images were first screened for noisy bands and water absorption bands. The remaining bands were used as training data, quality-sensitive features were extracted from the training data and a benchmark multivariate Gaussian model was learned for the no-reference HSI assessment. The no-reference HSI quality scores after SR are listed in Table 5. It shows that by an upscaling factor of two where the SR image is at 15 m resolution, the proposed MW-3D-CNN performs better than other methods with a lower score, which means that the SR result deviates less from the pristine HSI than other SR results.
Sensitivity Analysis on Network Parameters
It is theoretically hard to estimate the optimal network parameters of a deep learning architecture. We empirically tuned the network parameters and presented them in Figure 3. In this sub-section, we give the sensitivity analysis of MW-3D-CNN over the network parameters. We vary one network parameter and fix others, then observe the SR performance.
The sensitivity analysis over the size of the 3D convolutional kernel is given in Table 6. An appropriately large convolutional kernel size is necessary for collecting spatial and spectral information for HSI SR. It is clear that the best performance is achieved with a convolutional kernel size of 3 × 3 × 3. The performance decreases when the convolutional kernel size is set to 5 × 5 × 5. More spatial and spectral information can be exploited by a larger convolutional kernel, but this increases the complexity of the network and the number of parameters that need to be trained. This may explain why the performance drops with the increase of kernel size.
The number of 3D convolutional kernels determines the number of feature cubes extracted by each layer. In our MW-3D-CNN, we set 32 convolutional kernels for each layer of the embedding subnet and 16 convolutional kernels for each layer of the predicting subnet, which leads to the best performance in most cases, as shown in Table 7. With the increase of convolutional kernel number, more feature cubes could be extracted, but the complexity of the network would be increased.
Usually, the deeper the network, the better the performance. With deeper architecture, the network would have larger capacity. In Table 8, it is shown that the best performance can be obtained in most cases when the number of convolutional layers in the embedding subnet and predicting subnet is set to three and four.
The Rationality Analysis of L1 Norm Loss
In order to verify the rationality of the L1 norm loss, we trained the MW-3D-CNN using the L2 norm loss written as

L_2 = \sum_{j=1}^{N} \sum_{i=1}^{N_w} \lambda_i \, \| C_i^j - \hat{C}_i^j \|_2^2,

and then compared it with the one trained using the L1 norm loss defined above. The comparison is presented in Table 9. The L1 norm loss could mitigate the unbalance in penalizing the low- and high-frequency wavelet package sub-bands caused by the L2 norm loss, so the MW-3D-CNN trained with the L1 norm loss performs better than the one trained with the L2 norm loss on the testing data, as shown in Table 9.
In the training stage, the errors of the i-th wavelet package sub-band predicted by the MW-3D-CNN can be expressed as (C_i^j − Ĉ_i^j), where j = 1, 2, ..., N, and N is the number of training samples. We present the histograms of the errors after 200 training epochs in Figure 15. It is clear that the errors of different wavelet package sub-bands have similar statistics, as most of the errors are close to zero and tend to follow Laplacian distributions. Compared with the L2 norm, the L1 norm is more suitable for penalizing the Laplacian-like errors, which demonstrates the rationality of the L1 norm loss as well.
Figure 15. The histograms of errors in different wavelet sub-bands after 200 training epochs. The training data is extracted from Pavia University, the MW-3D-CNN is trained with the L1 norm loss, and the upscaling factor is two.
The Rationality Analysis of 3D Convolution
In this sub-section, in order to analyze the advantage of 3D convolution over 2D convolution for HSI SR, we replaced all the 3D convolutional layers in the MW-3D-CNN with 2D convolutional layers. In this case, it reduces to the same architecture as the wavelet-SRNet method in [36]. Then we compared the MW-3D-CNN with the wavelet-SRNet. The loss function of wavelet-SRNet was originally designed with the L2 norm in [36]. Here, we also trained the wavelet-SRNet with the L1 norm as the loss function, and the corresponding results are denoted as wavelet-SRNet-L2 and wavelet-SRNet-L1. The comparison between the MW-3D-CNN and the wavelet-SRNet is presented in Table 10.
Table 10 shows that the MW-3D-CNN performs better than the wavelet-SRNet on the three datasets. The MW-3D-CNN is based on 3D convolutional layers, which naturally exploit the spectral correlation and reduce the spectral distortion in HSI SR. We can also see that when the L1 norm is used as the loss function for the wavelet-SRNet, the SR performance is slightly better than with the L2 norm, which further demonstrates the effectiveness of the L1 norm.
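The shape bookkeeping below illustrates the difference (a hedged Keras sketch, not the published implementation; the 103-band input is only meant to evoke a Pavia-University-like patch). A 3D convolution slides its kernel along the spectral axis as well, so local inter-band structure is filtered explicitly, whereas a 2D convolution collapses all bands into input channels in a single step:

```python
import tensorflow as tf

# A hyperspectral patch: 32x32 pixels, 103 spectral bands, one input feature channel.
hsi = tf.random.normal([1, 32, 32, 103, 1])

# 3D convolution filters along both spatial axes AND the spectral axis,
# so neighbouring bands are processed jointly (spectral correlation preserved).
conv3d = tf.keras.layers.Conv3D(filters=16, kernel_size=(3, 3, 3), padding="same")
print(conv3d(hsi).shape)     # (1, 32, 32, 103, 16)

# 2D convolution treats the bands as independent input channels and mixes them
# all at once, which is how the wavelet-SRNet-style baseline operates.
hsi_2d = tf.reshape(hsi, [1, 32, 32, 103])
conv2d = tf.keras.layers.Conv2D(filters=16, kernel_size=(3, 3), padding="same")
print(conv2d(hsi_2d).shape)  # (1, 32, 32, 16)
```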
Robustness over Wavelet Functions
In the experiment, we used the Haar wavelet function in the WPT. In this sub-section, we also run the MW-3D-CNN with two other wavelet functions, the Daubechies-2 and biorthogonal wavelet functions, to evaluate the robustness of the MW-3D-CNN with respect to the wavelet function. Table 11 shows that the SR performance obtained with the different wavelet functions is very close, changing only slightly from one wavelet function to another, which demonstrates the robustness of the MW-3D-CNN over the wavelet functions.

The MW-3D-CNN is implemented in TensorFlow [60] with an NVIDIA GTX 1080Ti graphics card. It takes about 7 h and 20 h to train the MW-3D-CNN with upscaling factors of two and four, respectively. In the testing stage, inferring an HR HSI takes less than two seconds; it is fast because only feed-forward operations are involved.
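For readers who wish to reproduce the wavelet-function comparison above, swapping the mother wavelet amounts to a one-argument change in the wavelet package transform. The sketch below uses PyWavelets on a single band purely for illustration; the exact biorthogonal member used in Table 11 is not specified, so 'bior2.2' is an assumption:

```python
import numpy as np
import pywt

band = np.random.rand(64, 64)  # one spectral band of an LR HSI patch

# One-level 2D wavelet package transform; only the wavelet name changes
# between the robustness runs.
for wavelet in ("haar", "db2", "bior2.2"):
    wp = pywt.WaveletPacket2D(data=band, wavelet=wavelet,
                              mode="periodization", maxlevel=1)
    subbands = {node.path: node.data.shape for node in wp.get_level(1)}
    print(wavelet, subbands)   # four sub-bands: 'a', 'h', 'v', 'd'
```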
Conclusions
In this study, a MW-3D-CNN for HSI SR was proposed. Instead of predicting the HR HSI directly, we predicted the wavelet package coefficients of the latent HR HSI and then reconstructed the HR HSI via the inverse WPT. The MW-3D-CNN consists of an embedding subnet and a predicting subnet, both of which are built on 3D convolutional layers. The embedding subnet projects the input LR HSI into feature space and represents it with a set of feature cubes. These feature cubes are then fed to the predicting subnet, which consists of several output branches. Each branch corresponds to a wavelet package sub-band and predicts the wavelet package coefficients of that sub-band. The HR HSI can then be reconstructed via the inverse WPT. The experimental results on both simulated and real spaceborne HSI demonstrate that the proposed MW-3D-CNN achieves competitive performance. The MW-3D-CNN learns knowledge from external training data for HSI SR. HSI has its own prior information in both the spectral and spatial domains, such as the structural self-similarity [26] and the low-rank prior [61][62][63]. Exploiting this prior information helps regularize the ill-posed HSI SR problem. How to combine such internal priors with externally learned knowledge in deep learning will need to be examined in future work. Furthermore, integrating an adversarial loss [64] in training the network is another direction to boost the SR performance.
|
2019-07-26T11:17:29.531Z
|
2019-06-30T00:00:00.000
|
{
"year": 2019,
"sha1": "37b2ac1e9cea41bfa617406dad55693a7c8526fc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/11/13/1557/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "37772d99bdab9b29f7873c96e15a148a00cfb05f",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Geology"
]
}
|
62886482
|
pes2o/s2orc
|
v3-fos-license
|
An isocratic liquid chromatography-electrospray ionization tandem mass spectrometric determination of varenicline in human plasma and dosage form
A simple, sensitive and accurate liquid chromatography tandem mass spectrometric (LC/MS/MS) method has been developed and validated for the determination of varenicline (VRC) in human plasma and pharmaceutical tablets as a tool for therapeutic drug monitoring. VRC and the internal standard (paracetamol, IS) were extracted by a liquid-liquid extraction technique. Separation was achieved on a C18 column (150 mm × 4.6 mm, 5 μm, maintained at 25°C) in isocratic mode at a flow rate of 0.7 ml/min, using a mobile phase consisting of a mixture of 5 mM ammonium formate, pH 7.5 (A) and (acetonitrile: methanol, 50:50, v/v) (B) in a ratio of A:B (15:85, v/v), with a run time of 10 min. The analytes were monitored by electrospray ionization in positive ion multiple reaction monitoring (MRM) mode. The MRM mode and chromatographic conditions were optimized to eliminate interference peaks and increase sensitivity. The method was linear (r² = 0.9998) over the concentration range of 20.0 to 500.0 ng/ml, with a lower limit of detection of 6.0 ng/ml. The method was statistically validated for linearity, accuracy, precision and selectivity following Food and Drug Administration (FDA) guidelines. The mean extraction recovery of VRC from human plasma was 87.06 ± 2.47%. The reproducibility of the method was reliable, with intra- and inter-day precision < 5% and an average accuracy of 103.54%. The validated method was successfully applied to quantify VRC in human plasma as well as in bulk and dosage form in a quality control laboratory.
The quality of the pharmaceutical product of VRC, in terms of purity and stability of the active substance and/or finished product, is vital for the effective and safe delivery of its therapeutic value to smokers. A detailed understanding of the correlation between drug levels and drug action is an important aspect of the routine use of the drug. The accurate quantification of agents in biological matrices such as blood, serum, urine and tissue samples is the cornerstone of therapeutic drug monitoring. Therefore, a specific, reproducible and accurate method for the quantification of VRC is necessary. Additionally, examining matrix effects represents an important issue in liquid chromatography tandem mass spectrometry (LC-MS/MS), particularly when dealing with biological matrices such as biological fluids. These phenomena can be reduced by efficient sample preparation (Souverain et al., 2004) and adequate chromatographic separation, with the elution of the analytes outside the matrix effect time window generally observed at the beginning of the chromatogram (Marchi et al., 2007).
However, in quantitative analysis, these conditions might be insufficient to reduce interferences, and other approaches should be combined to compensate for residual matrix effects; the use of the multiple reaction monitoring (MRM) mode can be one of these approaches (Chambers et al., 2007; Addona et al., 2009). Two LC-MS/MS methods have been published for the quantification of varenicline in human plasma. The first was reported by Obach et al. (2006) and was subsequently applied to the study of varenicline pharmacokinetics (Faessel et al., 2006; Faessel et al., 2007). The second method was developed by Dobrinas et al. (2011) for the determination of nicotine, cotinine, trans-3'-hydroxycotinine and varenicline, and was used in a clinical study on smoking cessation to confirm abstinence from smoking and to detect overdose.
The present study describes, for the first time, the development and validation of a highly efficient, specific and sensitive isocratic LC-MS/MS method for the quantitative determination of VRC in both human plasma and dosage form.
Chemicals and reagents
Varenicline tartrate reference standard (purity, 99.5%) was purchased from Weihua Pharma Co. Ltd. (Zhejiang, China), paracetamol reference standard (purity, 99.5%) was purchased from Sigma-Aldrich (Buchs, Switzerland), and Champix® 0.5 mg and 1 mg tablets (Pfizer Inc., New York, USA) were procured from a local pharmacy. High-performance liquid chromatography (HPLC)-grade solvents and reagent-grade ammonium formate were purchased from Merck (Darmstadt, Germany). De-ionized water was purified using a cartridge system; ultra-pure water (18 MΩ·cm) was obtained from a Milli-Q Plus purification system (Millipore, Milford, USA). Human blood was obtained from King Khalid University Hospital (Riyadh, KSA) and was kept frozen until use, when it was gently thawed.
Instrumentation and chromatographic conditions
Chromatographic separation was performed on an Agilent 1200 series system consisting of a G1311A binary pump, G1322A degasser, G1367B HIP-ALS autosampler, G1316 thermostatted column compartment and an Agilent 6410 triple quadrupole LC/MS (Agilent Technologies, Palo Alto, CA, USA). Binary chromatography was carried out on an Agilent Eclipse Plus C18 analytical column (150 mm × 4.6 mm, 5 µm) (Agilent Technologies, Palo Alto, CA, USA). The column temperature was kept constant at 25 ± 2°C. The most suitable chromatographic conditions were achieved at a flow rate of 0.7 ml/min with a mobile phase consisting of A: 5 mM ammonium formate buffer with the apparent pH adjusted to 7.5 using formic acid, and B: acetonitrile and methanol (50:50, v/v), mixed in a ratio of A:B (15:85, v/v). The sample injection volume was 10 µl. Detection was performed on a triple quadrupole MS detector (6410 QQQ) operated with an ESI interface in the positive ionization mode. Nitrogen was used as desolvation gas at a flow rate of 12 L/min and as collision gas at a pressure of 30 psi. The source temperature was set at 350°C, the capillary voltage at 4 kV, and the dwell time for each ion was 200 ms. Quantification was achieved using multiple reaction monitoring (MRM) of the transitions m/z 212→183 and 212→169 for VRC, and m/z 152→110 and 152→93 for paracetamol as the internal standard (IS). These transitions were previously reported in other publications for the detection of VRC (Obach et al., 2006; Tan et al., 2010). The fragmentor voltage was set to 130 and 75 V, with collision energies of 21 and 13 V, for VRC and paracetamol, respectively. Mass Hunter software (Agilent Technologies, Palo Alto, CA, USA) was used to control the instruments and for data acquisition.
Preparation of standard solutions
Varenicline standard stock solution was prepared in de-ionized distilled water to give a final concentration of 1 mg/ml. The working standard solution was prepared by diluting 1 ml of the stock solution into a 10 ml measuring flask with de-ionized water to give a 100 µg/ml concentration. The internal standard (IS) paracetamol stock solution was prepared in methanol to produce a concentration of 1.0 mg/ml. One ml of the IS stock solution was diluted into a 10 ml measuring flask with methanol to produce a 100 µg/ml working solution, and an appropriate amount was then diluted in methanol to give a working stock solution of 480 ng/ml. All working solutions were stored at -20°C until required for analysis.
Sample preparation and construction of the calibration curve
The quality control samples were prepared by spiking blank human plasma with varenicline to yield final concentrations of 50 (low quality control, LQC), 200 (medium quality control, MQC), and 400 (high quality control, HQC) ng/ml. To an aliquot of plasma (300 µl), 1 N sodium hydroxide solution (500 µl) was added. The alkalinized samples were subjected to liquid-liquid extraction, in which paracetamol internal standard (15 µl, 480 ng/ml) was added together with diethyl ether (3 ml). The organic phase was separated and evaporated to dryness under nitrogen.
The residue was reconstituted with the mobile phase. Similarly, blank and blank-with-IS samples were also prepared. Ten microliters (10 µl) of each calibration sample was injected into the LC-MS system. Drug-free plasma was processed with a similar procedure using de-ionized water instead of VRC. Blank plasma was then tested to ascertain the absence of any endogenous interference at the retention times of VRC and the internal standard. An eight-point calibration curve (20, 50, 100, 150, 200, 300, 400 and 500 ng/ml) was constructed by plotting the peak area ratio of VRC to paracetamol (IS) versus the VRC concentration (x). Analysis of calibration samples at each concentration was performed in triplicate. Slope, intercept, and r² values were calculated as regression parameters by linear regression. The linear regression equation was used to calculate the concentrations of VRC in spiked plasma based on their peak area ratios.
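The back-calculation step can be summarized in a few lines. The following sketch uses made-up peak-area ratios (not the published data) to show how the slope, intercept and r² are obtained by least-squares regression and how an unknown plasma concentration is then read off the curve:

```python
import numpy as np

# Eight-point calibration: nominal VRC concentrations (ng/ml) and the
# corresponding peak-area ratios (VRC / paracetamol). The ratios below are
# illustrative placeholders only.
conc  = np.array([20, 50, 100, 150, 200, 300, 400, 500], dtype=float)
ratio = np.array([0.031, 0.076, 0.152, 0.228, 0.305, 0.455, 0.610, 0.760])

# Least-squares regression of peak-area ratio (y) on concentration (x)
slope, intercept = np.polyfit(conc, ratio, 1)
r2 = np.corrcoef(conc, ratio)[0, 1] ** 2

# Back-calculate an unknown plasma sample from its measured ratio
unknown_ratio = 0.32
unknown_conc = (unknown_ratio - intercept) / slope
print(f"slope={slope:.5f}, intercept={intercept:.5f}, r2={r2:.4f}, "
      f"back-calculated conc={unknown_conc:.1f} ng/ml")
```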
Preparation of tablet solutions
Twenty tablets were weighed and the average weight was calculated. The tablets were crushed to a fine powder, and a quantity of the powdered tablets, equivalent to 10 mg of VRC, was transferred to a 50 ml volumetric flask. A 25 ml portion of methanol was added, the contents of the flask were shaken for 10 min with a mechanical shaker, and the volume was made up to 50 ml with methanol. This solution (0.2 mg/ml) was diluted to give a concentration of 10 µg/ml, filtered through a 0.45 µm membrane filter, and the filtrate was subjected to analysis by the LC-MS/MS method.
Method validation
The method validation was based on the recommendations of International Conference on Harmonisation (ICH) (ICH Guidance for Industry, 2000) and on the guidelines for analytical procedures and methods validation by the Food and Drug Administration (FDA) (FDA, 2000).
Selectivity
The selectivity of an analytical method may be defined as the ability to unequivocally determine the analyte in the presence of additional components such as impurities, degradation products and matrix components. Method selectivity was tested by analyzing 10 blank plasma batches from different sources for interfering peaks. Additionally, selectivity was checked by analyzing 10 placebo tablet samples and comparing them with the prepared tablet solutions. Possible carryover effects were reduced by increasing the run time after elution of the analytes.
Linearity and sensitivity
Using the aforementioned optimum chromatographic conditions, three independent calibration curves were constructed by correlating the calculated peak area ratio of VRC to the internal standard (paracetamol) with the nominal concentrations of VRC. Calibration plots for VRC in plasma were prepared daily at eight concentration points; each concentration was injected in triplicate. Regression analysis of the results was carried out using the least-squares method. The method was extensively validated as per the United States Food and Drug Administration (FDA, 2000) guidelines and ICH (ICH Guidance for Industry, 2000).
Precision and accuracy
Precision was measured in accordance with the ICH recommendations (ICH Guidance for Industry, 2000). Intra-day accuracy and precision were determined in six replicates by analyzing QC samples at low, medium and high concentrations (50, 200 and 400 ng/ml) across the linear range. Inter-day accuracy and precision were evaluated on three consecutive days. Precision was expressed as the relative standard deviation of the determined concentrations. Accuracy was expressed as percent error: Error % = [(mean measured concentration − nominal concentration) / nominal concentration] × 100.
Precision of less than 5.3% and accuracy within 97.8 to 104.6% were accepted.
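For clarity, the two acceptance metrics can be computed as follows; the replicate values in this sketch are illustrative only, not the measured data:

```python
import numpy as np

def rsd_percent(values):
    """Relative standard deviation (precision), in percent."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

def error_percent(values, nominal):
    """Accuracy expressed as percent error of the mean vs. the nominal level."""
    return 100.0 * (np.mean(values) - nominal) / nominal

# Hypothetical replicate measurements of the 200 ng/ml (MQC) sample
mqc = [204.1, 198.7, 206.3, 201.5, 199.8, 203.2]
print(rsd_percent(mqc))        # precision, should stay below ~5%
print(error_percent(mqc, 200)) # accuracy, should stay within roughly +/- 5%
```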
Limit of detection and lower limit of quantification
The limit of detection (LOD) and limit of quantitation (LOQ) were calculated based on the signal-to-noise ratio (ICH Guidance for Industry, 2000). The standard deviation (SD) of the response was estimated at a concentration of zero (i.e., from the intercept). LOD and LOQ were then defined as 3SD and 10SD, respectively.
Robustness and ruggedness
In order to measure the extent of the method's robustness, the most critical parameters were varied one at a time while keeping the other parameters unchanged, and the chromatographic profile was observed and recorded in parallel. The chromatographic parameters were varied within the range of 1 to 10% of the optimum recommended conditions. The studied parameters were the composition of the mobile phase, pH, flow rate, and column temperature. Ruggedness of the method was determined by using two different analysts and different instruments.
Recovery
The percentage recovery of VRC in human plasma and pharmaceutical tablets was assessed as the ratio of the mean peak area of the VRC spiked before extraction to the mean peak area of the same concentration spiked post-extraction in the same matrix multiplied by 100.
Stability studies
Stability experiments were performed with low, medium and high QC samples to evaluate varenicline stability under different conditions. Experiments were performed in triplicate to determine the stability of bench-top (6 h) and autosampler (24 h) samples at room temperature (25 ± 2°C).
Method development
Mass spectrometric conditions were optimized so as to achieve the maximum stable response of the parent ions and the major product ions of the analytes. Multiple reaction monitoring (MRM) afforded by MS/MS had a greater advantage in reducing interference and enhancing sensitivity over selected ion monitoring (SIM). ESI was operated in positive ion mode for the LC-MS/MS analysis to provide optimum sensitivity and selectivity. The mass spectrum of VRC showed a protonated molecular ion ([M+H]+) at m/z 212. Two major fragments were observed at m/z 169 and 183, which were selected for subsequent monitoring in the third quadrupole. The mass spectrum of the IS, paracetamol, showed a protonated molecular ion ([M+H]+) at m/z 152. Two major fragments were observed at m/z 110 and 93 (Figure 2).
The chromatographic conditions, especially the composition of the mobile phase, were optimized through several trials to achieve good resolution, sensitivity and symmetric peak shapes for varenicline and the IS. Different percentages of acetonitrile and methanol solution containing ammonium formate buffer, at different pH values adjusted using formic acid, were tested. The presence of formic acid in the mobile phase can aid in the ionization of the analytes, enhance the ion response, and modify the peak shape. Finally, a mixture of ammonium formate, pH 7.5 (A) and (acetonitrile: methanol, 50:50, v/v) (B) in a ratio of A:B (15:85, v/v) was adopted as the mobile phase because of its better separation, higher sensitivity and more stable MS signal. Varenicline and the IS were detected at retention times of 7.7 and 5.4 min, respectively, using the optimized LC-MS/MS conditions, without interference from endogenous compounds.
The matrix effects were also evaluated by comparing the peak area ratios of varenicline from samples spiked after extraction (the blank plasma samples were obtained from six different sources) to those obtained for the standards in the mobile phase at equivalent concentrations. The ratio from low to high dose levels was 88.17 to 87.05%. These results indicate that the matrix effect should not have a significant impact on assay performance. Choosing an appropriate IS is important for achieving high accuracy and for dealing with sample matrix effects when LC-MS/MS is used for the assay. Paracetamol was selected as the IS because of its chromatographic behavior, which is similar to that of varenicline. Varenicline was found to be stable on the bench top for 6 h and in the autosampler for 24 h at room temperature.
Under the optimal LC conditions, VRC eluted at 7.7 min and the IS at 5.4 min, with a total chromatographic run time within 10 min. Carryover was not evident in either blank matrices or the zero-level standard (blank with IS). A representative total ion chromatogram of VRC and the IS in multiple reaction monitoring (MRM) mode is shown in Figure 3.
In comparison with the reported methods (Obach et al., 2006; Dobrinas et al., 2011), our proposed method is highly efficient, more specific and highly sensitive for the quantitative determination of VRC in both human plasma and dosage form. Both published methods used gradient elution. In the first method (Obach et al., 2006), the column was washed and re-equilibrated after each injection, and the method was not fully validated. In the second method (Dobrinas et al., 2011), UPLC was used in gradient mode, followed by reconditioning with 95% of solution B (acetonitrile with 0.1% formic acid) for 8.0 min, as required for HPLC columns.
The present study therefore describes, for the first time, the development and validation of a highly efficient, specific and sensitive isocratic LC-MS/MS method for the quantitative determination of VRC in both human plasma and dosage form. The method may also be useful for the therapeutic drug monitoring of VRC in plasma as well as for pharmacokinetic studies of VRC.
Linearity, sensitivity and selectivity
The method was extensively validated according to the FDA and ICH guidelines and is rugged and adequately sensitive for the routine analysis of subject samples. The linear regression analysis of the results was carried out using the least-squares method. The relative standard deviation values of each concentration point (triplicates) did not exceed 5.13%. The results revealed a good linear calibration fit in the range of 20 to 500 ng/ml, with a correlation coefficient (r) ≥ 0.998. A typical calibration curve had the regression equation y = 1.5209x − 0.0006 (r² = 0.9998). The high r² value was indicative of the good linearity, and the low values of the standard deviations of the intercept and the slope were indicative of the validity of the calibration points used for constructing the calibration curve. The method is selective, as no interference was observed in drug-free plasma and placebo tablet samples at the retention time of VRC. Additionally, no carry-over effect was observed in our system. Varenicline and the IS were well separated under the HPLC conditions applied, with retention times of 7.7 and 5.4 min, respectively. No interferences were observed from drug-free human plasma or from excipients commonly co-formulated with the drug, and no peaks were detected at the retention times of varenicline and the internal standard paracetamol (Figure 2).
Limit of detection and lower limit of quantification
The limit of detection (LOD) and limit of quantitation (LOQ) were calculated based on the signal-to-noise ratio (ICH Guidance for Industry, 2000). The standard deviation (SD) of the response was estimated at a concentration of zero (i.e., from the intercept). LOD was then defined as 3SD and LOQ as 10SD. The LOD and LOQ values were 6.0 and 20.0 ng/ml, respectively.
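For concreteness, the reported values are mutually consistent with a response standard deviation equivalent to about 2.0 ng/ml at zero concentration (an inference from the stated definitions rather than a figure quoted in the text): LOD = 3 × SD = 3 × 2.0 = 6.0 ng/ml and LOQ = 10 × SD = 10 × 2.0 = 20.0 ng/ml.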
Precision and accuracy
The results of intra-day and inter-day accuracy and precision are presented in Table 1. The intra- and inter-day precisions were less than 2.32 and 4.1%, respectively. Similarly, the average intra-day and inter-day accuracies were 103.7 and 103.38%, respectively. Moreover, accuracy and precision were determined by a recovery study of known amounts (20 to 500 ng/ml) of VRC standard added to a placebo matrix for tablets. The samples were analyzed (six replicates were injected) by one analyst, and the added amounts were calculated from a calibration curve. The accuracy values ranged from 97.28 to 101.65% and the precision values ranged from 0.54 to 2.39% (Table 2). These results indicate the acceptable accuracy and precision of the method (ICH Guidance for Industry, 2000).
Robustness and Ruggedness
In order to measure the extent of robustness, the most critical parameters were varied one at a time while keeping the other parameters unchanged, and the chromatographic profile was observed and recorded in parallel. The chromatographic parameters were varied within the range of 1 to 10% of the optimum recommended conditions. The studied parameters were the composition of the mobile phase, pH, flow rate, and column temperature. The results (relative standard deviation (RSD) of 0.3 to 0.4%) indicated that small changes in the conditions did not significantly affect the determination of VRC. Ruggedness of the method was determined by using mobile phase components from two different manufacturers, two different analysts, and two different instruments. There was no significant change in the retention time of VRC (RSD of 0.26 to 0.39%), indicating the ruggedness of the method.
Application to pharmaceutical formulations and in plasma
The accuracy of the back-calculated concentrations of the calibration samples for varenicline was evaluated by recovery studies using the standard addition method. The obtained recovery values were 97.28 to 101.65% and the RSD was 0.5 to 2.39% (Table 2). The method was thus proven to be highly accurate. The results obtained for the analysis of VRC in each formulation by the proposed method are given in Table 3. The recovery of varenicline was in the range of ~98.6 ± 0.7 to 101.98 ± 3.18%. Recovery of VRC was also determined by analysis of plasma spiked with standard VRC under the optimum conditions. As shown in Table 4, the average percentage recovery was 87.06% over the linearity range of 20.0 to 500.0 ng/ml of VRC.
Conclusion
The optimized LC/MS/MS method was validated for measuring varenicline in human plasma and pharmaceutical formulations. Good linearity was observed from 20 to 500 ng/ml. The assay has a simple, satisfactory extraction procedure for sample preparation and a relatively short run time of 10 min. The validated method described here, utilizing an isocratic HPLC separation and positive ionization tandem MS detection, is rapid, robust, highly selective, and sufficiently sensitive. The method may be useful for the therapeutic drug monitoring of VRC in plasma as well as for pharmacokinetic studies of VRC.
Table 2. Accuracy and precision data of varenicline.
Table 4. Varenicline recovery from spiked plasma.
|
2018-12-30T14:14:14.948Z
|
2013-05-29T00:00:00.000
|
{
"year": 2013,
"sha1": "c1bd3da4d72168f8d4d1814a21c68f3c8e0b4f9d",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJPP/article-full-text-pdf/2E73C0E35885.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c1bd3da4d72168f8d4d1814a21c68f3c8e0b4f9d",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
210041735
|
pes2o/s2orc
|
v3-fos-license
|
Sensor-as-a-Service: Convergence of Sensor Analytic Point Solutions (SNAPS) and Pay-A-Penny-Per-Use (PAPPU) Paradigm as a Catalyst for Democratization of Healthcare in Underserved Communities
In this manuscript, we discuss relevant socioeconomic factors for developing and implementing sensor analytic point solutions (SNAPS) as point-of-care tools to serve impoverished communities. The distinct economic, environmental, cultural, and ethical paradigms that affect economically disadvantaged users add complexity to the process of technology development and deployment beyond the science and engineering issues. We begin by contextualizing the environmental burden of disease in select low-income regions around the world, including environmental hazards at work, home, and the broader community environment, where SNAPS may be helpful in the prevention and mitigation of human exposure to harmful biological vectors and chemical agents. We offer examples of SNAPS designed for economically disadvantaged users, specifically for supporting decision-making in cases of tuberculosis (TB) infection and mercury exposure. We follow up by discussing the economic challenges that are involved in the phased implementation of diagnostic tools in low-income markets and describe a micropayment-based systems-as-a-service approach (pay-a-penny-per-use, PAPPU), which may be catalytic for the adoption of low-end, low-margin, low-research-and-development SNAPS. Finally, we provide some insights into the social and ethical considerations for the assimilation of SNAPS to improve health outcomes in marginalized communities.
Examples of SNAPS-ART
Near real-time qualitative decisions are often key for rapid response. SNAPS are tools that use sensor data to provide a response at the point of use with minimal analytics. If two or more factors must be considered by the human-in-the-loop to make a decision, artificial reasoning tools (ARTs) are implemented. ARTs form a data fusion layer that combines sensor data and displays suggestions or information on the user's mobile device. In principle, SNAPS are designed to offer "point solutions," which implies a rapid binary output (yes/no) based on the data captured from the sensor signal (for example, whether the sensor binds to an analyte). However, even in rudimentary scenarios, a single source of binary data may fail to provide basic information. Hence the need for artificial reasoning tools (ARTs), which are lightweight middleware (software that sits in the "middle") embedded with preliminary logic to decide what the data mean and what information may be conveyed (displayed) to the end-user. By introducing a modular ART, the user takes advantage of a combinatorial variant configuration menu to change, adapt, or introduce new reasoning/logic in the middleware by re-programming the logic "buckets", simply re-shuffling and inserting the user's preferred choices from a repertoire of pre-programmed logic [22].
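A minimal sketch of this "logic bucket" idea is given below (our own illustration, not the middleware described in [22]); each bucket is an interchangeable rule over the fused sensor readings, and the field names and thresholds are hypothetical:

```python
from typing import Callable, Dict

# Each "logic bucket" is a named predicate over the fused sensor readings.
LogicBucket = Callable[[Dict[str, float]], bool]

def build_art(buckets: Dict[str, LogicBucket]):
    """Return a tiny artificial-reasoning layer that maps raw sensor data to a
    human-readable suggestion. Buckets can be re-shuffled or swapped by the user."""
    def reason(readings: Dict[str, float]) -> str:
        fired = [name for name, rule in buckets.items() if rule(readings)]
        return "ALERT: " + ", ".join(fired) if fired else "No action suggested"
    return reason

# Hypothetical thresholds, purely for illustration
buckets = {
    "analyte detected":         lambda r: r["sensor_binding"] > 0.5,
    "exposure above guideline": lambda r: r["concentration_ppb"] > 6.0,
}
art = build_art(buckets)
print(art({"sensor_binding": 0.8, "concentration_ppb": 9.2}))
print(art({"sensor_binding": 0.1, "concentration_ppb": 1.0}))
```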
There are many complex layers to a system-level solution to ease the environmental burden on impoverished communities. Velez-Torres et al. [25] recently developed a circular system framework for integrating analytic tools (such as SNAPS) with social action research (Closed-loop integration of social action and analytical science research, CLISAR). The CLISAR framework is a transdisciplinary approach that involves analytical tools such as sensors for informing community action that is related to, for example, public health, environmental issues, or food security. Beyond simple commercial colorimetric detection strips that are used in development of CLISAR, information derived from SNAPS can transform this system by supporting decision-making processes that are aimed at improving the health outcomes of marginalized communities. Herein, we suggest a conceptual approach for selecting and implementing the type of diagnostic tools for implementation of SNAPS (see Figure 1). The examples that follow in the subsequent section used a five-step process that followed a closed-loop approach similar to CLISAR and other circular economic models [25,26]. The first step is to understand the specific problem as well as the social and economic context where decision-support technology may be needed. The next step is to identify readily available resources and then design diagnostic tools for creating a technology portfolio (sensors, analytics software, portable hardware, etc.). The third step involves the selection of the most appropriate tools to create SNAPS based on technical capabilities as well as interactive feedback from stakeholders. In step four, scientists and end-users test technology prototypes in field conditions by using established participatory methodologies. Finally, the results from the proof-of-concept testing are used to evaluate and refine the technology. This process is repeated until a solution meets user expectations and desired performance characteristics. The concept is based on principles of circular systems and convergent thinking [25,26], where technology refinement may occur by using reductionist or parallel approaches. Below, we present two examples of how this conceptual model is applied in real-world settings. The first example is in advanced stage field-testing (refinement and technology improvement, with some elements in the second circular phase), while the second example is in the early phase of development (tool selection and technology transfer).
Figure 1. The process begins with establishing context, and each cycle concludes with technology refinement based on user feedback. The blue, orange, and green arrows indicate technology evolution by using established principles of circular feedback systems. (B) A conical representation of the blue, orange, and green cycles shown in Panel (A) indicates convergence toward a systems-level solution through feedback/refinement pathways. The total number of cycles is context-specific and proceeds from cycle 1 to cycle n.
Early Assessment of Tuberculosis in Vulnerable Populations
In 2017, 1.6 million people died from tuberculosis (TB) globally, and there were 10 million new TB cases that occurred in the same year [27]. TB has surpassed HIV as the leading infectious disease killer worldwide since 2014 [28]. Furthermore, multidrug-resistant and extensively drug-resistant TB (MDR/XDR-TB) are current global public health threats. The 2017 Moscow Ministerial Declaration on ending TB, involving 120 countries and over 800 partners, identified "to advance research and development of new tools to diagnose, treat and prevent TB" as one of four action items [29]. This meeting was followed in 2018 by a United Nations (UN) General Assembly first-ever high-level meeting to accelerate efforts to end TB [30].
The care of TB patients starts with accessible and affordable diagnosis. The majority of TB patients live in poor conditions and in geographically remote areas. Culture-based techniques are the gold standard for diagnosis, but this is relatively expensive and results take six-to-eight weeks [31]. For decades, TB diagnosis has relied on direct sputum smear microscopy (SSM) in many countries [31]. SSM is fast, inexpensive, facile, and specific for detecting Mycobacterium tuberculosis (Mtb) in high incidence areas [31][32][33][34]. SSM does not require a highly specialized apparatus and is therefore very suitable for low-resource settings [31,33]. However, the accuracy of SSM is only 25-65%, which is considerably lower than the standard culture technique, and its limit of detection is about 10,000 colony forming units per milliliter (CFU/mL) [34,35]. In a recent study involving hundreds of specimens tested with culture, SSM, and the Xpert MTB/RIF system, the SSM method exhibited an average accuracy of 54% for respiratory samples and 50% for non-respiratory samples [36]. Furthermore, the overall performance of SSM depends on different variables including the type of lesion, the type and number of specimens, the specific Mycobacterial species, the staining technique, and the competence of the microscopist [35]. In a 2014 survey, 22 high-burden countries conducted 78 million sputum smears valued at 137 million USD in 43,000 microscopy centers; about 61% of the analyses were conducted in the BRICS countries (Brazil, Russian Federation, India, China and South Africa) [37]. About 79% of the smears performed in the BRICS countries were used for initial diagnosis. On average, the unit cost for a smear was 1.77 USD, including materials, labor, and overhead expenses [35]. Several studies have shown that the accuracy of SSM improved when specimens were subjected to liquefaction, followed by the concentration of the Mycobacteria through overnight sedimentation or centrifugation [34,38-42]. However, the enhanced SSM performance provided by these pretreatment steps may not be sufficient to offset their increased cost, the complexity of their process, and potential biohazards.
Recent advances in bacteria preconcentration and the diagnosis of TB and multi-drug resistant tuberculosis (MDR-TB) include sophisticated techniques such as Xpert MTB/RIF, TB beads, liquid culture, centrifugation, filtration, and line probe assays [43][44][45][46][47]. However, these techniques are not necessarily accessible or affordable for those who need them the most [48]. Considering the high accuracy (~97%) and specificity (~99%) of the Xpert system relative to the culture standard [36], the World Health Organization issued a recommendation in 2010 to use Xpert MTB/RIF for the diagnosis of all persons with signs and symptoms of TB. However, the Xpert MTB/RIF assay entails a price of US$10 per cartridge. Thus, if this method was to be implemented for all people with presumed TB, the cost would exceed 80% of the total TB spending in low-income countries such as India, Bangladesh, Indonesia and Pakistan [49]. In 2014 and 2015, there were 33 and nine SSMs for every Xpert MTB/RIF test procured, respectively [50]. While high-end diagnostic methods are more accurate and/or specific than SSM, these techniques remain cost-prohibiting and inaccessible for people living in low-income countries where Mtb has a high prevalence.
An essential aspect of TB is the substantial financial burden placed on patients and their families due to treatment and associated costs. For example, TB patients are often required to take a leave of absence from work, which is unpaid in some cases, leading to a higher risk of financial struggle in the household [51]. Tanimura et al. reported the distribution of the financial burden for the TB patient as 20% due to direct medical costs, 20% due to direct non-medical costs, and 60% due to income loss [52]. On average, the total cost was equivalent to 58% of reported annual individual income and 39% of reported household income [52].
In this context, accurate, rapid, and cost-effective diagnostic tests are paramount for reducing TB infection and its unacceptably high mortality rates, especially for an easily treatable disease [53]. The ambitious goal of the global "End TB Strategy" to diminish TB incidence by 90% and reduce TB mortality by 95% by the year 2035 is unlikely achievable without highly accurate yet low-cost tools to address epidemics in settings of poverty [54]. New tools must include improved point-of-care diagnostic tests that are delivered to low-income communities and at the first point-of-contact by patients in the healthcare system. Ideally, TB tests should be performed with the use of non-invasive sampling procedures, and results should be promptly delivered to the patients, allowing for a quick turnaround time for treatment in a single clinical encounter and hence avoiding the loss of patient follow up [54].
Thus, our strategy was to develop low-cost biosensing assay for rapid TB detection by employing modern advances in nanoparticle science and glyco-chemistry, thus resulting in an accuracy matching the performance of Xpert MTB/RIF [55,56] and standard culture. The nanoparticle-based colorimetric biosensing assay (NCBA) is based on the concept of the magnetically activated cell enrichment (MACE) technique using glycan-coated magnetic nanoparticles (GMNP). In this technique, the Mtb cells are isolated and enriched by applying a magnetic field to activate nanoparticle-bound Mtb cells without using any expensive antibodies or energy-consuming centrifuge instruments, thus eliminating the need for time-consuming growth of Mtb. The NCBA test involves the utilization of iron oxide nanoparticles with superparamagnetic properties. The incorporation of magnetic nanoparticles (MNPs) allows for significant improvements over other pre-concentration techniques due to their high surface-area-to-volume ratio and physicochemical properties. The MNP solution is colloidal in nature, providing stability, low sedimentation rates, and minimal precipitation due to gravitation forces. The MNPs are coated with glycan to facilitate their attachment to the bacterial cell wall through carbohydrate-binding protein sites, providing selectivity to the biosensing mechanism. There are three stages of specificity involved in this method: First, glycan-cell interaction is specific to the bacteria cell membrane through carbohydrate-protein binding. Second, the Ziehl-Neelsen staining used in the NCBA test is specific to acid-fast bacilli Mycobacteria. Third: the Mycobacteria present in sputum due to respiratory hemoptysis (i.e., intense coughing) is likely TB-causing bacteria.
The NCBA has been used to test sputum samples in Nepal (500 samples), Peru (1108 samples), and Mexico (24 samples) [55][56][57]. In the case of Nepal, all sputum samples were tested for TB by using three different methods: SSM, Xpert MTB/RIF, and the NCBA. In this study, SSM detected only 40% of the true-positive specimens, while Xpert and the NCBA successfully detected 100% of the true-positive samples. Neither one of the methods yielded false-positive results. Table 1 presents the results from the SSM (left panel) and the NCBA tests (right panel), using Xpert MTB/RIF as the standard for defining the number of true-positive and true-negative TB cases. Table 2 presents the performance characteristics for both SSM and the NCBA, including sensitivity, specificity positive predictive value (PPV), negative predictive value (NPV), and accuracy. As shown in Table 2, at a 95% confidence interval, SSM had a relatively low sensitivity of only 40% (29−52%), while the NCBA exhibited high sensitivity comparable to the Xpert system (95−100%). The accuracy of SSM was 90% (87-93%), while the accuracy of the NCBA was 100% (99-100%). Given the sample size and nature of the collected samples, the calculated prevalence for this cohort of patients was 16% (80 out of 500). When samples were positive, the Xpert MTB/RIF system reported the bacterial load set by the manufacturer as very low, low, medium, and high. These four categories were used to estimate the equivalent load in SSM and the NCBA by matching the corresponding samples with Xpert results. Table 3 shows a comparison of the detection limit and dynamic range of the detection of the two techniques with respect to the Xpert system. As seen in the table, the NCBA yielded the same results as Xpert MTB/RIF at all levels of bacterial load. Conversely, SSM was unable to detect positive samples at the very low level and detected only 14% of true-positives at the low level, 48% at the medium level, and 79% at the high level. TB positive samples are normally distributed around the medium level, at which SSM exhibited a poor detection rate of less than 50%.
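For reference, the performance characteristics in Table 2 follow directly from a 2 × 2 confusion table against the Xpert MTB/RIF reference. The sketch below uses illustrative counts consistent with the cohort description above (80 true positives among 500 samples, SSM detecting 40% of them with no false positives), not the exact tabulated data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Point-of-care test metrics computed from a 2x2 confusion table,
    with the reference method (here Xpert MTB/RIF) defining true status."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts: 500 samples, 80 reference-positive (16% prevalence)
ssm  = diagnostic_metrics(tp=32, fp=0, tn=420, fn=48)   # SSM detects ~40% of positives
ncba = diagnostic_metrics(tp=80, fp=0, tn=420, fn=0)    # NCBA detects all positives
print(ssm)    # accuracy ~0.90, matching the reported SSM figure
print(ncba)   # accuracy 1.00
```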
The NCBA method significantly outperformed SSM with a lower detection limit for acid fast bacilli (AFB) of 10² CFU/mL and a fast analysis time of 10-20 min. This diagnostic tool is facile (Figure 2), easily scalable, and inexpensive (0.10 USD/test). According to the Ministry of Health of Nepal, a low-cost TB diagnostic test with 70% accuracy could potentially save 300,000 lives just in Nepal over the next five years [58]. The NCBA technique shows promising potential for improving the TB control program in Nepal and other high-prevalence low-income countries. The deployment of the NCBA in remote rural areas would help increase case finding and case notification, thus supporting public health programs for fighting drug-resistant TB. There are nearly 600 microscopy centers distributed throughout Nepal in which the immediate implementation of the NCBA is possible. Similarly, this technique is applicable in many of the high TB-burden countries. In 2013, Desikan hypothesized that a universally accessible and rapid detection method with a sensitivity of 85% and specificity of 97% could save about 392,000 lives every year worldwide [33]. Thus, the developed NCBA technology may enable the "End TB Strategy" and lead towards a TB-free world.
Alerting Mercury Exposure in Artisanal Gold Mining Communities
In South America, Africa, and Asia, millions of individuals are exposed to dangerous levels of mercury concentrations as a result of artisanal small-scale gold mining (ASGM) [59]. ASGM is a rudimentary gold mining approach that is performed by individuals or groups with little or no mechanization, often in informal (illegal) operational settings with toxic chemicals [60]. ASGM is composed of three main steps: crushing the ore into fines, mixing the fines with liquid mercury, and separating the mercury from gold by evaporating the mercury [61]. Often in unregulated occupational conditions, workers perform mercury evaporation by using open pits, which not only have severe adverse health effects for the workers that inhale the mercury vapor but also release the toxic vapor into the environment. ASGM recently exceeded combustion of coal as the leading anthropogenic source for mercury emissions globally [62]. The risk of exposure to mercury can lead to detrimental effects on the nervous, immune, reproductive, and digestive systems, induce infertility, reduce mental function, and induce kidney failure [63][64][65][66][67].
The global responsibility for reducing mercury emissions was recognized by the Minamata Convention in Switzerland in 2013. At the convention, over 140 countries signed a treaty committing to protect human health from mercury exposure [62]. The signatory countries pledged to "ban new mercury mines, phase-out existing mines, ensure the phase out and phase down of mercury use in a number of products and processes, develop control measures for emissions, and regulate the informal sector of ASGM" [62]. In order to mitigate mercury exposure and regulate mining operations, it is prudent for marginalized communities to monitor the presence of mercury in their water through low-cost, rapid, and facile devices.
Several analytical methods have been developed for mercury determination in water. Standard laboratory techniques include cold vapor atomic absorption spectroscopy (CV-AAS) [68,69], cold vapor-atomic fluorescence spectrometry (CV-AFS) [70,71] and inductively coupled plasma mass spectrometry (ICP-MS) [72,73]. These spectroscopic techniques are highly sensitive and accurate but are often impractical for environmental applications due to the high cost of analysis. In addition, these standard methods require extensive user training, and the results often require days or even weeks to produce results, making them less suitable for rural communities [74][75][76]. Some field capable units are commercially available, namely based on direct mercury analysis (DMA) and handheld nanosensors/biosensors [77,78]. DMA is based on the principle of thermal decomposition (vaporization), followed by amalgamation and subsequent atomic absorption spectroscopy. While extremely accurate, DMA is cost prohibitive for low-income communities because commercial prices of US-manufactured equipment range between 13k and $30k USD. Perhaps inexpensive nanosensors/biosensors that are coupled with low-cost electrochemical techniques on portable devices are likely to be more suitable as tools for the on-site analysis of mercury, especially where ASGM is in practice.
While there are many types of transduction methods for the low-cost determination of mercury, electrochemical methods are sensitive, quantitative, and may be the mechanism of choice for cost-effective rapid detection in the field [79]. The most common electrochemical method for ionic mercury detection is that of the anodic linear stripping voltammetry (ASV) techniques [74,80]. ASV is a two-step method of deposition/accumulation during the reduction of mercury ions and stripping during the oxidation of mercury ions along the surface of the electrode. As the mass transfer limit is reached in the reaction, the oxidative current forms a well-defined peak that can be used to calculate the concentration of mercury in the sample [81]. The efficiency of any electrochemical stripping test can be determined by calculating the percent change in oxidative current relative to baseline.
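Expressed as a formula, this figure of merit is simply the relative change of the stripping peak current; a minimal sketch with hypothetical current values:

```python
def stripping_signal_change(peak_current_ua, baseline_current_ua):
    """Percent change of the oxidative stripping peak relative to baseline,
    used as a simple figure of merit for an ASV mercury test."""
    return 100.0 * (peak_current_ua - baseline_current_ua) / baseline_current_ua

# Hypothetical currents (microamps) from a blank electrode vs. a spiked sample
print(stripping_signal_change(peak_current_ua=4.8, baseline_current_ua=3.2))  # 50.0 %
```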
Carbon-based nanomaterials are a popular choice for improving the electrochemical detection of mercury, as this type of material exhibits a high surface area, strong mechanical strength, excellent thermal conductivity, and high conductivity [82][83][84]. Some of the carbon nanomaterials in recent literature include glassy carbon [85,86], carbon nanotubes [87], graphene [88], and reduced graphene oxide [89]. While each of these nanocarbon materials is efficient for mercury detection via stripping voltammetry, some of the materials are complicated to fabricate and exhibit poor water solubility [90]. Among carbon nanomaterials, graphene and reduced graphene oxide (rGO) have the highest water solubility and one of the lowest fabrication costs. For these reasons, there is a growing trend to develop disposable, low-cost, graphene-based electrodes for field applications.
Examples of low-cost graphene electrodes include screen-printed electrodes and conductive paper and plastic [74,91]. In 2014, Lin et al. (2014) [92] discovered a low-cost, one-step, conductive material when reducing graphene on a commercial polymer with a carbon dioxide infrared laser. Since then, multiple researchers have shown that laser scribing could be used to design electrodes to sense biomolecules by using infrared and ultra-violet light lasers [93][94][95][96]. While graphene is indeed a useful material in sensing, one of its problems is the tendency of graphene and graphene oxide to bind to a variety of materials in aqueous phase [97]. For this reason, sensor labs typically metallize graphene electrodes with a noble metal that has a specific interaction with mercury ions. These metals can be deposited by using simple electrodeposition methods or advanced techniques such as pulsed sono-electrodeposition [98]. Recently, Abdelbasir et al. 2018 [99] showed that copper nanoparticles recovered from waste cables can be used to detect ionic mercury by using linear sweep stripping voltammetry (LSSV).
Low-cost, portable, mobile phone-based acquisition systems have been developed for mercury analysis in the field [100]. While this is significant for deploying sensors in low-income regions, the inexpensive-portable sensor-systems lack data analytics capability to transform the data into meaningful information that could be useful for the user. For example, the maximum concentration level for inorganic mercury in drinking water is 6 ppb [101]. However, bodyweight, ingestion rate, length of exposure, form and pathway of the contaminant, health of the individual, and concentration of mercury influences the degree of mercury toxicity [102][103][104]. Thus, a SNAPS tool may assist communities in acquiring data and extracting actionable information for decision support.
Our group is currently working on developing the SNAPS platform for estimating the toxicity risk associated with the ingestion of mercury-contaminated water. This SNAPS platform is composed of a disposable graphene-nanocopper sensor that is coupled with a low-cost handheld potentiostat and a smartphone. The working mechanism of the platform starts with the detection of mercury present in the sample by using the graphene-nanocopper sensor. Next, selective electrochemical interactions between mercury and the electrode generate an electrical signal. The electrical signal is acquired and processed by the potentiostat to produce a current output. Then, computer software records the current output and transforms it into concentration data via calibration curves. Finally, a smartphone app is used by the user to enter the data for the following parameters: mercury concentration in water (from the sensor), bodyweight of the user, water ingestion rate, and length of exposure. Based on these parameters, the app runs an algorithm that includes a hazard quotient formula to generate an estimation of the risk of toxicity for the user [105][106][107][108].
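The exact algorithm used by the app is described in [105][106][107][108]; as a hedged sketch of the general idea, a conventional hazard-quotient calculation for ingestion of contaminated water compares the chronic daily intake with an oral reference dose. The reference dose value and all inputs below are assumptions for illustration only:

```python
def hazard_quotient(conc_mg_per_l, intake_l_per_day, body_weight_kg,
                    exposure_days, averaging_days, rfd_mg_per_kg_day=3e-4):
    """Chronic daily intake divided by an oral reference dose (RfD).
    HQ > 1 flags a potential non-carcinogenic risk. The default RfD is a
    commonly cited value for inorganic mercury; the app may use other constants."""
    cdi = (conc_mg_per_l * intake_l_per_day * exposure_days) / (body_weight_kg * averaging_days)
    return cdi / rfd_mg_per_kg_day

# Hypothetical user inputs: 12 ppb Hg in water, 2 L/day, one year of exposure, 60 kg adult
hq = hazard_quotient(conc_mg_per_l=0.012, intake_l_per_day=2.0, body_weight_kg=60,
                     exposure_days=365, averaging_days=365)
print(round(hq, 2))  # about 1.33 -> above 1, so the app would flag a potential risk
```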
We recently conducted a proof-of-concept demonstration of this SNAPS platform in a rural area that has been dramatically impacted by ASGM known as La Toma in Cauca, Colombia. Even though this SNAPS platform is in an early stage of development, it represents an example of how rural communities in developing countries may use sensors as a service to access data on mobile devices and extract actionable information to help make informed decisions. Figure 3 shows the progression of the proof-of-concept demonstration of the technology.
Figure 3. Mercury enters natural aquatic systems primarily due to the burning of mercury amalgam during the extraction of gold from raw ore.
Can We Overcome the Economic Barriers for Distributing Diagnostic Tools in Low-Income Settings?
Framing the issue of diagnostic tools in the context of technology leads us to recognize a vast spectrum. On one hand, ideas about telemedicine were proposed about 100 years ago [109], and on the other hand, milestones in computational speed occurred about 100 days ago [110]. It may be justifiable to suggest that technological barriers may not be the primary reason why many diagnostic tools are still absent from communities under economic constraint. The powerful incentive of lucrative profitability, in the short term, may not be realized by serving impoverished regions.
Transaction cost [111] is open to multiple interpretations [112], but it may be the over-arching economic barrier explaining why accelerating the diffusion of diagnostic tools in distressed communities continues to pose difficult challenges [113][114][115]. We must focus on value to the user, or the extent of the benefit to the beneficiary's environment and/or ecosystem (for example, the early diagnosis of tuberculosis in a patient may save the entire village from infection and epidemic). However, delivery of value is inextricably linked to cost, unless the aim is merely to deliver philosophical or mythical messages [116].
In over-simplified terms, the convergence of the cost of the product and the cost to deliver the service contributes to transaction cost [117]. A plethora of costs and cost-incurring processes are involved, but we shall bypass the details. The physical product (in this case, the sensor) and the service is the solution delivery (SNAPS). Academics cannot control cost, but their contribution can impact implementation and use. A low-cost sensor from a lab must be manufactured, calibrated, evaluated, and sufficiently scaled before the outcome can still be claimed as a "low-cost" sensor that is capable of delivering value while maintaining a pre-agreed quality of service (QoS) in keeping with the key performance indicators (KPI) that the users desire, demand, or deem necessary.
In addition, a working sensor that is delivered to a user is useless without a visualization system to capture the data from the sensor. Stand-alone visualization devices (for example, blood glucose home monitors with dedicated devices to read the blood glucose strip and deliver data readout) add inordinate costs to the system. The alternative is to use a mobile phone as a platform to visualize the data from the sensor. The signal transduction from the sensor to the mobile phone calls for multiple layers of tools, technologies, and software (middleware), in addition to the functional use of a mobile phone. The presence of a mobile phone in any environment is contingent upon available cellular and/or wireless infrastructure to support its use. It may not be prudent to assume the presence of a telecommunications infrastructure despite the global penetration of such services [118][119][120][121]. Thus, even if a working sensor is at hand, the obvious process of signal to data transition and the visualization of the data involves multiple layers of capital expenses (infrastructure cost), as well as associated technologies and software.
Assuming that the above layers are in working order, the sensor data meets a "dead end" upon data visualization. A number (with units) is only meaningful if there is a relevant framework for interpreting such data, e.g., the combination of sensor data from mercury contamination expressed in terms of a hazard quotient score, which uses other vital pieces of information to assess health risk.
It is the delivery of information based on sensor data that drives value. Taken together, the physical product is no longer the focal point of value. Information pertaining to the health of the user is the service that delivers value to the user. Transaction cost, therefore, is no longer a product-based entity; rather, it is the cost of service that must be feasible for the service to be delivered, disseminated, and adopted by a community.
Overcoming the economic barriers to deliver SNAPS will be virtually impossible if the chasm between product and service continues to overshadow the concept of value delivery to the user. The economic principle, which may work in impoverished nations, is rooted in micro-finance and micro-payments with low transaction costs [122,123]. The paradigm shift from "product sales" to delivery of "service" involves combining the product with resources (including retail mobile banking, infrastructure, telecommunications, cybersecurity, and customer service). Users pay only when they use the service. The latter lowers the transaction cost and hence the barrier to entry into vast markets of low-income users. It is not the product but the user experience that is the pivotal fulcrum for the inversion of traditional business models in the era of the Internet of Things (IoT) [124].
The PAPPU model was epitomized by the plain old telephone system (POTS), where the user paid only a "charge per call" that was reasonably affordable even where per capita income was low. In this paper, we advocate for PAPPU as a metaphor for ethical profitability through social business models. In principle, the user may pay a penny for each use of a SNAP (suggested, but not restricted to, one penny). The "penny" is a placeholder for the financial design of an ultra-low-cost nano-payment model, which, in the real world, may represent one Rupee (INR), one Yuan (CNY, RMB), or one Peso (COP). The PAPPU metaphor may evolve into a generalized monetization mantra, pay-a-price-per-unit, wherever the principles of IoT are deployed or embedded as a digital-by-design approach, including ubiquitous sensing. The diffusion of connectivity may serve as a tool, and IoT may be catalytic as a platform, to better facilitate the practice of equality, equity and égalité. PAPPU offers an economic instrument for businesses to build a profit model based on economies of scale to serve low-income communities and abide by ethical profitability. PAPPU offers an alternative strategy for enterprises and businesses who seek to engage with the next billion users, albeit profitably, but within the realms of ethical profitability that can be sustained by the per capita income of these communities.
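As a purely illustrative back-of-the-envelope comparison (all figures below are hypothetical placeholders, not data from this paper), the sketch contrasts the affordability of a one-off device purchase with a PAPPU-style nano-payment.

```python
# Illustrative comparison (assumed numbers) of an upfront "product sale" model
# versus a PAPPU-style pay-per-use model for a low-income user.

def days_of_income_to_afford(cost, daily_income):
    """How many days of income a one-off purchase represents."""
    return cost / daily_income

def pappu_cost_share(price_per_use, uses_per_month, monthly_income):
    """Fraction of monthly income spent on per-use payments."""
    return (price_per_use * uses_per_month) / monthly_income

if __name__ == "__main__":
    daily_income = 2.0                    # USD/day, hypothetical low-income user
    device_cost = 150.0                   # USD, hypothetical stand-alone reader
    print(days_of_income_to_afford(device_cost, daily_income))   # 75 days of income

    # Penny-scale nano-payment per test, a few tests per month.
    share = pappu_cost_share(price_per_use=0.01, uses_per_month=10,
                             monthly_income=daily_income * 30)
    print(f"PAPPU spend = {share:.2%} of monthly income")        # ~0.17%
```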
The concomitant growth of infrastructure (e.g., affordable access to low latency, reduced jitter, high bandwidth wireless telecommunications, 5G, and trusted mobile banking) may be necessary to pave the way for the pursuit of PAPPU. The ability to escape the dead weight of old technology in the developing world may accelerate the implementation of PAPPU as an integral part of the socio-economic fabric of a product-less, service-based economy where payment per unit of service (one liter of municipal water, one kilowatt-hour of energy, or one gallon of sanitation waste) may become the new normal.
Implementing PAPPU may require alliances, public-private partnerships, or global consortia with an altruistic fervor to pay and pave the way for the synergistic integration that is necessary to promote SNAPS as services in low-income communities. The challenge is to bring to the table global organizations, benevolent individuals, and thoughtful governments who may choose to lead this effort to channel science to serve society for the less fortunate. We need new eyes, unbridled imagination, and the moral fabric of synergistic solutions that can wrap around, rather than isolate, and that protect, provide, and promote acceptable solutions for remediable injustices.
Social and Ethical Considerations for the Development and Implementation of SNAPS
Social and ethical considerations are inextricably linked with the transformation of SNAPS from an academic vision to real-world implementations that may actually help people. Academics must remain cognizant of their ethical responsibility to discourage the misapplication and dissemination of misinformation about their inventions. In this section, we attempt to analyze some potential interactions between the social and technological domains, as well as how democratic approaches for technology creation and diffusion could favor the improvement of health outcomes for disadvantaged communities.
Since the introduction of the technology acceptance model (TAM) decades ago, several extended versions of this archetype have been proposed to elaborate a more comprehensive framework for predicting people's intention to use a particular product or service [125][126][127]. The TAM and its variants have served as the guiding rationale behind R&D for a variety of commercial technologies that are mass-produced, including healthcare devices [128]. However, this model may be inadequate in the context of technology development for low-income communities [129]. It is worth noting that the ultimate goal of the TAM and related models is to forecast user behavior across a broad range of consumer populations, which means that the model focuses on highly generic predictors of technology acceptance. For instance, the TAM does not explicitly include any cultural or social variables, which is a significant limitation because social differences may contribute significantly to the variance in users' attitudes towards technology [127,130]. However, the goal of SNAPS with the PAPPU concept is to provide an affordable sensor-analytics service platform to support decision-making and the enhancement of health outcomes for economically challenged groups. Thus, a useful model to guide the development of SNAPS should include bi-directional communication between researchers and users, and it should motivate both researchers and users to adapt and better inform their behavior [131].
Trust in the technology [132] is quintessential for adoption and continued use, because technology is equally seen as a double-edged sword [133,134]. Driving positive impacts from the introduction of SNAPS in low-income regions may involve not only the transfer of fully functional technology but also the empowerment of the beneficiary communities by enabling the local mastery of the technology along with the possibility to reproduce and even adapt the technology to local conditions. We believe this open-source approach to technology adoption is auspicious for supporting marginalized communities, especially when trying to avoid the known failures of the charitable approach of technology leapfrogging. For example, the WHO estimates that only 10-30% of the medical devices that are donated to developing countries are used as intended; the remaining 70-90% end up being dumped in landfills, thus contributing to more pollution problems and environmental health risks [135]. This situation is explained not only by the incompatibility of the technology with the locally available infrastructure but also by the lack of local capacity to adapt or fix the donated devices once they break [136]. Additionally, dependence on foreign technologies could lead to an imbalance of power in which the users have no option other than relying on the willingness of external entities to continue to deliver much-needed technology in their regions. Thus, if the goal is to make technology work effectively on behalf of society, we must divert from the mainstream handed-down-from-the-top approach and enable society to create and transform technology in meaningful ways, in dispersed regions, and from the bottom-up.
Engaging the community through operational transparency may prevent public anxiety and may also facilitate the proper implementation of technology. Users' understanding of the limitations and potential risks associated with SNAPS could be vital for setting clear expectations about SNAPS-assisted testing while avoiding misapplications of the technology. As Wallace et al. pointed out, the misuse of many direct-to-consumer screening tests may have caused an unnecessary increase in healthcare costs, due to people's overreaction to inaccurate readings and their subsequent demand for further testing with advanced clinical technology [132]. However, this concern is mostly relevant for developed countries in which people have access to healthcare systems where clinical testing is readily available for patients. In low-income settings, such as remote rural areas in developing countries, health care services are often dysfunctional or completely inaccessible. For marginalized communities, information from SNAPS could instead drive actions that are aimed at limiting the exposure to harmful biological vectors and chemical agents. Thus, communities in territories that suffer from prolonged government abandonment could greatly benefit from the democratic adoption of SNAPS to make informed decisions and solve their problems with more autonomy. Nonetheless, we agree that transparency and accountability from everyone involved in the process of technology deployment are paramount for protecting the users' rights and integrity.
Conclusions
Monitoring environmental contamination is essential to protect the public from diseases and other health issues. This monitoring requires accurate and cost-accessible sensor technologies to enable early warning capabilities for users to minimize negative impacts (Figure 4). The framework of SNAPS with PAPPU has the potential to pave the way for economically viable systems that can potentially be applied as tools to reduce local environmental risks and mitigate the health problems that derive from them. We envision that the use of SNAPS will increase low-income communities' participation in the public/government planning process by providing data that they can use to fight for their right to public health care, clean water, and adequate sanitation. By bridging smart technology with basic needs and public health, SNAPS will advance our understanding of how information can change public participation, with low-income communities' representatives acting as 'change agents' that influence public policies and planning. These communities' representatives benefit from rights-based arguments, evidence-based research, and effective data analyses. SNAPS have the potential to serve as an illustration of how empowering impoverished communities in their local context can strengthen democratic practice in their region. Grounded in an integrated perspective that takes social and ethical considerations into account, we foresee that SNAPS will shed some light on how to improve the implementation of public health plans in underserved communities by increasing public participation in planning. Moreover, SNAPS could potentially become a new approach to achieve the United Nations Sustainable Development Goals 3 and 6: ensure healthy lives while promoting well-being at all ages and ensure access to water and sanitation for all, respectively. Furthermore, it could also help empower impoverished communities to obtain the rights they have been promised, such as basic sanitation, clean water, and adequate health care services.
Figure 4. SNAPS converges with pay-a-penny-per-use (PAPPU) to establish a framework for sensor-as-a-service. The paradigm is rooted in economic, ethical, cultural, and environmental core values that synergistically act as a catalyst for the democratization of healthcare in underserved communities. Where noted, photos credited to Demirbas et al. [137] and Vanegas et al. [95].
Supplementary Materials:
The following are available online at http://www.mdpi.com/2075-4418/10/1/22/s1, Table S1: Number of research articles published every year for the past ten years in peer-reviewed journals on the topic of E. coli biosensors, Table S2: Top five agencies that provide funding for research on E. coli biosensors, Table S3: Depiction of research articles on E. coli biosensors that contain claims related to real-world applicability.
Coordinated Cluster / Double Star and ground-based observations of dayside reconnection signatures on 11 February 2004
A number of flux transfer events (FTEs) were observed between 09:00 and 12:00 UT on 11 February 2004, during southward and dawnward IMF, while the Cluster spacecraft array moved outbound through the northern, high-altitude cusp and dayside high-latitude boundary layer, and the Double Star TC-1 spacecraft was crossing the dayside low-latitude magnetopause into the magnetosheath south of the ecliptic plane. The Cluster array grazed the equatorial cusp boundary, observing reconnection-like mixing of magnetosheath and magnetospheric plasma populations. In an adjacent interval, TC-1 sampled a series of sometimes non-standard FTEs, also with mixed magnetosheath and magnetospheric plasma populations, near the magnetopause crossing, and later showed additional (possibly turbulent) activity not characteristic of FTEs when it was situated deeper in the magnetosheath. The motion of these FTEs is analyzed in some detail for comparison with simultaneous, poleward-moving plasma concentration enhancements recorded by the EISCAT Svalbard Radar (ESR) and "poleward-moving radar auroral forms" (PMRAFs) in the CUTLASS Finland and Kerguelen Super Dual Auroral Radar Network (SuperDARN) radar measurements. Conjugate SuperDARN observations show a predominantly two-cell convection pattern in the Northern and Southern Hemispheres. The results are consistent with the expected motion of reconnected magnetic flux tubes, arising from a predominantly sub-solar reconnection site. Here, we are able to track north and south in closely adjacent intervals as well as to map to the corresponding ionospheric footprints of the implied flux tubes, and we demonstrate that these are temporally correlated with clear ionospheric velocity enhancements, having northward (southward) and eastward (westward) convected flow components in the Northern (Southern) Hemisphere. The durations of these enhancements might imply that the evolution time of the FTEs is about 18-22 min from their origin on the magnetopause (at the reconnection site) to their addition to the magnetotail lobe. However, the ionospheric response time in the Northern Hemisphere is about 2-4 min longer than the response time in the Southern Hemisphere.
Correspondence to: Q.-H. Zhang (zhangqinghe@pric.gov.cn)
Introduction
Magnetic reconnection is a fundamental plasma process, resulting in energy and momentum transfer from the solar wind to the magnetosphere. This process was first discussed in terms of a steady process by Dungey (1961) and was later discovered to show an intermittent and spatially limited nature by Haerendel et al. (1978) and then Russell and Elphic (1978) at the dayside magnetopause. The associated magnetic signatures arising from the passage of bundles of reconnected flux near a spacecraft were named flux transfer events (FTEs) by Russell and Elphic (1978). This term was originally designed to characterize the signatures according to their bipolar signature in the magnetic field component normal to the magnetopause. Subsequent studies detailed the intricate mixing of magnetosheath and magnetospheric plasma populations associated with these signatures (e.g. Daly et al., 1981; Thomsen et al., 1987; Farrugia et al., 1988), their accelerated ion flows (e.g. Paschmann et al., 1982), and their larger occurrence rate during periods of southward interplanetary magnetic field (IMF) (e.g. Berchem and Russell, 1984; Lockwood and Smith, 1992). Statistical studies (e.g. Rijnbeek et al., 1984; Lockwood, 1991; Lockwood and Wild, 1993) have also shown that the mean interval between FTE signatures is of the order of 8 min. However, Lockwood and Wild (1993) showed that the distribution of these intervals has a mode value at 3 min, with lower and upper decile values of 1.5 and 18.5 min, respectively.
Because of the limitation of single-point spacecraft measurements at the magnetopause, it is difficult to determine the spatial distribution and motion of FTEs. Furthermore, the in-situ space observations are associated with the response of the ionosphere and ground geomagnetic field. The early work of Elphic et al. (1990) demonstrated that ionospheric flow bursts measured by EISCAT were associated with FTEs observed by ISEE, and the first magnetically conjugate measurements of an FTE by Equator-S and of ionospheric flow bursts by SuperDARN were presented by Neudegg et al. (1999). The UV aurora measured by the Polar spacecraft's VIS (Visible Imaging System) Earth camera in the vicinity of the reconnection footprint for this event was later discussed (Neudegg et al., 2001). Recently, Cluster (Escoubet et al., 2001) observations of FTEs (e.g. Owen et al., 2001; Fear et al., 2005; Zheng et al., 2005; Hasegawa et al., 2006; Wang et al., 2005) have been combined with a variety of ground-based instruments (e.g. Lockwood et al., 2001; Wild et al., 2001, 2003; Marchaudon et al., 2004; Zhang et al., 2008, 2010).
Following the successful launch of Double Star, it is now possible to study FTEs from five or six points in space simultaneously. For example, the first magnetically conjugate observations of FTEs by Cluster and Double Star TC-1 in the Northern and Southern Hemisphere, respectively, were presented by Dunlop et al. (2005), and coordinated Cluster/Double Star and ground-based measurements of FTEs were reported by Wild et al. (2005, 2007). Nevertheless, the evolution of a flux tube (FTE), from its generation at the magnetopause to its disappearance in the global magnetospheric convection (Amm et al., 2005), is not well tied to the location of reconnection onset or the development of the reconnection rates.
In this paper, we analyze several medium- to large-scale FTEs which were observed by the Cluster array, at the high-latitude magnetopause, or by the TC-1 spacecraft, south of the subsolar magnetopause, simultaneously measured by the ESR and conjugately observed by the CUTLASS Finland and Kerguelen SuperDARN radars (also observing the ionospheric plasma flow; Greenwald et al., 1995; Chisham et al., 2007), which measure the global ionospheric convection. These FTEs are interpreted as reconnection-generated signatures. All FTEs observed by Cluster and TC-1 have some reconnection features in the plasma data: some of these FTEs, especially those observed by TC-1, contain an accelerated magnetosheath population, and the others contain a mixture of magnetospheric and magnetosheath plasma populations. Using the Cluster 4-spacecraft observations, we calculate the velocity and the size of the implied flux tubes connected to the northern cusp. The ESR measurements record poleward flows, and the CUTLASS Finland and Kerguelen SuperDARN radar observations show "poleward-moving radar auroral forms" (PMRAFs), also indicative of bursty reconnection at the dayside magnetopause. The SuperDARN observations show that the individual flux tube movements, which contain predominantly northward (southward) or eastward (westward) components, map to positions in the ionospheric convection cells in the Northern (Southern) Hemisphere which have the corresponding flow directions. Moreover, we verify that the movements of the reconnected flux tubes are consistent with the Cooling model (Cooling et al., 2001), which predicts the expected motion of reconnected flux tubes, given the prevailing IMF and sheared solar wind flow. We also comment on other features of the data, focusing on additional magnetic activity at TC-1.

Upstream solar wind and IMF conditions

Figure 1 presents the upstream IMF components (Fig. 1b and c), the IMF clock angle (Fig. 1d), the solar wind plasma number density (Fig. 1e), the solar wind speed (Fig. 1f), and the solar wind dynamic pressure (Fig. 1g). The data have been lagged by 69 min before 10:00 UT (lagged time) and 66 min after 10:00 UT (this time delay is calculated using the method of Liou et al., 1998) in order to take into account the propagation of the solar wind/IMF structure from the spacecraft to the subsolar magnetopause. The ACE spacecraft was located at about (221.2, −32.6, 9.5) R E in the Geocentric Solar Magnetic (GSM) coordinate system at about 10:34 UT (lagged time). During the whole interval, the IMF B Z component was near zero before about 09:30 UT (lagged time) and always negative after 09:30 UT, varying between −8.2 and −0.4 nT (see Fig. 1c), while the B Y component was negative with a short, positive incursion (see Fig. 1b). The IMF clock angle (the angle arctan(B Y /B Z ) between the GSM Z-axis and the projection of the IMF onto the GSM Y-Z plane) therefore varied between 90° and 180° during this period (see Fig. 1d). The solar wind density increased from 7 to 19 cm −3 over the interval of interest (see Fig. 1e), whilst the solar wind velocity varied between 370 and 387 km s −1 (see Fig. 1f), resulting in a prevailing solar wind dynamic pressure of ∼1.8-4.5 nPa (see Fig. 1g).
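As a small illustration (an assumed computation, not the authors' processing chain), the clock angle quoted above can be obtained from the lagged GSM B Y and B Z components as follows; the sample values are hypothetical.

```python
# Sketch of the IMF clock angle used above, computed from GSM B_Y and B_Z.
import numpy as np

def imf_clock_angle_deg(by_nt, bz_nt):
    """Clock angle = |atan2(B_Y, B_Z)| in 0..180 deg (0 = due north, 180 = due south)."""
    return np.degrees(np.abs(np.arctan2(by_nt, bz_nt)))

# Hypothetical lagged samples: southward/dawnward IMF gives angles between 90 and 180 deg.
by = np.array([-4.0, -3.0, -1.0])
bz = np.array([-5.9, -0.5, -8.0])
print(imf_clock_angle_deg(by, bz))  # approx. [145.9, 99.5, 172.9] deg
```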
Spacecraft and ground coverage
The Cluster spacecraft (Escoubet et al., 2001) have a perigee at ∼4 R E, an apogee at ∼19.6 R E, and identical orbital periods of 57 h. The average separation between any two spacecraft was about 300 km in February 2004. Data with 4 s resolution from the fluxgate magnetometer (FGM) (Balogh et al., 2001) on all four Cluster satellites, and from the Plasma Electron and Current Experiment (PEACE) (Johnstone et al., 1997) and the Cluster Ion Spectrometry (CIS) experiment (Rème et al., 2001) onboard Cluster S/C 1, are used in this study. One of the two Double Star spacecraft, TC-1 (Liu et al., 2005), was launched in December 2003 into an equatorial orbit at 28.2° inclination, with a perigee altitude of 577 km, an apogee of 13.4 R E, and an orbital period of 27.4 h. Data with 4 s resolution from the FGM (Carr et al., 2005) and PEACE (Fazakerley et al., 2005) instruments onboard TC-1 are used in this paper.
Figure 2 shows the tracks of all Cluster and the TC-1 spacecraft between 09:00 and 13:00 UT on 11 February 2004, in the X-Z (a) and X-Y (b) planes, in the GSM coordinate system, with the configuration of the Cluster spacecraft array drawn as a tetrahedron (size scaled up by a factor of 20). Model geomagnetic field lines are shown for the projection into the X-Z plane, and cuts through the bow shock (BS) and magnetopause (MP) are shown for the X-Y plane. The ionospheric footprints of Cluster spacecraft 1 (blue line) and TC-1 (red line) on maps of the Northern (c) and Southern (d) Hemispheres in the geographic coordinate system are shown in the lower panels. The X-Z plane field lines and ionospheric footprints of the spacecraft are drawn from the Tsyganenko '96 model (Tsyganenko and Stern, 1996) with input parameters P Dyn = 3.93 nPa (the solar wind dynamic pressure), IMF B Y = −4.00 nT, IMF B Z = −5.92 nT, and Dst = −1 nT. These parameters represent the average IMF and solar wind conditions during the interval of interest. During this interval, all spacecraft are outbound from the magnetosphere: the Cluster array appears to move through the open field line region, initially in the northern lobe, then grazing the equatorial cusp boundary, and TC-1 appears to enter the magnetosheath after about 10:00 UT. Thus there are no Cluster footprints in the Southern Hemisphere, and the TC-1 footprints end after about 10:00 UT in both hemispheres. In fact, the data shown below (see Fig. 3) indicate that Cluster exits into the magnetosheath earlier, at ∼11:20 UT, and TC-1 exits at ∼09:20 UT, which suggests a significantly eroded magnetopause at this time. The fields-of-view of the CUTLASS Finland and Kerguelen SuperDARN radars are presented as fans in Fig. 2c and d, respectively, with the beam employed in this study indicated by red dashed lines. The poleward-looking, low-elevation beam (32 m dish) of the ESR (between 76° and 85° magnetic latitude) is indicated by the solid green line in Fig. 2c. The red "⊕" represents the magnetic pole.
Cluster and Double Star TC-1 observations
Figure 3 plots, in the top panels, the magnetic field data from all four Cluster S/C (represented by black, red, green, and magenta lines for satellites 1, 2, 3, 4, respectively) for the interval of interest and from the TC-1 spacecraft (blue line), together with the IMF clock angle, lagged by 69 min before 10:00 UT (lagged time) and 66 min after 10:00 UT (the convection time from ACE to the subsolar magnetopause). The lower panels show the spectrograms of electron field-aligned differential energy flux from the HEEA and LEEA sensors of the PEACE instrument on Cluster S/C 1 and from the PEACE instrument on TC-1. It should be noted that the quasi-regular jumps in energy level in the TC-1 spectra (at 10:38, 10:58, 11:13, and 11:33 UT) are believed to be spacecraft-generated interference spikes. The dropouts in the TC-1 distribution around 10:30 and 11:20 UT are excursions into the solar wind (encountering the bow shock), which is also clear from the TC-1 magnetic field (see Fig. 3), and it is clear that the second of these corresponds to the main magnetopause crossing by Cluster, suggesting a global compression of the magnetosphere and inward bow shock motion at this time.
The magnetic field data are expressed in local boundary normal coordinates (LMN), which have been found by performing minimum variance analysis (MVA) (Sonnerup and Scheible, 1998) on the local magnetopause crossing of Cluster S/C 3 between about 11:00 and 11:35 UT and of TC-1 between about 09:22 and 09:45 UT, to obtain the mean boundary normal n in each case, and the unit vector l along the projection of the solar magnetospheric Z-direction perpendicular to the boundary normal. The unit vectors l, m, n are given as (−0.40, 0.40, 0.83), (0.56, 0.82, −0.12) and (0.72, −0.42, 0.55) in GSM coordinates, for the mean Cluster magnetopause crossing, and (0.38, 0.32, 0.87), (0.09, 0.92, −0.38) and (0.92, −0.22, −0.31) in GSM coordinates, for TC-1. Inspection of the solar wind conditions shows that the IMF clock angle (see Fig. 3e) exhibited stable, dominantly southward IMF conditions (CA ∼130° to 180°) between about 09:54 and 10:43 UT and after 11:30 UT, and variable, dominantly dawnward IMF conditions (CA ∼80° to 140°) with southward components before 09:54 UT and between 10:43 and 11:30 UT. This favours a high reconnection rate at the low-latitude magnetopause. Figure 3a and f show disturbed magnetic field and precipitating electron signatures, which indicate that the Cluster spacecraft were crossing open field line regions and the cusp between about 09:10 and 11:12 UT and encountered the magnetopause at about 11:18 UT (marked by the violet dashed vertical line in Fig. 3). The spacecraft were in the magnetosheath after about 11:18 UT. Figure 3g shows that the TC-1 spacecraft was moving outbound through the dayside, low-latitude magnetopause at about 09:33 UT and was within the magnetosheath after that, with two short excursions into the solar wind. There are a large number of separate field-parallel electron beams containing mixed high- and low-energy electron populations in the Cluster S/C 1 electron spectrogram (Fig. 3f). There are also a large number of electron beams in the TC-1 PEACE electron spectrograms (Fig. 3g), although these beams consist mostly of an accelerated magnetosheath population. The small-scale electron signatures observed in the magnetosheath by TC-1 are quite complicated: some electron beams are very short and have high electron fluxes at 90° pitch-angles. Some beams are longer and show reconnection-related signatures. These will be discussed later in the text in detail.
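For illustration, a minimal sketch of the MVA step described above is given below; this is an assumed implementation following the general recipe of Sonnerup and Scheible (1998), with synthetic field samples rather than the FGM data.

```python
# Minimal minimum variance analysis (MVA) sketch: the eigenvector of the
# magnetic variance matrix with the smallest eigenvalue estimates the
# boundary normal n of the magnetopause crossing.
import numpy as np

def mva_normal(b):
    """b: (N, 3) array of magnetic field samples across the crossing."""
    # Magnetic variance (covariance) matrix M_ij = <B_i B_j> - <B_i><B_j>
    m = np.cov(b, rowvar=False, bias=True)
    eigvals, eigvecs = np.linalg.eigh(m)      # eigenvalues in ascending order
    n = eigvecs[:, 0]                         # minimum-variance direction
    return n / np.linalg.norm(n), eigvals

# Synthetic field samples (nT); real input would be the 4 s FGM data between
# the crossing times quoted above.
rng = np.random.default_rng(0)
b = rng.normal(size=(200, 3)) * np.array([10.0, 6.0, 1.0])  # small variance along z
normal, lam = mva_normal(b)
print(normal, lam)   # normal ~ +/-(0, 0, 1); the ratio lam[1]/lam[0] gauges reliability
```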
The electron distributions seen by all Cluster PEACE instruments between about 10:00 and 11:35 UT show clear mixing of magnetosheath and magnetospheric plasma populations, suggestive of reconnected flux tubes (FTEs) (Owen et al., 2001). In the TC-1 PEACE measurements between about 09:00 and 10:45 UT, the observed possible FTE structures do not show a clear mixing of plasma populations from magnetospheric and magnetosheath sources. In order to identify FTEs, we have performed hodogram analysis of the magnetic field variation for all of the FTE-like signatures observed by Cluster and TC-1, respectively. As an example, Fig. 4 presents hodograms of the magnetic field in LMN coordinates for the period 09:43:01-09:45:26 UT observed by the TC-1 spacecraft and for the period 11:32:43-11:33:39 UT measured by Cluster S/C 1. Left and right panels show L-M and L-N representations. The black dot (marked "S") indicates the start point in each panel. From Fig. 4, we find that there was a clear "bump" of the reconnected flux tube in both the L-M and L-N planes of the magnetic field crossed by TC-1 and Cluster, respectively, which indicates that the FTE-like signatures are FTEs and can be taken as one of the criteria for FTE identification. According to this criterion from the hodogram analysis, together with higher plasma number density and velocity, we highlight for detailed analysis one magnetospheric and one magnetosheath FTE measured by TC-1 (indicated by red dashed vertical lines and marked by the red numbers "i-ii" at the top of Figs. 3 and 5), and two other FTEs observed by Cluster (indicated by blue dashed vertical lines and marked by the blue numbers "1-2" at the top of Figs. 3 and 6), respectively. These data are plotted in Figs. 5 and 6 for the intervals 09:18-10:00 UT for TC-1 and 11:00-12:00 UT for Cluster S/C 1, respectively, to show more detail for the analysis below.
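A simple way to reproduce the kind of hodogram used in Fig. 4 is sketched below; this is an assumed plotting routine with a synthetic flux-rope-like signature, not the authors' code.

```python
# Sketch of the hodogram representation described above: the field in the L-M
# and L-N planes, where a flux rope / FTE produces a characteristic "bump".
import numpy as np
import matplotlib.pyplot as plt

def plot_hodograms(b_lmn):
    bl, bm, bn = b_lmn[:, 0], b_lmn[:, 1], b_lmn[:, 2]
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
    ax1.plot(bm, bl, "-k"); ax1.plot(bm[0], bl[0], "ko")   # start point "S"
    ax1.set_xlabel("B_M (nT)"); ax1.set_ylabel("B_L (nT)")
    ax2.plot(bn, bl, "-k"); ax2.plot(bn[0], bl[0], "ko")
    ax2.set_xlabel("B_N (nT)"); ax2.set_ylabel("B_L (nT)")
    fig.tight_layout()
    return fig

# Synthetic flux-rope-like signature: bipolar B_N with an enhanced core field in L.
t = np.linspace(-1, 1, 100)
b_lmn = np.column_stack([20 + 10 * np.exp(-t**2 / 0.1),
                         5 * np.ones_like(t),
                         15 * t * np.exp(-t**2 / 0.1)])
plot_hodograms(b_lmn)
plt.show()
```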
The panels in Fig. 5 show the magnetic field boundary normal component B N (same as Fig. 3c) and the field magnitude, together with PEACE electron spectrograms in the anti-parallel, perpendicular, and parallel directions, as observed by TC-1. As in Fig. 3, we indicate the two FTEs observed by TC-1 by red dashed vertical lines. From Fig. 5, we find that near its magnetopause crossing, the TC-1 spacecraft sampled a series of FTE signatures which are generally of large size and show "reverse" polarity (negative/positive) bipolar signatures in the B N component (highlighted by the red dashed vertical lines and marked by the red numbers "i-ii"). This suggests that TC-1 observed southward-moving flux tubes, which are connected to the southern cusp and were generated by low-latitude magnetic reconnection. The electron population was studied in detail for these two FTEs, using the electron spectrogram and the electron pitch-angle spectrogram (not shown here). The electron spectrogram shows a well-defined electron beam with an accelerated magnetosheath plasma population mixed with the magnetospheric population for the second discussed FTE (ii). However, there is no clear electron signature associated with the first FTE (i), as the electron beam does not show a classical mixing of the high-energy magnetospheric population with the magnetosheath electrons. We suggest that the high-energy electrons became less and less evident after the spacecraft crossed the magnetopause into the magnetosheath because the spacecraft was crossing into older opened flux tubes from which the magnetospheric electrons had already escaped. Additionally, for the middle period shown in Fig. 3, between 10:20 and 11:00 UT, the TC-1 spacecraft observed a very turbulent magnetic field. Analysis of the electron pitch-angle spectrogram (not shown here) for this period revealed small-scale sub-structure with many short-lived electron populations with mostly 90° fluxes. We note that these observations are very similar to the observations of reconnection inside the turbulent magnetosheath presented by Retino et al. (2007). We suggest, as the TC-1 spacecraft lies deeper in the magnetosheath during this interval, that the observed magnetic field fluctuations and electron small-scale sub-structure are not associated with FTEs, but with more complex processes which are outside the scope of this paper.
In Fig. 6 we present (a) the magnetic field boundary normal component B N (same as Fig. 3c), (b) the field magnitude, (c) the number density and (d) the velocity (projected into LMN coordinates) of H + from the CIS instrument onboard Cluster S/C 1, together with the PEACE electron spectrograms in the (e) anti-parallel, (f) perpendicular, and (g) parallel directions, as observed by Cluster S/C 1, in the same way as Fig. 5. There are associated FTE signatures in the Cluster magnetic field data. As in Fig. 3, we indicate the two FTEs observed by Cluster by blue dashed vertical lines. All FTEs show standard polarity (positive/negative) bipolar signatures in the B N component (see Fig. 6a) with enhanced |B|, enhanced number density of H + (decreased number density of H + in the magnetosheath FTE) with fast ion flow in the L and M directions (see Fig. 6c and d), and well-defined electron beams, in which the plasma is mainly focused in the parallel or anti-parallel directions, with a clear mixing of magnetosheath and magnetospheric plasma populations in the electron spectrograms for the first FTE and an accelerated magnetosheath population for the second FTE (see Fig. 6e-i). This suggests that Cluster observed northward-moving flux tubes, which are connected to the northern cusp and were generated by low-latitude magnetic reconnection. These FTE signatures become increasingly distinct and of larger size as the spacecraft cross the magnetopause into the magnetosheath. Additionally, for the earlier period shown in Fig. 3, between 09:00-10:00 UT, while Cluster was grazing the poleward cusp boundary, there appear to be a large number of often non-standard FTE-like signatures. These therefore might represent a range of flux tube sizes (as discussed in Sect. 3). The electron spectrograms of Cluster S/C 1 also show clear mixing of magnetosheath and magnetospheric plasma population signatures suggestive of the reconnection features expected for each FTE.
EISCAT measurements
We now briefly examine the ionospheric dynamics which resulted from the FTEs discussed above.
Data from the two-dish incoherent scatter radar system near Longyearbyen, part of the EISCAT Svalbard Radar (ESR) (Wannberg et al., 1997), are used here. One dish (a 32 m parabolic antenna) is fully steerable towards any direction, and the other (a 42 m parabolic antenna) is fixed, pointing along the local magnetic field line. On 11 February 2004, the 32 m dish was pointing nearly towards geomagnetic north (azimuth 336°), at low elevation (30°). The radars used alternating code measurement techniques to provide profiles of electron density, electron and ion temperature, and ion velocity along the line-of-sight.
During the interval of interest, the ionosphere above Svalbard was magnetically conjugate to Cluster, and the radar measurements suggest that it was subject to impulsive precipitation associated with FTE-related bursts of magnetopause reconnection. Figure 7, for example, presents 2-min post-integrations of the ESR radar observations between 09:00 and 12:00 UT (the same interval as in Fig. 3). Figure 7a presents observations of the electron density, the electron temperature, the ion temperature, and the line-of-sight ion velocity from the low-elevation, northward-directed ESR dish (azimuth 336°, elevation 30°). The post-integrated data are shown as a function of magnetic latitude between 76° and 84°, and the observations cover the F-region altitude range from about 100 to 520 km. The density measurements indicated a series of high-density plasma regions moving along the beam to higher latitudes (highlighted by the black solid lines in the first panel of Fig. 7a), so-called poleward-moving plasma concentration enhancements, some of which could correspond, one-to-one, to the FTEs observed by TC-1 or Cluster. For example, the one highlighted by the red arrow could correspond to FTE 2 observed by Cluster. These events appeared quasi-periodically with a period of about 10 min, which is roughly consistent with the period of the FTEs observed by the Cluster and TC-1 spacecraft. It is worth noting that the density measurements indicated a density of 3×10 11 m −3 between 79° and 82° geomagnetic latitude (see the first panel of Fig. 7a) in the events between about 10:10 and 11:10 UT and after about 11:42 UT. The electron temperature decreased in these events, highlighted by the black dashed bias lines (see the second panel of Fig. 7a). These observations suggest that the transient reconnection (FTE) leads to the erosion of the open-closed boundary (OCB) equatorward to a region of higher density plasma (the solar EUV ionized plasma), followed by poleward relaxation of that boundary carrying with it the high-density plasma accelerated into the polar flow (Lockwood and Carlson, 1992; Zhang et al., 2011). The plasma flow has a poleward component for most of the time between 09:00 UT and 12:00 UT (see the fourth panel of Fig. 7a, where positive represents flow away from the radar), except for some brief equatorward incursions before about 09:40 UT and after about 10:44 UT, which might be caused by the low or decreasing IMF clock angle (the dominant component of the IMF changing from negative B Z to negative B Y). However, these measurements did not show clear poleward-moving channel-like structures accompanied by polar cap patches. This might be because the ESR is located in the polar cap, where it observes the combined effect of the tailward motion of the different separate flux tubes (FTEs) with different velocities. With the assumption that their poleward phase motion was roughly constant in speed (Lockwood et al., 2001), the black straight lines can be mapped back to a magnetic latitude of about 76°. Figure 7b presents the same parameters from the field-aligned ESR dish (azimuth 181°, elevation 81.6°), as a function of altitude between 100 and 800 km. The electron density is high and well structured in the F-region, whereas the E-region (between 95 km and 120 km) looks empty. This again suggests the precipitation of low-energy electrons. Low-energy electrons are thought to be effective in heating the electron population in the ionosphere and, as a consequence, in triggering ion outflow (e.g. Pitout et al., 2002). The ion temperature shows many structures (see the third panel of Fig. 7b) and is a good indicator of the electric field (e.g. Pitout et al., 2002). There are many ion temperature/electric field enhancements which could reflect the large number of FTEs observed by Cluster. Due to the tailward motion of the large number of separate flux tubes (FTEs) with different velocities produced by magnetic reconnection with a high reconnection rate, as suggested by the Cluster observations, it is difficult to determine the direct ionospheric flow response to each FTE from the ESR radar data.
SuperDARN observations
The Co-operative UK Twin Located Auroral Sounding System (CUTLASS) (Milan et al., 1997; Lester et al., 2004) is the easternmost pair of SuperDARN radars (Greenwald et al., 1995; Chisham et al., 2007) in the Northern Hemisphere. The SuperDARN radars normally measure the line-of-sight (l-o-s) Doppler velocity, spectral width, and backscatter power from ionospheric plasma irregularities in 16 adjacent beam directions separated by 3.24° in azimuth. A full scan is, therefore, completed in either 2 min or 1 min, depending on the integration period along each beam, and covers 52° in azimuth and over 3000 km in range with a resolution of 45 km. The two CUTLASS radars have been upgraded such that two experimental modes can be run simultaneously, the so-called stereo capability (Lester et al., 2004). During the interval of interest, these two radars were running an experimental mode on channel B, described in detail by Karhunen et al. (2006), with the normal scan, described above, on channel A. Only data from channel A are discussed in this paper. One of the CUTLASS radars, located at Hankasalmi, Finland (62.3° N, 26.6° E), has a fan-shaped field of view covering magnetic latitudes between 65° and 90°, including the directions of the ESR radars near Longyearbyen on the Svalbard archipelago (see Fig. 2c), just discussed. The Kerguelen SuperDARN radar is located on Kerguelen Island (49.35° S, 70.26° E) in the Antarctic and looks towards the magnetic south pole over a section of ionosphere that includes the east Antarctic ice cap and the Southern Ocean. The backscatter power, line-of-sight (l-o-s) Doppler velocity, and spectral width observed by the CUTLASS Finland radar in the Northern Hemisphere and the Kerguelen radar in the Southern Hemisphere can be used to examine the conjugate ionospheric response to the FTEs measured by Cluster and TC-1.
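The scan geometry quoted above implies the following simple numbers; this is a small illustrative calculation, and the per-beam integration time used here is an assumption rather than a value from the paper.

```python
# Simple sketch of the SuperDARN scan geometry described above:
# 16 beams separated by 3.24 deg, 45 km range gates out to ~3000 km.
n_beams, beam_sep_deg = 16, 3.24
gate_len_km, max_range_km = 45, 3000

azimuth_coverage = n_beams * beam_sep_deg     # ~52 deg, as quoted in the text
n_gates = max_range_km // gate_len_km         # ~66 range gates
scan_time_s = n_beams * 7                     # assumed ~7 s per beam -> roughly a 2-min scan

print(azimuth_coverage, n_gates, scan_time_s)
```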
Figure 8 shows the backscatter power, l-o-s Doppler velocity, and Doppler spectral width measured by the (a) CUTLASS Finland SuperDARN radar along beam 8 and (b) Kerguelen SuperDARN radar along beam 12 during the period 09:00-12:00 UT on 11 February 2004, respectively. Poleward-moving regions of backscatter or enhanced backscatter power, known as "poleward-moving radar auroral forms" (PMRAFs), the radar counterpart of "poleward-moving auroral forms" (PMAFs), are often observed and are widely accepted to be the auroral signature of FTEs (e.g. Sandholt et al., 1990; Milan et al., 2000; Wild et al., 2001). Pinnock et al. (1995) and Provan et al. (1998) described the radar signatures of FTEs as "pulsed ionospheric flows" (PIFs), i.e. poleward-moving regions of enhanced convection flow in the dayside auroral zone. Depending on the exact nature of the convection response to transient reconnection, either PMRAFs (Milan et al., 2000) or PIFs (Provan et al., 1998), or both (Wild et al., 2001, 2003), can be observed by SuperDARN radars (Wild et al., 2001). In the present case, only PMRAFs were observed. During the interval of interest, the ionospheric footprints of Cluster and TC-1 along the magnetic field line (see Fig. 2c and d) are located in the field-of-view of the CUTLASS Finland radar in the Northern Hemisphere and the Kerguelen radar in the Southern Hemisphere. Therefore, it is interesting to examine the CUTLASS and Kerguelen radar observations to check the conjugate ionospheric response to the FTEs observed by Cluster and TC-1.
In Fig. 8a and b, the backscatter power shows that there are a large number of clear PMRAFs in beam 8 of the Finland radar and a few clear PMRAFs in beam 12 of the Kerguelen radar, marked by the black dashed bias lines (see the first panel of Fig. 8a and b). Some of these could correspond, one-to-one, to the FTEs observed by TC-1 and/or Cluster; for example, the PMRAFs highlighted by the red arrows in the first panel of Fig. 8a and b could correspond to FTE i observed by TC-1 and FTE 2 measured by Cluster, respectively. The l-o-s velocity suggests that the ionospheric convection flows are almost all anti-sunward (see the second panel of Fig. 8a and b), but there is a lack of clear PIFs. This is roughly consistent with the results reported by Milan et al. (2000) and also might be because of the combined effect of the tailward motion of the different separate flux tubes (FTEs) with different velocities. The wide values of the spectral width show clear equatorward-extending cusp features, observed by the Finland radar between about 77° and 80° at the beginning and about 74° and 79° at the end (see the third panel of Fig. 8a) and observed by the Kerguelen radar between −80° and −84° at the beginning and about −78° and −82° at the end (see the third panel of Fig. 8b), which can be taken as further evidence of the FTEs resulting in a strong ionospheric response in the cusp region. In comparison to the Finland radar measurements in the Northern Hemisphere, however, the echoes received at the Kerguelen radar were weaker (lower received signal power) and fewer PMRAFs were observed by the Kerguelen radar. This suggests that the nature of the backscatter observed in the northern and southern conjugate ionospheres was markedly different, which is consistent with the results reported by Wild et al. (2003). The open-closed boundary (OCB), shown by the black line in the third panel of Fig. 8a and b, corresponds to the Doppler spectral width boundary (Baker et al., 1995, 1997; Chisham et al., 2001, 2005). The OCB can be seen to have extended progressively equatorward, as the polar cap expanded due to magnetopause reconnection.
In situ tracking
Since all four Cluster spacecraft sample the FTEs, we may apply four-spacecraft techniques (timing analysis (Russell et al., 1983; Harvey, 1998; Dunlop et al., 2001) and the Spatio-temporal Difference method (Shi et al., 2006)) to calculate the motion and scale of the FTEs observed by Cluster, in each case using the tetrahedral spacecraft configuration. The results, briefly summarized in Table 1, are broadly similar between the two methods and show that the motion of the two FTEs at Cluster (the unit vectors n GSM in the third row of Table 1 represent the direction of motion of the FTEs in GSM coordinates) is mainly northeastward. The speeds of these two FTEs are 100 km s −1 and 116 km s −1, respectively. These motions were also checked using deHoffmann-Teller (deH-T) analysis, which gives broadly similar directions and magnitudes. Assuming a cylindrical flux tube shape and using D FTE = V Δt, the velocity and the duration of the whole bipolar signature (∼43 s and 80 s) surrounding these two FTEs give estimated (maximum) flux tube sizes of 0.79 R E and 1.28 R E. For TC-1, there are no ion data at this time, so we may not directly estimate the flux tube speeds via deH-T analysis (but see later for the discussion of Table 1 showing the TC-1 FTEs).
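A minimal sketch of the timing-analysis step is given below; this is an assumed implementation of the standard approach (arrival times of a planar structure at the four spacecraft give its normal and speed via the slowness vector), and the positions and delays used are hypothetical, not the Cluster values.

```python
# Four-spacecraft timing analysis sketch: relative positions and arrival times
# of a planar structure yield the slowness vector m = n / V (Russell et al., 1983).
import numpy as np

def timing_analysis(positions_km, times_s):
    """positions_km: (4, 3) spacecraft positions; times_s: (4,) arrival times."""
    dr = positions_km[1:] - positions_km[0]    # (3, 3) separation vectors
    dt = times_s[1:] - times_s[0]              # (3,) time delays
    m = np.linalg.solve(dr, dt)                # slowness vector, s/km
    speed = 1.0 / np.linalg.norm(m)            # km/s
    normal = m * speed                         # unit vector along the motion
    return normal, speed

# Hypothetical tetrahedron (~300 km separations, as in February 2004) and delays.
pos = np.array([[0, 0, 0], [300, 0, 0], [0, 300, 0], [0, 0, 300]], dtype=float)
t = np.array([0.0, 1.5, 1.0, 2.0])
n, v = timing_analysis(pos, t)
print(n, v)

# Maximum flux-tube size then follows from D = V * dt of the bipolar signature.
R_E = 6371.0
print(v * 43 / R_E)   # hypothetical 43 s bipolar signature, in Earth radii
```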
Although the effects of magnetic field draping and the extension of the reconnection sites might be more complex than the prediction from the Cooling model (e.g. Shepherd et al., 1999), which examines the motion of reconnected magnetic flux tubes over the surface of the magnetopause (Cooling et al., 2001), we apply this model here in order to place the motion of the observed FTEs in a global context. We show the expected velocities of the flux tubes near the spacecraft corresponding to the two FTEs observed by Cluster, and the angle between the expected (Cooling) velocities and the Cluster observations, in Table 1. These results demonstrate that the expected motion is mainly northeastward. The speeds of the expected flux tubes are ∼358 km s −1 and 334 km s −1, and the angles are all less than 30°. This suggests that the direction of motion of the expected flux tubes is relatively consistent with that of the FTEs observed by Cluster, but the predicted speeds are a factor of two to three times higher than the Cluster observations, which is roughly consistent with the statistical results of Fear et al. (2007). This might be caused by the following two reasons. Firstly, the velocity derived from the four-spacecraft techniques is the velocity of the FTE perpendicular to the flux tube, and the axis is assumed to extend infinitely, so motion along the FTE axis cannot be estimated. Secondly, the motion of a flux tube branch at positions further from the point at which it threads the magnetopause may be more influenced by local magnetosheath flows (Fear et al., 2007). The expected (Cooling) velocities of the flux tubes corresponding to the TC-1 observations are also presented in Table 1, and show that the expected motion is southwestward. The speeds of the expected flux tubes are ∼140 km s −1 and 167 km s −1, respectively.
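The comparison between predicted and observed flux-tube velocities reduces to the angle between two vectors and their speed ratio, as sketched below; the vectors used are hypothetical, and the real values would be taken from Table 1.

```python
# Sketch of the model-versus-observation comparison discussed above: the angle
# between the Cooling-model velocity and the timing-analysis velocity, plus the
# speed ratio.
import numpy as np

def compare_velocities(v_model, v_observed):
    cosang = np.dot(v_model, v_observed) / (np.linalg.norm(v_model) * np.linalg.norm(v_observed))
    angle_deg = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    ratio = np.linalg.norm(v_model) / np.linalg.norm(v_observed)
    return angle_deg, ratio

# Hypothetical GSM velocity vectors (km/s).
v_cooling = np.array([100.0, 200.0, 270.0])
v_cluster = np.array([30.0, 60.0, 75.0])
print(compare_velocities(v_cooling, v_cluster))  # angle well under 30 deg, ratio ~3
```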
Ionospheric convection
The motion of individual flux tubes may be expected to correspond to the local motion in the ionospheric flow cells at their footprints, and it is interesting to briefly examine, in this context, the global ionospheric convection observed by the SuperDARN radars in both hemispheres. This will help us to understand how the high-latitude ionospheric convection responds to a change in reconnection rate and/or location, such as occurs when a change in the IMF orientation impacts the magnetopause. We therefore present the two-minute averaged dayside ionospheric convection patterns observed by the SuperDARN radars in Fig. 10. An increasing clock angle should result in an ionospheric convection flow enhancement (Lockwood et al., 2003), and the observed flow cells also show sensitivity to the IMF orientation in this sequence.
The SuperDARN radars also provide a unique way to directly monitor two-dimensional convection in the high-latitude ionosphere on a global scale. We therefore also present the ionospheric convection patterns with the map potential plots, derived by using the technique of Ruohoniemi and Baker (1998), observed by nine of the Northern Hemisphere radars and four of the Southern Hemisphere radars during the interval of interest.
The panels in Fig. 10 show successive flow maps for the Northern and Southern Hemispheres from 09:22 to 09:46 UT (a1-7 and b1-7) and from 11:18 to 11:48 UT (d1-6 and e1-6), chosen to correspond closely to the highlighted FTE i observed by TC-1 and FTE 2 measured by Cluster, respectively. The dashed concentric circles indicate lines of constant magnetic latitude in 10° increments, and noon is located at the top of each plot. The cross-hair axis inset at the top right of each plot shows the IMF B Y and B Z components as a red arrow, where the time delay from ACE to the ionosphere is also indicated. The red circle highlights the region of velocity enhancement, as indicated by increased lengths of the colored drift vectors. The red star and blue circle represent the ionospheric footprints of TC-1 (in Fig. 10a1-7 and Fig. 10b1-7) and of Cluster S/C 1 (in Fig. 10d1-6), respectively. The fields-of-view of the CUTLASS Finland radar (HAN) and Kerguelen radar (KER) are presented as fans in Fig. 10a1 (d1) and b1 (e1), respectively. The violet line in each fan represents the open-closed boundary (OCB), marked by the Doppler spectral width boundary from each beam (Baker et al., 1995, 1997; Chisham et al., 2001, 2005). We note that the footprints of TC-1 and Cluster lie slightly equatorward of the OCB and are outside the flow burst region. This is because the TC-1 or Cluster position lies on magnetospheric field lines computed from the Tsyganenko '96 model, rather than at the boundary, and therefore the computed footprints lie slightly equatorward of the likely true locations. These points suggest that these FTEs have motions which reflect the likely flow directions at the respective poleward positions of their footprints (e.g. the position at the violet circle in Fig. 10a and b and the violet rhombus in Fig. 10d and e). The convection cell pattern implies a relatively direct global context for the evolution of the sampled FTEs. The time series of ionospheric flow velocity, which are extracted from the convection maps at the violet circle in Fig. 10a and b and the violet rhombus in Fig. 10d and e, are presented in Fig. 10c and f, respectively, where the time is taken as the middle time of each pattern.
From Fig. 10c and f, we find very clear velocity enhancements from 09:25 to 09:43 UT and from 11:23 to 11:45 UT for the flow at the violet circle (or rhombus) in the Northern Hemisphere, and from 09:23 to 09:45 UT and from 11:19 to 11:37 UT for the flow at the violet circle (or rhombus) in the Southern Hemisphere. These demonstrate clear velocity enhancements in the near-noon, high-latitude sector of the morning cell or afternoon cell in the Northern and Southern Hemisphere in Fig. 10 (a2-6 and b2-6, morning cell; d2-5 and e2-4, afternoon cell). The velocity enhancements lasted about 18-22 min for both the FTE i observed by TC-1 and the FTE 2 measured by Cluster, which might suggest that the evolution time of these FTEs is about 18-22 min from their origin on the magnetopause (at the reconnection site) to their addition to the magnetotail lobe. This is roughly consistent with the expected ionospheric flow excitation and decay time scale of 10-15 min (Cowley and Lockwood, 1992). These enhancements correspond to the ionospheric responses to the FTE i observed by TC-1 and the FTE 2 measured by Cluster. Near the positions of the violet circle, the drift vectors point mainly westward (eastward) in the Southern (Northern) Hemisphere, in good agreement with the expectations from the Cooling analysis; near the violet rhombus, the drift vectors point mainly northward to northeastward in the Northern Hemisphere, also in good agreement with the expectations from the Cooling analysis and the Cluster observations. The correspondence with the convection signatures confirms that the individual flux tube movements are consistent with the anti-sunward ionospheric convection in the cusp regions of both hemispheres, and therefore with the two-dimensional (2-D) reconnection pulse model (Saunders et al., 1983; Southwood et al., 1988), which explains the bulge as the effect of a pulse of enhanced reconnection rate at an X-line whose length is not specified and which allows for longitudinal event elongation. As this model predicts, the footprint of a newly opened flux tube moves along the streamlines in the distorted "two-cell" convection pattern, and the ionospheric signatures of these events show that patches of newly opened flux, produced by successive reconnection pulses, are appended to each other in a contiguous manner, causing discontinuous steps in the cusp ion dispersion on the boundaries between poleward-moving events (Lockwood and Hapgood, 1998). This correspondence (with the conjugate response) confirms the interpretation from the analysis in Sect. 2, despite the lack of direct one-to-one identifications with the ionospheric FTE signatures. These comparisons further suggest the formation of an extensive, low-latitude merging line, with a reconnection geometry reflected in the observed FTE motion.
Comparing the ionospheric convection in both hemispheres, we find that the velocity enhancements start at 09:25 and 11:23 UT in the Northern Hemisphere and at 09:23 and 11:19 UT in the Southern Hemisphere for the FTE i observed by TC-1 and the FTE 2 measured by Cluster, respectively. This suggests that the ionospheric response in the Northern Hemisphere lags that in the Southern Hemisphere by 2 min for the FTE i observed by TC-1 and by 4 min for the FTE 2 measured by Cluster. Does this suggest the reconnection site is located southward of the subsolar region? Confirming this would require better data coverage in the Southern Hemisphere; alternatively, the lag might result from different conductivities in the northern and southern high-latitude ionospheres. Whilst the intensities of the ionospheric convection are much stronger in the Southern Hemisphere than in the Northern Hemisphere for the FTE i under the conditions of smaller IMF clock angles (∼94.4°, see Table 2 and Fig. 10a), they are stronger in the Northern Hemisphere than in the Southern Hemisphere for the FTE 2 under the conditions of larger IMF clock angles (∼154.4°, see Table 2 and Fig. 10b). This might obscure the development and fading of the velocity enhancement in the ionosphere of the more intense hemisphere because of the stronger background, and it suggests that the asymmetry of the convection intensities between the Northern and Southern Hemispheres is IMF clock angle dependent. The convection signatures therefore show a good response to the IMF conditions, resulting in clear anti-sunward ionospheric convection in the cusp regions of both hemispheres, consistent with the onset of low-latitude reconnection and a predominantly eastward IMF. The velocity enhancements of the ionospheric convection corresponding to the other FTEs show similar characteristics, although they are not as strong or as clear.
It is worth noting that the implied evolution times of these FTEs differ from the results reported in our previous paper (Zhang et al., 2008), but the response times of these FTEs are similar. That paper showed that the implied evolution time of the FTEs was about 4-6 min from their origin on the magnetopause (at the reconnection site) to their addition to the polar cap (the magnetotail lobe), and that the ionospheric response time in the Southern Hemisphere was 2-6 min longer than that in the Northern Hemisphere for the events on 1 April 2004. This might be because the dayside magnetopause reconnection occurred in different hemispheres and the FTEs had different speeds under the different IMF and solar wind conditions (see Table 2). From Table 2, we find that for the two FTEs on 11 February 2004 the IMF had negative B_Y and B_Z with a positive B_X component, giving a negative elevation angle (elevation angle = (B_X/|B_X|) tan⁻¹(|B_X|/B_Z)) and a smaller cone angle (cone angle = cos⁻¹(B_X/|B|)); for the two FTEs on 1 April 2004 (reported by Zhang et al., 2008), the IMF had negative B_X and B_Z with a positive B_Y component, giving a positive elevation angle and a larger cone angle. Considering the topology of the Earth's magnetic field during each event, the negative (positive) elevation angle and/or smaller (larger) cone angle with a negative (positive) B_Y component may suggest that the reconnection site is located southward (northward) of the subsolar region. This is because the first contact point between the IMF and the Earth's magnetic field at the dayside magnetopause (the point of largest shear angle) will be located southward (northward) of the subsolar region when the IMF B_X component is positive (negative) with a negative B_Z. The solar wind speeds are smaller for the two events on 11 February 2004 than for the two events on 1 April 2004, which may explain why the implied evolution times for the FTEs on 11 February 2004 are longer than those for the FTEs on 1 April 2004.
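For reference, the sketch below implements the three IMF orientation angles used in this discussion (clock, elevation, and cone angles) directly from the definitions quoted above; the input components are hypothetical values, not the measured IMF.

```python
import numpy as np

def imf_angles(bx, by, bz):
    """IMF clock, elevation, and cone angles (degrees) from GSM components (nT)."""
    # Clock angle: tan^-1(|B_Y|/B_Z) for B_Z > 0 and pi - tan^-1(|B_Y|/|B_Z|)
    # for B_Z < 0; atan2 reproduces this piecewise definition in one call.
    clock = np.degrees(np.arctan2(abs(by), bz))
    # Elevation angle = (B_X/|B_X|) tan^-1(|B_X|/B_Z): negative for a positive
    # B_X with a negative B_Z, as for the 11 February 2004 events.
    elevation = np.sign(bx) * np.degrees(np.arctan(abs(bx) / bz))
    # Cone angle = cos^-1(B_X/|B|).
    cone = np.degrees(np.arccos(bx / np.linalg.norm([bx, by, bz])))
    return clock, elevation, cone

# Hypothetical components (positive B_X, negative B_Y and B_Z):
print(imf_angles(3.0, -4.0, -2.0))
```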
Summary
In summary, we have presented the features of two FTEs observed by TC-1 and two FTEs measured by Cluster, while the Cluster array was near the high-latitude magnetopause and the TC-1 spacecraft was near the subsolar magnetopause. Analyses of the ionospheric plasma flow and convection, observed simultaneously by the ESR and the CUTLASS Finland and Kerguelen SuperDARN radars and conjugately by the wider SuperDARN network, are also presented and support the in situ observations well. Using the Cluster four-spacecraft observations, we calculated the velocity and the size of the flux tubes. The inferred northward (southward) reconnected flux tubes for these FTEs are shown to move northward (southward) or northeastward (southwestward) and tailward, with either dominant northward (southward) or dominant eastward (westward) velocity components under stable IMF and high clock angle conditions. Under unstable IMF and low clock angle conditions the motion is more eastward (westward). The FTE motion is consistent with the expected motion of reconnected magnetic flux tubes over the surface of the magnetopause, arising from a predominantly subsolar reconnection site under the prevailing IMF and solar wind conditions. The simultaneous ESR measurements recorded poleward flow, and the CUTLASS Finland and Kerguelen SuperDARN radar observations showed "poleward-moving radar auroral forms" (PMRAFs), indicative of bursty reconnection at the subsolar region of the magnetopause. The simultaneous and conjugate SuperDARN observations show that the flux tube motion is consistent with the global conjugate ionospheric convection in both hemispheres. The flux tube footprints map to clear positions in a predominantly two-cell convection pattern, which are temporally correlated with the local ionospheric flow enhancements at these positions. The durations of the velocity enhancements in both hemispheres might imply that the evolution time of the FTEs is about 18-22 min from their origin on the magnetopause (at the reconnection site) to their addition to the magnetotail lobe. However, the ionospheric response time in the Northern Hemisphere is 2 and 4 min longer than the response time in the Southern Hemisphere for the FTE i observed by TC-1 and the FTE 2 measured by Cluster, respectively.
Figure 1
Figure 1 presents an overview of the solar wind and IMF conditions measured by the ACE satellite. The parameters shown are the IMF components (a) B_X, (b) B_Y, (c) B_Z, (d) the IMF clock angle, (e) the solar wind plasma number density, (f) the solar wind speed, and (g) the solar wind dynamic pressure. The data have been lagged by 69 min before 10:00 UT (lagged time) and by 66 min after 10:00 UT (these time delays were calculated using the method of Liou et al., 1998) in order to take into account the propagation of solar wind/IMF structures from the spacecraft to the subsolar magnetopause. The ACE spacecraft was located at about (221.2, −32.6, 9.5) R_E in the Geocentric Solar Magnetic (GSM) coordinate system at about 10:34 UT (lagged time). During the whole interval, the IMF B_Z component was near zero before about 09:30 UT (lagged time) and always negative after 09:30 UT, varying between −8.2 and −0.4 nT (see Fig. 1c), while the B_Y component was negative with a short positive excursion (see Fig. 1b). The IMF clock angle (if B_Z > 0, clock angle = tan⁻¹(|B_Y|/B_Z); if B_Z < 0, clock angle = π − tan⁻¹(|B_Y|/|B_Z|)) therefore varied between 90° and 180° during this period (see Fig. 1d). The solar wind density increased from 7 to 19 cm⁻³ over the interval of interest (see Fig. 1e), whilst the solar wind velocity varied between 370 and 387 km s⁻¹ (see Fig. 1f), resulting in a prevailing solar wind dynamic pressure of ∼1.8-4.5 nPa (see Fig. 1g).
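As a rough cross-check of these lags, a purely ballistic estimate (upstream distance divided by the solar wind speed) can be computed as below. This is only a sketch: the paper uses the more careful method of Liou et al. (1998), and the assumed subsolar magnetopause distance of 10 R_E is a hypothetical round value.

```python
R_E_KM = 6371.0  # Earth radius in km

def ballistic_delay_minutes(x_ace_re, v_sw_km_s, x_magnetopause_re=10.0):
    """First-order solar wind propagation delay, assuming radial motion
    at a constant speed from ACE to the subsolar magnetopause."""
    distance_km = (x_ace_re - x_magnetopause_re) * R_E_KM
    return distance_km / v_sw_km_s / 60.0

# ACE at X ~ 221.2 R_E and V_sw ~ 380 km/s, as quoted in the text:
print(f"{ballistic_delay_minutes(221.2, 380.0):.0f} min")  # ~59 min
```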
Fig. 1. An overview of the solar wind and IMF conditions measured by the ACE satellite. Parameters shown are the IMF components (a) B_X, (b) B_Y, (c) B_Z, (d) the IMF clock angle, (e) the solar wind plasma number density, (f) the solar wind speed, and (g) the solar wind dynamic pressure.
Fig. 2. All Cluster and TC-1 spacecraft tracks in the X-Z (a) and X-Y (b) planes in GSM coordinates, together with the ionospheric footprints of Cluster S/C 1 (blue line) and the TC-1 spacecraft (red line) on maps of the Northern (c) and Southern (d) Hemispheres, between 09:00 and 13:00 UT on 11 February 2004. The orbit plot also shows the configuration of the Cluster spacecraft array as a tetrahedron (size scaled up by a factor of 20). Model geomagnetic field lines are shown for the projection into the X-Z plane, and cuts through the bow shock and magnetopause are shown for the X-Y plane. The X-Z plane field lines and the ionospheric footprints of the spacecraft are drawn from the Tsyganenko '96 model using the real input parameters. The fields-of-view of the CUTLASS Finland radar and the Kerguelen radar are presented as fans in panels (c) and (d), respectively, with the beams employed in this study indicated by red dashed lines. The poleward-looking low-elevation beam (32 m dish) of the ESR (between 76 and 85° magnetic latitude) is indicated by the solid green line. The red "⊕" marks the magnetic pole.
Fig. 4. Hodograms of the magnetic field in LMN coordinates for the period 09:43:01-09:45:26 UT observed by the TC-1 spacecraft and for the period 11:32:43-11:33:39 UT measured by Cluster 1. Left and right panels show L-M and L-N representations. The black dot (marked "S") indicates the start point in each panel.
Fig. 5. Zoom-in of the magnetic field boundary-normal component B_N (same as Fig. 3c) and the field magnitude, together with PEACE electron spectrograms in the anti-parallel, perpendicular, and parallel directions, observed by TC-1.
Fig. 6. Zoom-in of (a) the magnetic field boundary-normal component B_N (same as Fig. 3c), (b) the field magnitude, (c) the number density and (d) the velocity (projected into LMN coordinates) of H+ from the CIS instrument onboard Cluster S/C 1, together with PEACE electron spectrograms in the (e) anti-parallel, (f) perpendicular, and (g) parallel directions, observed by Cluster S/C 1.
Fig. 7. Plasma parameters observed by the northward-directed ESR dish and the field-aligned dish on 11 February 2004. From top to bottom: N_e, electron density; T_e, electron temperature; T_i, ion temperature; and line-of-sight velocity, V_i (positive away from the radar), as a function of time and magnetic latitude (abbreviated "Lat_MAG" in a) or altitude (abbreviated "Alt" in b).
Fig. 9. Motion of reconnected flux tubes for low-latitude reconnection under IMF clock angles of (a) 85.63° and (b) 148.13°, respectively, obtained by running the Cooling model. The reconnection conditions are satisfied along a merging line, the projection of which is indicated by the black diagonal line in the middle of the figure. The solid lines indicate the trajectories of tubes which connect to the northern cusp, and the dashed lines indicate those which connect to the southern cusp. The positions of Cluster and TC-1 are represented by the blue and red stars, respectively.
Fig. 10. Streamlines and vectors of the dayside ionospheric flows derived from the Northern (a1-7 and d1-6) and Southern (b1-7 and e1-6) Hemisphere SuperDARN velocity measurements, shown on geomagnetic grids and obtained from the "map potential" algorithm. Maps are shown from 09:22 to 09:46 UT (a, b) and from 11:18 to 11:48 UT (d, e). The fields-of-view of the CUTLASS Finland radar (HAN) and the Kerguelen radar (KER) are presented as fans in panels a1 (d1) and b1 (e1), respectively. The direction and magnitude of the lagged IMF are indicated at the upper right-hand corner of each map. The red star and the blue circle represent the ionospheric footprints of TC-1 (in panels a1-7 and b1-7) and Cluster S/C 1 (in panels d1-6), respectively. The time series of ionospheric flow velocity, extracted from the convection maps at the violet circle in panels (a) and (b) and at the violet rhombus in panels (d) and (e), are presented in panels (c) and (f), respectively.
The projection of the merging line is indicated by the black diagonal line in the middle of the figure, where its length has been limited to an arbitrary maximum of 10 R_E; the model allows the position relative to the subsolar point to be chosen. Pairs of open reconnected flux tubes are assumed to be initiated along the merging line and are followed over a period of 600 s, resulting in the fan of motion tracks shown. The trajectories of flux tubes which connect to the northern cusp are indicated by the solid lines, and the dashed lines indicate those which connect to the southern cusp. The positions of TC-1 and Cluster S/C 1 are represented by the red and blue stars in Fig. 9a and b, respectively. It is clear from the changing IMF direction that both spacecraft may observe a variety of FTE motions depending on the different IMF conditions, which agrees well with the results of Dunlop et al. (2005) and Zhang et al. (2008).
Table 2. The IMF and solar wind conditions at the core times of the FTE i observed by TC-1 and the FTE 2 measured by Cluster on 11 February 2004, and of the 12:31 UT FTE and 12:51 UT FTE on 1 April 2004 (reported by Zhang et al., 2008).
Perception Based Determinants of Mobility Dilemma in Ilorin Metropolis
Usually, planning proceeds from issue identification and evaluation, directed at addressing recognized problems. As straightforward as it sounds, this can be difficult to achieve in settings where outdated master plans and continuous, piecemeal, poorly regulated development are the order of the day. Even worse is the near-total lack of baseline data upon which rational decisions about issues such as mobility can be premised. The result is a scenario in which holistic evaluations of city mobility needs and requirements are sometimes entirely jeopardized, leaving city managers with very little information to work with. The consequence is a distress-filled mobility environment, in which determinants of dilemma may be tacitly admitted but are generally not properly determined. This work uses travelers' perceptions to rank and classify the effects of contextually relevant mobility-influencing attributes of Ilorin metropolis, as a basis for isolating determinants of mobility dilemma in the city. The approach is targeted at providing city managers with a simple alternative way of evaluating needs, based upon user-identified preferences, such that determinants of mobility dilemma become implicit enough to be utilized for decision making and planning. The work involved a survey of randomly selected respondents from officially designated spatial subunits of Ilorin metropolis. It became apparent that attributes of public modes, development characteristics, and other operational, economic, and safety attributions of the city's mobility environment were influencing respondents' perception of mobility negatively, mainly as an offshoot of inadequate planning and regulation enforcement in the city. It is suggested that parsimonious techniques such as the one utilized in this work be employed to bridge the gap left by comprehensive plans where the wherewithal is lacking.
Introduction
Since Ravenstein's 1885 work [1], the movement of people has been an active subject of research in the social and geographical sciences. It has been shown in quantitative studies, and described in a broad range of representations, that a close relationship exists between mobility, distance, and the environment within which it happens [2]. No doubt, the efficacy of complex quantitative mobility evaluation in divulging useful mobility underpinnings has been proven. However, their lack of altruistic and realistic connections to mobility complexities continues to be a demerit. The increasing complexity [3] [4] and high technical requirements associated with running simulations with contemporary tools bring about drawbacks in data-deficient situations characteristic of developing nations such as Nigeria, a drawback premised on data unavailability, "dirtiness", or scantiness. The situation is further compounded because the temporal and economic costs of gathering basic data for use in dedicated databases present other challenges in these settings. This is particularly so because the average data-deficient city usually lacks the resources to embark on full-scale, world-class data acquisition programmes or projects. Consequently, how mobility goes on in the data-deficient cities of developing nations is usually inadequately described, especially at the city and sub-city levels. This trend perpetually keeps the mobility environment cumbersome and full of myriad issues that result in different levels of mobility dilemma for different strata of society, typically underscored by socio-economic, physical, environmental, and political impulses that in turn help to aggravate the severity of the mobility dilemma an individual experiences.
Therefore, it becomes pertinent to seek parsimonious but reliable approaches to assessing or describing how mobility goes on in such areas, so as to create a viable decision-support platform for city management, especially by local authorities. One way of gaining this insight is by tapping into the traveler's perception of travel, from which the engendered inhibitors and enhancers of mobility in the environment can be deciphered. Perception entails acquiring and mentally interpreting information from the senses in order to discern. Perception connotes a person's ability to be aware of and understand what is happening in his or her environment [5]. A number of factors operate to shape perception [6]; these factors can reside in the perceiver, in the object being perceived, or in the context of the situation in which perception happens [7]. An interesting attribute of perception is its incremental adaptability: it reshapes according to the new knowledge and developments a perceiver is exposed to.
Many perception-based urban studies have been carried out to investigate a range of issues. For instance, to gain insight into the determinants of cycling to school, [8] used school pupils' parents' perceptions of cycling routes in socio-ecological models to identify correlates at multiple levels (individual, social, and physical environmental factors). The authors of [9] used perception of traffic to demonstrate that demographic and environmental factors, such as traffic and busy roads, can determine whether an individual's perception of an environment within which mobility takes place is negative or positive. Likewise, a study of a walking intervention programme aimed at creating safe walking routes to school in a rural area of California, USA, succeeded in increasing walking rates to school by using perception-based criteria to identify needed interventions [10]. It can therefore be said that perception is key to the process of understanding our environment and how it influences us. Hence, discerning the underlying determinants of a group of people's perception of an environment, and of the elements it contains, may reveal traits that could further help in understanding such an environment, for the purpose of maintaining, reconstructing, or administering it. These works provide some basis for why tapping into the perceptions of travelers, who genuinely know how the travel environment enhances or hinders their personal mobility, is essential.
The Study Area
Ilorin is a metropolitan area in Kwara State, north-central Nigeria. The city exhibits a characteristic dualism similar to that of many developing-country cities [11]. Thus, Ilorin can be taken as a fair representation of cities in developing countries, and of Nigeria in particular. The city has both organic and inorganic sectors, reflecting both modern and traditional characteristics. The city also suffers from an inadequate planning database, as attested to by [12]. Ilorin, to a large extent, exhibits homogeneity in terms of development density, environmental quality, and transport enterprises [13]. Efforts to provide adequate transport infrastructure for the city of Ilorin have been adjudged ad hoc, uncoordinated, and poor according to [14], which is why simple alternative approaches are required to enable local city managers to better address issues related to mobility experiences and dilemma in the city.
Approach to Data Acquisition
To help with mobility environment attribute contextualization, 10 local urban planning and transportation professionals were purposively selected from agencies and associated institutions in Ilorin metropolis, 6 of whom were field professionals and 4 from local tertiary institutions. The professionals helped extract contextually relevant factors from a list of 57 potential mobility-influencing attributes harvested from the literature. Note that the 57 attributes are not entirely exhaustive but were considered sufficient for the Ilorin case. The rating of the harvested attributes was done on a 5-point Likert scale ranging from 4 to 0, with "extremely significant" having the highest value and "not significant" the lowest. For instance, there is no formal bus system in Ilorin metropolis, hence that attribute scores 0 and is taken off the list. Only 7 of the 10 participating professionals were available for each of the 3 contacts; therefore, only ratings from these 7 were utilized for further analysis. The extraction of contextually relevant attributes was done by determining the weighted mean of the entries for each item in the list of attributes, to pave the way for comparison against a calculated cut-off point. The cut-off point for acceptance or rejection of items rated on a Likert scale is the arithmetic mean of the individual weights [15], which in this case are 4, 3, 2, 1, and 0. Hence, the cut-off point was calculated as shown in Equation (1):

Cut-off = (4 + 3 + 2 + 1 + 0) / 5 = 2.00    (1)

Consequently, any item with a weighted mean (WM) of 1.99 or below is considered not a significant contributor to mobility dilemma in the context of the study area, while those with WM equal to or above 2.00 are considered significant. WM is derived as shown in Equation (2):

WM = (4·ES + 3·HS + 2·S + 1·LS + 0·NS) / R    (2)

where ES (extremely significant), HS (highly significant), S (significant), LS (low significance), and NS (not significant) denote the number of ratings at the levels weighted 4, 3, 2, 1, and 0, respectively; WM represents the weighted mean; and R denotes the total number of ratings for the item. For this study, the non-relevant factors were discarded; only factors rated as relevant were classified and used for further analysis and preparation for the next phase. Table 1 shows the WM values of the extracted contextually significant attributes for Ilorin metropolis.

Secondly, a general survey was carried out by trained research assistants with knowledge of the local language and terrain. Interviews were carried out in respondents' houses and in the streets of the constituent wards (spatial sub-units) of Ilorin for which information was sought. Respondents were interviewed based on the checklist of 31 items that emerged from the professionals' contextual ratings of potential mobility-influencing attributes of Ilorin metropolis. Equal numbers of interviews were conducted in all wards, because population figures at ward level are not officially available for the metropolis, or for other cities in Nigeria at large, so there was no basis for differing figures. In total, 500 questionnaires were administered, based on the sample size suggestions of [16] and [15] and in view of the population of the city, which is 510,444 persons. This translates into 25 questionnaires per ward. As a precaution, an extra 5 questionnaires per ward were added to make 30, in order to make room for substitution in case some were returned unusable at the end of the city-wide survey, as is usually the case with survey-based data collection exercises. Afterwards, 25 questionnaires were in turn randomly selected without replacement from the total number of valid questionnaires returned for each ward. The main considerations in sampling for this research were geographic distribution, age, gender, employment status, income, location of activities of daily living, and the human and financial resources available to the researcher. The targeted age bracket was 18-65, normally considered the active age range. Interviews were conducted along randomly selected streets, and respondents were selected by systematic random sampling. Table 2 depicts the general socio-economic characteristics of respondents, while Table 3 summarizes the frequency of each type of perceived effect reported by respondents.
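A minimal sketch of the weighted-mean screening in Equations (1) and (2) is given below, assuming that the rating counts per Likert level are available for each attribute; the counts shown are hypothetical, not the study's data.

```python
WEIGHTS = {"ES": 4, "HS": 3, "S": 2, "LS": 1, "NS": 0}
CUT_OFF = sum(WEIGHTS.values()) / len(WEIGHTS)  # (4+3+2+1+0)/5 = 2.00

def weighted_mean(counts):
    """counts maps each level ('ES'..'NS') to the number of raters choosing it."""
    total = sum(counts.values())
    return sum(WEIGHTS[level] * n for level, n in counts.items()) / total

# Hypothetical ratings from the 7 available professionals:
attribute_counts = {"ES": 2, "HS": 3, "S": 1, "LS": 1, "NS": 0}
wm = weighted_mean(attribute_counts)
print(f"WM = {wm:.2f} -> {'relevant' if wm >= CUT_OFF else 'discarded'}")
```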
Discussion
As a prerequisite, the 31 contextually relevant potential attributes influencing mobility perception in Ilorin metropolis were grouped according to trait similarities, since it is known that groups of attributes operate together to influence perception, as mentioned by [6]. For instance, all attributes describing development characteristics of the city are listed under one thematic area. This is necessary to allow for proper characterization of outcomes and comparison of effects. This exercise produced 7 thematic groups, listed A-G in Table 4. Afterwards, the calculation of weighted mean values for each attribute, derived from ratings by respondents on another 5-point Likert scale, was done. Similar to the process utilized in the first stage involving the professional contextual-relevance raters, a cut-off point of 3.00 was derived, representing the arithmetic mean of the representative numerical values for each of the 5 types of descriptive effect of individual attribute perception. After comparing the resultant WM values with the cut-off point, it was evident that only 11 of the 31 contextually relevant attributes for Ilorin metropolis were perceived as positively influencing respondents' mobility. Therefore, it was concluded that the remaining 20 attributes were each contributors to mobility dilemma in the city. Regarding perception of individual thematic groups, the development characteristics thematic area, with 10 different attributions, had 6 of them, namely pedestrian network density, pattern of development, road characteristics, pedestrian network characteristics, land use mix, and activity mix, perceived positively. Pedestrian network density and pedestrian network characteristics represent differing attributions in that the former refers to the concentration of pedestrian networks, while the latter describes their physical characteristics, such as surfacing. However, diversity of movement channels, road network density, development density, and quality of public transport facilities (which in this case refers only to bus stops) were considered a source of mobility dilemma. Road characteristics ranked 9th, with a weighted mean value of 3.014, an indication that respondents' percept was positive, even though the roads are often plagued with structural and functional lapses that threaten motorized and non-motorized traffic safety [17]. This signifies acceptance of prevalent road conditions as they are, instead of what they should be in terms of standards. It might also be tied to the fact that 95% of urban trips in Nigeria are made by road [18]; road users appear to have become captive, such that functionality trumps standards. This notion is accentuated by a similarly positive overall perception of the pedestrian network characteristics of the city, mainly made up of unpaved residual spaces adjoining roads and in-between buildings. This shows that the majority of respondents are not enlightened about what to expect regarding roads and pedestrian facilities and therefore appear to under-report the degree to which such attributions affect the level of mobility stress they might be experiencing. The import of this is that most city travelers in Ilorin have adapted to making do with what is available, not what is desirable.
The development density of the city was largely perceived negatively. By contrast, the pattern of development was adjudged to be a positive influencing attribute of mobility. Here lies another contradiction, because the development density of Ilorin metropolis has been reported to be high for the most part [11], a situation that should be favourable to pedestrian movement, given the relatively low level of personal mobility ownership in the city: only 191 of the 500 respondents own a personal means of transportation. Furthermore, the quality of public transport facilities was deemed negative; it returned a weighted mean of 2.932 and a ranking of 20th of the 31 contextually relevant attributes. It should be noted that public transport facilities refer almost entirely to improvised stop areas, except for the very few that are actually officially designated as bus stops. Therefore, the main highlighting factor of mobility dilemma in terms of development characteristics in Ilorin metropolis can be connected to inadequate planning and the substandard supply of transport facilities. Two other issues seen to be contributing to mobility dilemma in Ilorin metropolis, also tied to the city's development characteristics, are the distance to public transport stops at origins and the distance from public transport stops at destinations. These two public transport accessibility attributes of the mobility environment of Ilorin metropolis were considered by respondents to negatively affect the perception of mobility in the city, thereby contributing to mobility distress; they both returned a weighted mean of less than 3.00.
As for the ratings of modal characteristics, respondents attributed a positive effect to the use of private modes, which returned a weighted mean value of 3.852, thereby ranking 1st in terms of level of positive influence on respondents' mobility. This agrees with assertions in the literature that private means of movement are usually preferred by travelers, unless conscious efforts are instituted on several fronts to reduce their use, in order to limit the side effects of over-motorization, which usually add to mobility dilemma and are often compounded by inadequate planning, as is the case for Ilorin metropolis. The complete absence of a comprehensive and integrated public transit system [19] ensures that private modes are preferred over public modes, a situation affirmed by the ranking of public mode attributes as 27th of all 31 contextually relevant attributes contributing to mobility distress in the study area. This is a worrisome development because, according to [19], 70% of urban trips are made by public transport modes; the high ridership occurs mostly for lack of alternatives, not by choice, meaning that the highly negative percept of the influence of public modes will likely define how mobility is generally perceived in the metropolis. This is even more so given that all respondents, including the 191 with personal means of mobility, use public modes from time to time. The availability of a variety of modes to choose from was, however, seen as a positive influence on respondents' mobility in the study area, even though the array only includes cars, motorcycles, rickshaws, and minibuses that are ill-maintained and rickety and serviced by equally deplorable auxiliary facilities (if any), reflective of a highly decentralized and poorly regulated sector, due mainly to the private ownership of all public transport modes in the metropolis. This attribute of public mode ownership also explains the possibility of having diverse characteristics in public transportation within the city [20]. This is underlined by the fact that some areas are serviced by only rickshaws, or motorcycles, or taxis, or minibuses, while others are served by a combination of taxis, minibuses, and motorcycles, leading to a different array of public transport mode choices being available in different parts of the city. Of the three attributes that define mode characteristics in the metropolis, only public mode attributes were perceived negatively (see Table 4).
In economic terms, overall public transport cost is negatively affecting the perception of mobility, according to the respondents. This stance is further buttressed by the fact that the public transport fare/distance relationship and the effect of public transport fares on monthly income were both seen as unfavourable. Moreover, fuel affordability, as perceived by the 38.2% of respondents with private means of mobility, was also negative. If the prevailing attributions of the economic aspects of mobility in Ilorin metropolis are viewed against the backdrop of the assertion in [21] that exorbitant prices of spare parts and the inability of the largely unregulated private operators to purchase new vehicles prevent fleet expansion, coupled with the negative perception of overall transport cost by respondents, it becomes clear that both the informal operators of public transportation and the travelers are operating in a distressful environment in which neither party is favoured. In the same vein, the perceptions of the contextually relevant attributes that describe mobility safety in the city, namely the safety characteristics of pedestrian paths, the safety of bus stops, the rate of traffic accidents, and the deployment of road markings and signs, were all negative. This confirms, as mentioned before, that in general terms roads in Nigeria are for the most part improperly designed and thereby unsafe, while the equipment used by the operators also aggravates safety issues [22].
Conclusion
Grasping the issues around the determinant factors of individual mobility experiences is important in understanding how the elements an individual is exposed to shape their percept of mobility, and the extent to which they contribute to the associated dilemma, because the variability of reasons for an individual's choice of how and when to move, or where to go, is huge. Thus, perception-based studies can provide versatile input into the assessment of public mobility space by segmenting preferences. Moreover, it is the individual's perception of the environment that defines what that individual considers the action space, which is the area within which opportunities may be easily reached by individuals for their activities [23]. The knowledge that, through the perceptual process, we gain information about properties and elements of the environment that are critical to our survival underscores the increasing significance of perception-based studies, upon which the prospect of detecting environmentally induced mobility complexities from traveler perception is premised, especially where the use of other, more sophisticated means of evaluation might be improbable. Specifically, it became apparent that attributes of public modes, development characteristics, and other operational, economic, and safety attributions of the city's mobility environment were influencing respondents' perception of mobility negatively, mainly as an offshoot of inadequate planning and regulation enforcement in the city. It is suggested that simple, parsimonious techniques such as the one utilized in this work be employed to bridge the gap left by comprehensive plans where the wherewithal is lacking. This work demonstrates, with Ilorin, a typical metropolitan area in north-central Nigeria, how perception-based determinants of mobility dilemma were isolated and processed to yield information about mobility-environment-induced distress, derived from the perception of mobility-influencing attributes inherent in an individual's mobility environment.
Figure 1 depicts Ilorin in the context of Kwara State.
Figure 1. Ilorin metropolis in the context of Kwara State. Source: Kwara State Town Planning Authority.
Table 1. List of extracted contextually relevant mobility-influencing attributes for Ilorin metropolis.
Table 2. Socio-economic characteristics of respondents.
Table 3. Respondents' ratings of contextually relevant mobility-influencing attributes for Ilorin metropolis.
Table 4. Perception-based rating of mobility-influencing attributes of Ilorin metropolis.
Changes in Lipids in Granulomatosis with Polyangiitis Relates to Glucocorticoids and History of Hypertension
Granulomatosis with polyangiitis (GPA) is an ANCA-associated small-vessel vasculitis. Vessel wall inflammation induces multiple types of vascular damage, leading to accelerated atherosclerosis. The metabolic profile and cardiovascular risk of GPA patients are only partially understood. Atherosclerotic cardiovascular disease (ASCVD) may represent a risk for outcomes. Our purpose is to evaluate ASCVD risk in GPA patients. Thirty-six patients received a GPA diagnosis (T0) and were evaluated after 1 (T1) and 2 (T2) years of follow-up. All patients were treated with high-dose glucocorticoids, tapered over one year, along with immunosuppressants. Total cholesterol significantly increased at T1 vs. T0 and T2. LDL exhibited the same trend, while triglycerides increased at both T1 and T2 vs. T0. No difference was found in HDL. A significant hsCRP decrease was detected at T1 and T2 vs. T0, but not between T2 and T1. Moreover, we found a significant reduction in ESR at T2 compared with T1 and T0, and at T1 compared with T0. Hypertensive patients presented a more pronounced increase in lipids, while inflammation decreased more slowly than in normotensives. Our data suggest that the increase in cholesterol and LDL at T1 is a consequence of glucocorticoids. These data can be useful in the evaluation of both CV disease and lipid metabolism, which are closely related to vessel inflammation.
Introduction
Granulomatosis with polyangiitis (GPA) is a small-to-medium vessel necrotizing vasculitis [1] classified within the spectrum of anti-neutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV) [2]. The prevalence of GPA ranges from 2.3 to 146.0 cases per million persons, with an incidence of 0.4 to 11.9 cases per million persons/year [3]. Formerly known as Wegener's granulomatosis and renamed at the 2012 Chapel Hill Consensus Conference (CHCC) [4], GPA has a cause that is not well understood. However, its etiopathogenesis has been ascribed at least partly to ANCA [5]. Most cases (80-90%) are attributed to cytoplasmic ANCA (c-ANCA) directed against proteinase 3 (PR3) in neutrophil granulocytes, whereas the remaining cases are attributed to perinuclear ANCA (p-ANCA) against myeloperoxidase [1]. ANCA activates neutrophils, causing their enhanced adherence to the endothelium and degranulation, which can damage endothelial cells and create systemic inflammation. ANCA is 66% sensitive and 98% specific for GPA and is present in 80-90% of patients with active multisystemic disease [2,5]. Approximately 10-20% of patients with GPA are ANCA-negative, and the reason for the absence of ANCA remains unclear [1]. Patients with GPA present nonspecific symptoms for weeks or months, such as fever, malaise, myalgias, arthralgias, and weight loss, without evidence of specific organ involvement [6]. Prodromal symptoms are followed by specific organ involvement due to necrotizing granulomatous inflammation, most frequently in the upper (sinusitis, crusting rhinitis, saddle nose deformity, otitis media, mastoiditis, hearing loss) and lower (lung nodules, pulmonary interstitial capillaritis, alveolar hemorrhage) respiratory tract, in small to medium vessels (systemic necrotizing vasculitis), and in the kidney (necrotizing glomerulonephritis) [7,8]. Although the life expectancy and symptom control of GPA patients have improved with immunosuppressive therapies [2], atherosclerosis has now emerged as a significant morbidity [9,10], and its progression may lead to the occurrence of atherosclerotic cardiovascular disease (ASCVD). This has already been detected in other vasculitides [11] and autoimmune diseases [12,13], underlining the increased risk of ASCVD and the poor predictive value of existing scores. For this reason, early subclinical atherosclerosis may be an important field of investigation in the treatment of GPA patients. The inflammation burden and changes in lipid profile during patients' follow-up are involved in the progression of atherosclerosis. The purpose of this study is to evaluate the lipid profile and ASCVD risk of GPA patients during follow-up.
Study Population
We retrospectively evaluated 63 patients attending the Immunology clinic of the Unit of Internal Medicine 'Guido Baccelli' of the Policlinico of Bari, Italy, who received a diagnosis of GPA between 1 January 2006 and 31 December 2021. Diagnoses were re-evaluated according to the recent international guidelines [1], and only patients with a confirmed diagnosis were studied. We excluded patients who experienced disease worsening within two years of diagnosis. We also excluded patients who were already on statin treatment. Thus, we included 36 patients (18 females and 18 males, aged 54.73 ± 17.20 years) (Table 1). Patients were evaluated at diagnosis (T0) and at the first (T1) and second (T2) year of follow-up. All patients were treated with a high-dose glucocorticoid at diagnosis, followed by one-year tapering, in association with another immunosuppressant. None of the patients were affected by alcohol addiction or had a positive Alcohol Use Disorders Identification Test (AUDIT) [14].
Patients completed a questionnaire and underwent a comprehensive physical examination, including systolic blood pressure (SBP), diastolic blood pressure (DBP), and heart rate (HR). Blood samples obtained from all patients were analyzed for the complete blood count, low- and high-density lipoproteins, and triglycerides. In patients with triglyceride levels over 400 mg/dL, the value of low-density lipoprotein was measured directly [15]. Additional laboratory tests included the erythrocyte sedimentation rate (ESR), C-reactive protein (hsCRP) level, uric acid, glycemia, lactate dehydrogenase (LDH), alkaline phosphatase (ALP), ferritin, and iron, as well as protein electrophoresis, anti-nuclear antibodies (ANA), and ANCA. The estimated glomerular filtration rate (eGFR) was calculated for each patient with the CKD-EPI formula [16].
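For illustration, the sketch below computes the two derived quantities mentioned above. It assumes the Friedewald estimate for LDL-C, which is conventionally invalid above 400 mg/dL of triglycerides (hence the direct measurement in that subgroup), and the 2021 race-free CKD-EPI creatinine equation; the study cites [16] for its formula, which may correspond to a different CKD-EPI version, and the input values are hypothetical.

```python
def ldl_friedewald(total_chol, hdl, triglycerides):
    """Friedewald LDL-C estimate (all values in mg/dL)."""
    if triglycerides > 400:
        raise ValueError("Friedewald estimate invalid; measure LDL directly")
    return total_chol - hdl - triglycerides / 5.0

def egfr_ckdepi_2021(scr_mg_dl, age_years, female):
    """2021 race-free CKD-EPI creatinine equation (mL/min/1.73 m^2)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    ratio = scr_mg_dl / kappa
    egfr = (142.0 * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.200
            * 0.9938 ** age_years)
    return egfr * 1.012 if female else egfr

# Hypothetical values for illustration only:
print(ldl_friedewald(210, 50, 150))                   # 130.0 mg/dL
print(round(egfr_ckdepi_2021(1.1, 55, female=True)))  # ~59
```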
Demographic characteristics as well as clinical and laboratory findings were recorded.We investigated the Apulian regional patient database of the healthcare system, Edotto © (Exprivia, Molfetta, Italy), to identify the date of death of patients lost to follow-up.
The study protocol was part of the retrospective study on the evaluation of cardiovascular (CV) risk score in internal medicine approved by the Ethics Committee of the Policlinico University of Bari Medical School (protocol ID n.6645/20), and it conformed to the good clinical practice guidelines of the Italian Ministry of Health and the ethical guidelines of the Declaration of Helsinki, as revised and amended in 2004.Informed consent was waived due to the retrospective nature of this study.
Disease Activity and CV-Specific Risk Score
GPA-specific items must be evaluated to assess the diagnosis [17], and multiple tools have been suggested to achieve this aim. First, we focused on the patients' symptoms and on the onset and progression of the disease. During the physical examination, specific signs related to GPA were evaluated, such as joint swelling, skin rashes, or abnormalities of the ears, nose, or throat. Moreover, to evaluate inflammatory activity, laboratory tests were used: ESR, hsCRP, white blood cell (WBC) count, neutrophil-to-HDL ratio (NHR), and ANCA, specifically the PR3-c-ANCA subtype [18]. As a surrogate marker of neutrophil activity, we used the ratio between neutrophils and lymphocytes (NLR). This marker has proven useful in the evaluation of immune cell involvement in cardiovascular disease (CVD) [11,19]. The NHR is a calculated value representing the ratio between the neutrophil count and the high-density lipoprotein (HDL) cholesterol level in the blood. It can be used as a marker of systemic inflammation and CV risk [20]. Furthermore, urine tests were used to evaluate kidney involvement (the presence of red blood cells, protein, or other abnormalities).
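The two ratios described above are simple quotients; a minimal sketch, with hypothetical input values, is shown below.

```python
def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio (counts in the same unit, e.g. 10^3/uL)."""
    return neutrophils / lymphocytes

def nhr(neutrophils, hdl_mg_dl):
    """Neutrophil-to-HDL ratio: neutrophil count over HDL cholesterol."""
    return neutrophils / hdl_mg_dl

# Hypothetical laboratory values:
print(round(nlr(5.2, 1.8), 2))   # 2.89
print(round(nhr(5.2, 48.0), 3))  # 0.108
```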
Radiological imaging techniques were applied to assess organ involvement, identifying the presence of granulomas, inflammation, or structural abnormalities. Tissue biopsy serves to establish the diagnosis, but it is not commonly used in the follow-up of GPA [21].
Some scoring systems and disease activity indices have been developed to assess the severity and activity of GPA.
Birmingham Vasculitis Activity Score (BVAS)
We used BVAS version 3 as an indicator of disease activity [22]. The BVAS provides a standardized and systematic approach to assessing the extent and severity of vasculitis-related symptoms and organ involvement. The scoring system consists of various items that assess specific clinical manifestations, such as constitutional symptoms, organ-specific symptoms, and laboratory findings. Each item is evaluated and assigned a score based on its severity or presence. The scores for all individual items are then summed to calculate the total BVAS score, which represents the overall disease activity; a higher score indicates more active and severe disease [22]. For each patient, we evaluated this score at T0, T1, and T2.
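Computationally, the BVAS total is just the sum of the predefined weights of the items marked present, as in the sketch below; the item names and weights shown are placeholders for illustration, not the actual BVAS v3 items.

```python
# Placeholder item weights (illustrative only, not the BVAS v3 glossary):
ITEM_WEIGHTS = {"hematuria": 6, "arthralgia": 1, "nasal_crusting": 3}

def bvas_total(observed_items):
    """Sum the weights of the items recorded as present."""
    return sum(ITEM_WEIGHTS[item] for item in observed_items)

print(bvas_total(["hematuria", "nasal_crusting"]))  # 9
```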
European Society of Cardiology CV Risk Score
The European Society of Cardiology (ESC) risk score is a tool used to estimate an individual's risk of developing CVD within the next 10 years [23]. The ESC CV risk score, called SCORE2, considers several key risk factors for CV disease, including age, gender, smoking status, blood pressure, total cholesterol level, and diabetes, each of which is an important predictor of future CV events. The scoring system assigns points to each risk factor based on its impact on CV risk, and the points are then summed to calculate a total risk score. The resulting score represents the estimated probability of experiencing a CV event, such as a heart attack or stroke, within the next 10 years. SCORE2 is useful for assessing an individual's overall CV risk and guiding treatment decisions. It is important to note that SCORE2 does not assess the vascular damage due to vasculitis; it only assesses CV risk based on the above parameters [12]. We used SCORE2 or SCORE2-OP as appropriate, according to the age of the patients.
Statistical Analysis
Statistical analysis was carried out using SPSS (version 21, IBM, Armonk, NY, USA), while graphs were made using Prism (version 6.0, GraphPad Software, Boston, MA, USA). The Kolmogorov-Smirnov test was performed to evaluate the distribution of values. Data are presented as mean ± SD or as median and interquartile range [IQR], where appropriate. The Friedman test was used to evaluate trends over time, with Tukey's multiple comparisons test as a second-step evaluation. The distribution of dichotomous values was analyzed using the Chi-squared test. p-values are shown only for statistically significant comparisons. Survival comparisons were made with the log-rank method (presented as Kaplan-Meier curves). p-values of <0.05 were considered significant.
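As an illustration of the trend evaluation described above, a minimal SciPy sketch of the Friedman test on hypothetical per-patient values at the three time points (not the study's data) is given below.

```python
from scipy.stats import friedmanchisquare

# Hypothetical total cholesterol (mg/dL) for six patients at T0, T1, T2:
t0 = [180, 195, 170, 210, 188, 176]
t1 = [205, 220, 198, 240, 215, 202]
t2 = [190, 205, 180, 222, 200, 185]

stat, p = friedmanchisquare(t0, t1, t2)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
# If p < 0.05, pairwise post-hoc comparisons (Tukey-style, as in the paper)
# can identify which time points differ.
```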
Population Characteristics and Organ Damage at Diagnosis
About half (55%) of the patients were hypertensive; of these, 5% were treated with ACE inhibitors (ACEis), 27% with angiotensin receptor blockers (ARBs), 23% with calcium antagonists (CAs), and 14% with beta blockers (BBs), in particular propranolol and bisoprolol (Table 1). Ten patients were affected by diabetes and treated with metformin, only two of them with associated insulin, and none with SGLT2 inhibitors or GLP-1 receptor agonists (Table 1). At T0, all patients started therapy with glucocorticoids, 31% of them in association with azathioprine, 36% with cyclophosphamide, and 14% with methotrexate (Table 1). Analyzing the localization of GPA at T0, it was mostly localized in the ear, nose, and throat (ENT) region (82%), while 59% of patients had lung involvement and 41% eye involvement (Table 1).
Arterial Hypertension Influences the Metabolic and Inflammatory State
We selected the patients with hypertension (n = 20) and evaluated the same parameters described in Table 2 at the different time points. A significant increase in WBC at T2 vs. T1 and T0 was identified (p = 0.026 and p = 0.037, respectively), along with a reduction in lymphocytes (L) at T2 vs. T0 (p = 0.031) and an increase in neutrophils (N) at T2 vs. T0 (p = 0.001). This resulted in a significant increase in NLR at T1 (p = 0.015) and T2 (p = 0.005) vs. T0 (Table 3). Furthermore, a significant decrease in hsCRP was found at T2 vs. T0 (p = 0.029), and in ESR at T2 (p = 0.015) and at T1 (p = 0.034) vs. T0 (Table 3). A different picture emerged in the group of non-hypertensive patients. WBC was not significantly different across the time points, but a significant reduction in N was observed at T1 (p = 0.003) and T2 (p = 0.001) vs. T0, and at T2 vs. T1 (p = 0.006). No differences were found in lymphocytes between the time points. This resulted in a marked reduction in NLR at T1 vs. T0 (p = 0.021), in hsCRP at T1 vs. T0 (p = 0.034) and vs. T2 (p = 0.038), and in ESR at T1 (p = 0.0001) and at T2 (p = 0.038) vs. T0 (Table 4). With regard to kidney function, in hypertensives, uric acid at T2 was significantly increased vs. T1 (p = 0.024) and T0 (p = 0.024) (Table 3). On the contrary, in non-hypertensive patients, a reduction in creatinine was observed at T2 vs. T1 (p = 0.017), together with a significant increase in eGFR at T1 vs. T0 and T2 (p = 0.007 and p = 0.026, respectively) and a decrease at T2 vs. T0 (p = 0.049) (Table 4).
With regard to inflammatory parameters, at T1 and T2 a significant decrease in hsCRP was detected compared to T0 (p = 0.03 and p = 0.0005, respectively) (Figure 2). Similarly, ESR was significantly reduced at T2 (p = 0.0001) and at T1 (p = 0.003) compared with T0 (Figure 2). Finally, no differences in NLR at the different time points were observed in the general population.

Analyzing the BVAS score over the different time points, it significantly decreased at T1 and T2 compared to T0 (p = 0.008 and p = 0.044, respectively) in all patients, and also when considering only hypertensives or only normotensives (Figure 3). Moreover, we did not observe a difference in BVAS between hypertensives and non-hypertensives (Supplementary Figure S1).

If we consider GPA patients with hypertension, a marked increase in total cholesterol was observed at T1 vs. T0 (p = 0.002), with a large increase in LDL at T1 vs. T0 (p = 0.001) and at T1 vs. T2 (p = 0.021) (Table 3).

Diagnostic Delay, Population Survival, and CV Disease

The majority of patients investigated in this study experienced a diagnostic delay, ranging from 3 to 120 months. At baseline, one patient was receiving treatment for chronic atrial fibrillation. During the follow-up period, none presented a new CVD or a worsening of that previously described. We calculated the risk score in all patients affected by GPA, and it significantly increased at T1 compared to T0 and T2, as shown in Figure 4. Moreover, we calculated this score at T0 in patients according to survival outcome, and no difference was observed. Instead, we found an increased risk at 10 years when comparing hypertensives and normotensives (Figure 4). Furthermore, we analyzed survival in GPA patients, comparing hypertensives with normotensives. The data revealed a hazard ratio greater than 1.5, suggesting a 50% increase in the risk of death in normotensives vs. hypertensives, despite no relevant differences between the two groups (p > 0.05) being found in the survival analysis (Figure 4D).
Discussion
GPA shows defective immune-regulatory responses to environmental insults such as infections or autoantigens, followed by excessive production of Th1 and Th17 cytokines (IL-17, TNF, and IFN-gamma). These pro-inflammatory cytokines lead to the development of inflammatory granulomatous vascular lesions [2].
The correlation between atherosclerosis and the burden of vessel inflammation has been analyzed, revealing the significant role of inflammation in the progression of atherosclerosis, cardiovascular risk, and cardiovascular events [24].
Evidence in the general population demonstrates that LDL-C is the most important causal risk factor for ASCVD [25], leading to higher CVD risk [26]. Therefore, when analyzing the progression of subclinical atherosclerosis in GPA patients, variations in the patients' lipid profiles have to be taken into consideration, and periodic lipid screening for the assessment of CVD risk is recommended [23]. Even so, a study of 29 patients with Wegener's granulomatosis underlined that traditional risk factors cannot explain the increase in cardiovascular risk in patients with a BVAS of ≤1 [27]. Patients with rare autoimmune diseases show altered lipid metabolism, which may differ depending on the underlying disease [28]. However, few data exist regarding the lipid profile in AAV, especially regarding lipid level variations during treatment. Thus, given the associations between lipid levels and endothelial cell dysfunction and damage [29], understanding lipid profile variation in GPA is important.
Oral glucocorticoids are the drugs most widely used in GPA patients for remission induction and maintenance [30]. Although glucocorticoids represent the cornerstone of treatment for AAV, their optimal dosing has not yet been established, resulting in significant variability in clinical practice, especially for the induction of remission [30]. There is some controversy about the right dose that should be administered to exploit both the immunosuppressive and anti-inflammatory effects of glucocorticoids without a marked increase in adverse effects [31].
Some studies suggest that low doses of glucocorticoids might mitigate the toxicity of treatment while maintaining the anti-inflammatory effects, but these data are not always accepted by physicians because of the risk of a poor prognosis from uncontrolled disease [31]. Glucocorticoids influence the immune system and its function. Furthermore, glucocorticoids have other adverse effects related to high cumulative doses: osteoporosis, CV disease, and gastrointestinal bleeding [32].
The variations in lipid profile (total cholesterol, LDL, and triglycerides) in the first year of follow-up are likely related to the glucocorticoid treatment needed for remission induction and maintenance of the disease. Evidence on changes in lipid profiles during follow-up in AAV is minimal. Wallace et al. [33] observed significant variations in lipid panels during remission induction treatment of AAV, which was characterized by important changes in disease activity and intensive immunosuppression. The changes were mainly related to serum concentrations of total cholesterol, LDL-C, and apolipoprotein B. However, their follow-up was short (six months), whereas we collected data up to two years from diagnosis.
A study performed on 535 patients with AAV found that, after the first year following diagnosis, the major cause of death was CV disease [34]. Moreover, another study on 1781 patients with GPA showed a higher risk of heart failure and CV outcomes after the first year from diagnosis compared with the general population [35]. Despite these data, the CV risk in these patients is far from being fully characterized. A survey of 106 patients with GPA suggested that age at the time of diagnosis could be a predictor of CV events [24].
Our data underline how the lipid profile worsens in the first year, especially in hypertensives, increasing the risk of CV events, whereas during the second year these parameters decrease without lipid-lowering drugs, in parallel with the decrease in inflammation. However, differences remain evident when considering inflammation in hypertensives: in these patients, we found a slower decrease in inflammatory markers along with a faster change in lipid profile. Therefore, these changes may interfere with CV risk evaluation.
The SCORE2 currently used is validated on the general population and is therefore nonspecific for GPA. This score was higher in hypertensives, but no increased risk of death was found compared to normotensives when analyzing survival percentages. However, data on changes in CV risk in GPA remain controversial. The correlation between inflammation and changes in the metabolic/lipid profile is also supported by our findings, despite a controversial presentation. In particular, BVAS is inversely related to CV risk in all patients. Moreover, it decreases during follow-up, with a greater decrease in hypertensives. The increase in Hb values observed over the different time points could be related to control of the inflammation. Furthermore, kidney function improves in the first year in non-hypertensives, whereas it does not change in hypertensives. A possible explanation of this finding may be a double source of damage in hypertensives: in these patients, better control of the disease may not change the kidney damage. On the contrary, in normotensives, kidney failure is only associated with vasculitis; thus, when GPA is treated and under control, the filtration rate increases. All these data could be related to the high-dose glucocorticoid therapy that these patients take for vessel inflammation.
This study has potential limitations, such as the small sample size, although this reflects the fact that GPA is a rare disease. It is also a retrospective study, albeit with a long evaluation period. Moreover, few patients were diagnosed via biopsy, and no histological evaluation was performed during follow-up. Finally, immunosuppressive therapy may modify the lipid profile (mostly VLDL and triglycerides), making inferences about the effects of glucocorticoid therapy alone on the lipid profile more difficult; however, after the first year of treatment, patients were administered only immunosuppressants [36]. Further studies are needed to better evaluate the cardiovascular effects of vasculitis and of the consequent treatment.
Figure 1. Lipid changes at different time points in all 36 GPA patients.
Figure 3. Birmingham vasculitis activity score (BVAS) evaluated at different time points in all 36 GPA patients and its correlation to SCORE2/2-OP at T0, as well as values in hypertensives and in normotensives.
Figure 4. Cardiovascular (CV) risk score at 10 years at different time points in all 36 GPA patients (panel A), and at the time of diagnosis according to arterial hypertension history (panel B) or outcome (panel C). Panel D shows the percentage of survival and the hazard ratio in hypertensives vs. normotensives.
Table 1. Baseline characteristics of 36 GPA patients.
Table 2. Clinical and laboratory evaluation of 36 GPA patients according to the different time points.
Table 3. Clinical and laboratory evaluation of 20 GPA patients with hypertension according to the different time points.
Table 4. Clinical and laboratory evaluation of 16 GPA patients without hypertension (normotensives) according to the different time points.
Reduction of Very Rapid Emergency Transfers to the Pediatric Intensive Care Unit
Introduction: Emergency transfers are associated with increased inpatient pediatric mortality. Therefore, interventions to improve system-level situational awareness were utilized to decrease a subset of emergency transfers that occurred within four hours of admission to an inpatient medical-surgical unit called very rapid emergency transfers (VRET). Specifically, we aimed to increase the days between VRET from non-ICU inpatient units from every 10 days to every 25 days over 1 year. Methods: Using the Model for Improvement, we developed an interdisciplinary team to reduce VRET. The key drivers targeted were the admission process from the emergency department and ambulatory clinics, sepsis recognition and communication, and expansion of our situational awareness framework. Days between VRET defined the primary outcome metric for this improvement project. Results: After six months of interventions, our baseline improved from a VRET every 10 days to every 79 days, followed by another shift to 177 days, which we sustained for 3 years peaking at 468 days between events. Conclusion: Interventions targeting multiple admission sources to improve early recognition and communication of potential clinical deterioration effectively reduced and nearly eliminated VRET at our organization.
INTRODUCTION
Pediatric respiratory and cardiac arrests outside the intensive care unit (ICU), commonly called code blue events, are associated with high in-hospital mortality. 1 Implementing rapid response systems reduces code events outside the ICU. [1][2][3] The success of efforts 1 to reduce code blue events allowed a shift in focus to more proximal outcome measures: reducing critical deterioration events or emergency transfer (ET) events. [4][5][6] These precursor events are more common than code blue events and are associated with significant morbidity and mortality. 4,5 For example, ET in our organization was associated with 10% mortality over an 8-year review period.
System-level efforts to improve situational awareness (SA) have been tested and implemented to decrease unrecognized inpatient deterioration, including code blues and ET. 5,7-9 SA allows individuals and teams to better predict and recognize early signs of clinical deterioration. 5,7,10 An example of an SA framework, known as a "Watcher Program," focused on identifying, mitigating, and escalating concerns. 5 Implementing this framework to improve SA successfully decreased ET events in several pediatric hospitals. 5,[7][8][9] Our institutional SA quality improvement (QI) project to reduce ET promoted early recognition of patients at risk for clinical deterioration in inpatient, non-ICU settings. This Watcher Program successfully decreased ET. However, 2 years into the improvement work, a new safety concern surfaced: ET that occurred quickly after admission, which we labeled very rapid emergency transfers (VRET). A study of ICU transfer within 4 hours of admission noted patients who met the ET definition had significantly increased mortality. 11 Although anecdotal, our bedside teams identified VRET events as a priority despite the rarity of occurrence, as these events strained system resources and stressed the teams providing direct patient care. In addition to the mortality risk, VRET can increase the risk of errors with each care transition. 12 VRET continued despite prior interventions aimed at early recognition and mitigation and required improvement work to extend beyond the inpatient setting.
The QI global aim was to increase SA of clinical deterioration to decrease ETs and code blue events outside the ICU. The specific aim was to increase the days between VRETs from every ten to every 25 days by March 31, 2018, with sustained improvement for 6 months.
Context
The setting was an academic, quaternary-care, free-standing children's hospital. The hospital has 549 licensed beds, including 74 ICU beds with geographically distinct pediatric and cardiac intensive care units. Between 2017 and 2021, the hospital averaged 85,879 emergency department (ED) visits, 17,464 inpatient discharges, and 156,638 inpatient days (See appendix 1, Supplemental Digital Content 1, which shows emergency department and inpatient volumes. http://links.lww.com/PQ9/A478).
The following criteria define ET: an unplanned transfer from an inpatient unit to an ICU, with at least one of the following interventions within 60 minutes before or after the transfer: (1) Intubation, (2) 3 liters or >60 milliliters per kilogram fluid boluses, (3) vasoactive medication (specifically epinephrine, norepinephrine or dopamine), or (4) cardiopulmonary resuscitation. In addition, VRETs were defined as transfers meeting ET criteria within 4 hours of admission to the inpatient unit.
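To make the classification rules concrete, the sketch below encodes the ET and VRET criteria described above as a small helper function; the record fields, intervention type labels, and the example transfer are hypothetical placeholders rather than the authors' actual data model.

```python
# Illustrative sketch of the ET/VRET criteria; field names are hypothetical.
from datetime import datetime, timedelta

ET_WINDOW = timedelta(minutes=60)     # intervention window around the transfer
VRET_WINDOW = timedelta(hours=4)      # time from floor admission to ICU transfer

def classify_transfer(record: dict) -> str:
    """Return 'VRET', 'ET', or 'non-ET' for an unplanned floor-to-ICU transfer."""
    qualifying = any(
        abs(iv["time"] - record["transfer_time"]) <= ET_WINDOW
        for iv in record["interventions"]
        if iv["type"] in {"intubation", "large_fluid_bolus",
                          "vasoactive_infusion", "cpr"}
    )
    if not qualifying:
        return "non-ET"
    if record["transfer_time"] - record["admission_time"] <= VRET_WINDOW:
        return "VRET"
    return "ET"

# Hypothetical example: qualifying bolus 20 minutes after a transfer that
# happened 3 hours after floor admission -> counts as a VRET.
example = {
    "admission_time": datetime(2017, 5, 1, 8, 0),
    "transfer_time":  datetime(2017, 5, 1, 11, 0),
    "interventions": [{"type": "large_fluid_bolus",
                       "time": datetime(2017, 5, 1, 11, 20)}],
}
print(classify_transfer(example))  # -> VRET
```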
A daily report of transfers from the electronic health record (EHR; Epic Systems, Verona, Wis.) identified ET. A review of each transfer determined if the ET criteria were met. Data elements (service, unit, and contributing condition) were analyzed to identify trends. A multidisciplinary team reviewed ET and VRET with a standardized process to identify opportunities to improve and share learnings.
This article followed the Standards for Quality Improvement Reporting Excellence (SQUIRE 2.0) Guidelines. 13
Interventions
An interdisciplinary QI team was assembled and charged with decreasing VRET. The QI team included a nurse-physician dyad from an established Steering Committee for SA (Watcher Program), nurse and physician leaders from the ED, ICUs, ambulatory clinic, and inpatient general-medicine units. Additional team members included representation from patient placement, the sepsis improvement team, and executive sponsors. Meetings began in April 2017, and in August 2018, VRET improvement work was integrated into the SA steering committee.
Using baseline data, we developed Pareto charts of admission sources and contributing conditions to VRET to guide the development of a key driver diagram (Fig. 1). The interventions were then tested using Model for Improvement plan-do-study-act cycles. 14 A cohort of VRET with an ED admission source was identified with opportunities regarding initial disposition (ie, floor versus ICU), particularly in patients with sepsis and respiratory illness. In addition to ED provider discretion, the Pediatric Early Warning Score (PEWS) contributed to the admitting unit decision. Though not validated in the ED, 15 PEWS has been extensively studied in inpatient units. Our ED calculates PEWS before admission to facilitate communication and assessment during care transitions. A mandatory self-paced learning module provided a PEWS educational refresher to ED nursing. In addition, a pilot was initiated in which nurses from a general-medicine inpatient unit assessed their admissions while the patients were still in the ED. Nursing and PEWS assessment by a floor nurse in the ED promoted proactive communication and care planning before transferring the patient. In addition, inpatient nurses could escalate concerns to inpatient and ED providers regarding patient acuity and appropriateness for floor admission.
VRET interventions also collaborated and aligned with simultaneous institutional sepsis work. This work included implementing an EHR (Epic Systems, Verona, Wis.) sepsis screening and alert in the ED and inpatient units. The screening model included documented assessment and laboratory data; if the automated screen was positive, the EHR displayed an interruptive system alert and decision support to clinicians. 16,17 Though the sepsis improvement team specifically designed, tested, and implemented sepsis-specific interventions, 16,17 this work aligned with VRET improvement efforts, as sepsis was the major contributing condition to VRET during the baseline period.
Another cohort of VRET cases involved patients who were direct admissions from hospital specialty clinics. The goal was to maintain direct admissions and not route all patients through the ED yet ensure safe triage and disposition to the floor. A working group developed an algorithm and handover communication methods to guide recommended vital sign assessments before a direct admission to ensure floor resources matched the patient's acuity (See appendix 2, Supplemental Digital content 1, which shows direct admission from clinic to inpatient algorithm. http://links.lww.com/PQ9/A479).
The final intervention focused on the institution's Watcher Program. The EHR sent an automated pager alert to the Safety Officer of the Day (SOD) for newly identified watchers to promote system-level SA. 5 The SOD is a hospitalist available 24/7 and responsible for conducting a chart review, communicating with the primary team, and if necessary, assessing patients identified as a watcher (ie, at risk for clinical deterioration). The alerts provided timely SA specifically for patients newly admitted who may not yet be recognized as a deterioration risk or have an existing mitigation or escalation plan. The ED staff may also consult the SOD to assist in appropriate admission patient placement. In addition, the SOD, who rotates every 7 days, staffs admissions, responds to rapid response calls, and addresses outside referrals.
Study of the Interventions
Statistical process control charts and Pareto charts were utilized to study the effects of the interventions. The standardized multidisciplinary event review process identified additional context and learnings. Although causation cannot be determined for the specific interventions, the timing of interventions correlated with special cause variation.
Measures/Analysis
"Days between VRET" defined the primary outcome metric. As VRET was a discrete but rare event, a statistical process control g-chart was created (Fig. 2). Data from October 2016 through March 2017 defined the baseline. The intervention began in April 2017 with the formation of the QI team. Standard rules for identifying special cause variation and centerline shifts were applied; in this case, a centerline shift for a point above the upper control limit or 3-sigma from the established centerline. 18 During the baseline and intervention periods, the team collected descriptive data, including VRET contributing condition and admission source, and Pareto charts compared frequencies (Figs. 3, 4).
The percentage of Assessment and Consultation Team (ACT) events for patients on the floor in less than 4 hours defined the secondary outcome metric. The ACT team is our version of a Rapid Response Team, consisting of a PICU fellow/attending, a Hospitalist attending, a PICU charge nurse, and a respiratory therapist. All patients potentially needing transfer from an inpatient unit to the ICU require an ACT. This metric was a leading indicator for potential VRET. A p-chart was utilized, given discrete, attributable data represented as a percentage of all ACTs (Fig. 5). October 2016 through March 2017 again defined the baseline, and standard rules for centerline shifts applied. 18 A process metric tracked floor registered nurse (RN) assessments of ED patients before their admission to the floor. To evaluate if their assessments were incongruent with the acuity and resources of the targeted general-medicine unit, we asked floor RNs to document answers to the following questions:
• Did your PEWS assessment match the most recent ED PEWS?
• Did you escalate any concerns? If yes, did the patient still transfer to the floor?
A convenience sample of this process metric was obtained for the 12-hour shifts during which this RN role was staffed.
Given that the interventions could negatively impact ED length of stay, our balancing measure was the percentage of patients transferred within 15 minutes of the ED RN documenting care was complete, a signal for transfer to the inpatient unit. Conversely, a decrease in the percentage of this metric could indicate prolonged time spent in the ED. The improvement team followed this metric for 12 months.
The other balancing metric was the percentage of ICU transfers after an ACT. We monitored activations and ICU transfers to ensure VRET interventions did not overburden the organizations' critical care resources. A p chart monitored ICU disposition following an ACT (Fig. 6). Data from October 2016 through March 2017 established the baseline and standard rules for centerline shifts applied. 18
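For readers unfamiliar with these chart types, the sketch below shows one common way to compute centerlines and control limits for a g-chart of days between rare events (the primary outcome chart) and for a p-chart with varying subgroup sizes (the balancing metric chart); the numbers are hypothetical, and the exact limit formulas used by a given QI software package may differ slightly.

```python
# Illustrative control-limit calculations; the event counts below are hypothetical.
import numpy as np

# --- g-chart: days between rare events (e.g., VRET) ---
days_between = np.array([12, 7, 15, 9, 11, 6])   # hypothetical gaps between events
g_bar = days_between.mean()                      # centerline
sigma_g = np.sqrt(g_bar * (g_bar + 1))           # geometric-distribution spread
g_ucl = g_bar + 3 * sigma_g                      # a point above this suggests special
g_lcl = max(0.0, g_bar - 3 * sigma_g)            # cause (longer gaps = improvement)
print(f"g-chart: CL={g_bar:.1f}, UCL={g_ucl:.1f}, LCL={g_lcl:.1f}")

# --- p-chart: proportion of ACT calls resulting in ICU transfer, by month ---
transfers = np.array([9, 11, 8, 12, 10])         # hypothetical monthly numerators
acts = np.array([16, 20, 15, 22, 18])            # hypothetical monthly ACT counts
p_bar = transfers.sum() / acts.sum()             # overall centerline
se = np.sqrt(p_bar * (1 - p_bar) / acts)         # limits vary with subgroup size
p_ucl = np.minimum(1.0, p_bar + 3 * se)
p_lcl = np.maximum(0.0, p_bar - 3 * se)
print(f"p-chart: CL={p_bar:.2f}, monthly UCLs={np.round(p_ucl, 2)}")
```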
Ethical Considerations
Per institutional policy, this project met the definition of QI, not human subject research; therefore, institutional review board approval was not required.
RESULTS
During the baseline period, VRET occurred on average every 10 days. Following QI interventions, 2 centerline shifts were observed (Fig. 2). The initial shift occurred after 132 days without a VRET between April 2017 and August 2017, resulting in a new centerline of 79 days between VRET. The second shift occurred in November 2018, following a peak of 468 days between events, leading to a new and sustained centerline of 177 days between events. A total of 13 VRET with 1 mortality occurred during the 6-month baseline; 11 VRET and 2 mortalities occurred during the 3.5-year intervention period, for an overall VRET mortality of 12%.
Pareto analysis of events demonstrated contributing conditions similar to those of VRET during baseline and intervention periods (Fig. 3). Sepsis accounted for 69% of baseline VRET. Sepsis-related VRET decreased to 36% during the intervention period after sepsis alert implementation. During the baseline period, admission sources of VRET varied between ED (most common at 46%), direct admission, perioperative transfer, and ICU transfer (Fig. 4). During the intervention period, only 1 VRET was associated with a direct admission; the remaining VRET arrived from the ED.
Our leading indicator, the percentage of ACT events within 4 hours of admission, did not demonstrate the same level of improvement as our primary outcome VRET metric (Fig. 5). The centerline was 16.8% during the baseline period. Common cause variation was observed, with a modest decrease in late 2018 through 2021 resulting in a centerline of 14.5%.
For our process measure evaluating floor RN assessment of ED patients, we captured data from 97 patients. The pilot unit experienced 3 VRETs during the 6-month baseline period (October 2016-March 2017) and did not experience another VRET during the intervention period through October 2021.
Regarding balancing metrics, the percentage of patients transferred within 15 minutes of the ED RN documenting that care was complete was monitored for 7 months after the intervention. A 2-sample t test compared 5 months of preintervention and 7 months of postintervention data, which confirmed the pre-and postintervention values did not differ (P = 0.755). For the percent of ICU transfers after ACT, the baseline and intervention centerline was 56.5%, with only common cause variation (Fig. 6).
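The pre/post comparison of the ED transfer-time balancing measure can be reproduced in outline with a two-sample t test, as sketched below; the monthly percentages are hypothetical, and the paper does not state whether a pooled-variance or Welch version was used, so both are shown.

```python
# Illustrative two-sample t test on hypothetical monthly percentages.
from scipy import stats

# Percentage of patients transferred within 15 minutes of "care complete",
# by month (hypothetical values: 5 pre- and 7 post-intervention months).
pre  = [62.0, 58.5, 61.2, 60.4, 59.8]
post = [60.9, 61.5, 59.2, 62.3, 60.1, 58.8, 61.0]

t_pooled, p_pooled = stats.ttest_ind(pre, post)                   # classic 2-sample t test
t_welch,  p_welch  = stats.ttest_ind(pre, post, equal_var=False)  # Welch's variant

print(f"pooled:  t={t_pooled:.2f}, p={p_pooled:.3f}")
print(f"Welch's: t={t_welch:.2f}, p={p_welch:.3f}")
# A p-value well above 0.05 (as reported, P = 0.755) indicates no detectable
# change in ED-to-floor transfer timeliness after the interventions.
```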
DISCUSSION
Efforts to reduce the frequency of very rapid emergency transfers (VRET) far exceeded our initial aim of 25 days. A new mean of 79 days between events 6 months after interventions was followed by nearly eliminating VRET, as evidenced by achieving 468 days between events. Reducing VRET was part of our larger goal: ET and code blues reduction through improved situational awareness (SA). Albeit a rare event, our study sample's overall mortality of 12% and our bedside clinicians' passion for decreasing these events compelled us to reduce VRET.
Our interventions focused on recognizing early clinical deterioration, followed by prompt mitigation and escalation. Our improvement had initial success with multidisciplinary interventions sustained by a change in culture, buy-in, and communication targeting VRET prevention. Concurrent efforts targeting improved sepsis outcomes aligned with the overarching goal of VRET reduction.
Sepsis is associated with pediatric morbidity and mortality. 19 Prompt recognition and response drive improved outcomes. EHR sepsis screening tools assist clinicians with earlier recognition and team communication. 16,17,19 Concurrent sepsis QI work promoted earlier detection of patient deterioration. Almost 70% of our baseline VRET were attributed to sepsis (Fig. 3, top), compared with only 36% postintervention (Fig. 3, bottom), indicating improvement. Interventions targeted the identification of patients at risk for early clinical deterioration before admission to the inpatient floor. Also targeted were ED and direct admissions, PEWS assessments, earlier vital signs trending, and intentional admitting unit decisions based on unit resources.
Inaccurate admission unit placement, such as a mismatch of patient acuity and unit resources, can result in unplanned transfers to higher levels of care, which are associated with higher mortality and a longer length of stay. 20 For example, Mansel and colleagues reviewed all "rapid" transfers to the PICU within 4 hours of admission and found those patients meeting the ET definition had significantly increased mortality. 11 Other studies have demonstrated higher mortality for ET than for non-ET transfers, supporting the importance of eliminating this proximal precursor of code blue events. 11,21,22 Though inaccurate admission placement can lead to VRET, we also recognize patients can experience rapid disease progression once admitted, thus the importance of situational awareness to identify and mitigate clinical deterioration at a system level.
Baseline data presented an opportunity to engage ambulatory colleagues in reducing VRET by ensuring appropriate direct admission placement. Direct admissions are important in decreasing ED utilization, 23 but there are inherent risks if the admitting unit's resource capacity does not match the patient's clinical status. Our baseline data revealed direct admissions were associated with a third of the VRET. We developed a direct admission vital sign and assessment algorithm stressing communication with the inpatient team (SDC 2, http://links. lww.com/PQ9/A479). Direct admissions were not recommended for certain high-risk patient populations (ie, bone marrow transplants).
We leveraged the SOD role created to support the Watcher Program to mitigate VRET risk by automating notification of new Watchers, including newly admitted at-risk patients. This streamlined communication to the SOD promotes timely risk awareness.
Sustaining the pilot of the inpatient unit nurse in-person patient assessment while in the ED contributed to eliminating VRET. This intervention leveraged the previously established role of this unit's "care partner," an experienced nurse without a patient assignment who assists with various workflows, including patient assessments. This role allowed flexibility in leaving the department to assess incoming patients in the ED. Initially, the care partner attempted to evaluate any admission from the ED but later focused on patients at risk for sepsis or respiratory distress. Anecdotal reports suggested this intervention improved ED-to-inpatient collaboration and communication between RNs. More importantly, this intervention led to at least 1 documented and avoided VRET, which is significant given the high mortality and the rarity of these events. Also notable, there was no change in the balancing measure of ED length of stay, suggesting the additional RN assessment did not lead to system inefficiencies. The pilot unit has not had a VRET since implementing this intervention. Unfortunately, the model was not spread beyond the pilot unit, given constraints in existing staffing models.
Limitations
Rare events are challenging to target for improvement and often require a multidisciplinary and multi-step approach to address the problem. Identifying which interventions made an impact was difficult. Though we had multiple interventions, we only defined 1 process metric, which we tracked for a limited time. Additionally, we do not know the effects the Covid-19 pandemic may have had on the events. Though patient volumes and, thus, opportunities decreased, we experienced 2 centerline shifts before the pandemic. Despite related challenges such as Covid-19 diagnosis, multisystem inflammatory syndrome in children, isolation protocols, and generalized healthcare staffing crisis, we sustained our improvements. In 2021, our volumes were near prepandemic without a return of events. This work may not be generalizable to organizations without a strong QI culture and clinical informatics resources.
CONCLUSIONS
This study identified a subset of emergency transfers that required different strategies to reduce and nearly eliminate the event type. Targeted efforts to dramatically reduce very rapid emergency transfers were successful; continued vigilance, ongoing event review for learning and action, and future interventions utilizing QI methodology will be important to sustain this endeavor. In addition, future quality improvement work will involve predictive analytics to identify at-risk patients better and incorporate knowledge from colleagues at other hospitals working to eliminate similar events at their organizations. 24
Long-term HIV and tuberculosis outcomes in patients hospitalised with severe cutaneous adverse reactions
Background Treatment-limiting severe cutaneous adverse reactions (SCAR) occur more commonly amongst persons with HIV-associated tuberculosis (TB). The impact of SCAR on long-term HIV/TB outcomes is unknown. Methods Patients with TB and/or HIV admitted to Groote Schuur Hospital, Cape Town, South Africa with SCAR between 1/10/2018 and 30/09/2021 were eligible. Follow-up data was collected for 6- and 12-month outcomes: mortality, TB and antiretroviral therapy (ART) regimen changes, TB treatment completion, and CD4 count recovery. Results Forty-eight SCAR admissions included: 34, 11, and 3 HIV-associated TB, HIV-only and TB-only patients with 32, 13 and 3 cases of drug reaction with eosinophilia and systemic symptoms, Stevens-Johnson syndrome/toxic epidermal necrolysis and generalised bullous fixed-drug eruption respectively. Nine (19%), all HIV-positive (eight co-infected with TB), were deceased at 12-months, and 12(25%) were lost to follow-up. Amongst TB-SCAR patients, seven (21%) were discharged on all four first-line anti-TB drugs (FLTD), while 12(33%) had regimens with no FLTDs; 24/37(65%) completed TB treatment. Amongst HIV-SCAR patients, 10/31(32%) changed ART regimen. If retained in care (24/36), median (IQR) CD4 counts increased at 12-months post-SCAR (115(62–175) vs. 319(134–439) cells/uL). Conclusion SCAR admission amongst patients with HIV-associated TB results in substantial mortality, and considerable treatment complexity. However, if retained in care, TB regimens are successfully completed, and immune recovery is good despite SCAR.
TB treatment completion and cure rates approach 80% in HIV-associated TB patients in SA [7,8, p. 104]. However, data are limited on long-term TB outcomes in FLTD-associated SCAR. Our unit has pioneered sequential drug challenge (SDC) to assist in rapid re-initiation of all non-culprit FLTD in TB-SCAR [9]; yet, despite 88% of TB-SCAR patients undergoing SDC, only half remain on at least one FLTD [5], and their treatment completion rates are unknown. SA has the largest antiretroviral therapy (ART) program in the world, with >60% of PLWH on ART and around 90% having sustained virological suppression [10,11]. With no data on the short- and long-term impact of SCAR on ART and CD4 count recovery, and given data showing increased interruption of care following in-hospital ART commencement, there is potential concern [12,13]. This study aimed to describe the 6- and 12-month HIV and TB outcomes amongst patients hospitalised for HIV/TB-associated SCAR.
Patient selection and ethical approval
Patients with SCAR admitted to the dermatology ward of Groote Schuur Hospital (GSH), a tertiary hospital, in Cape Town, South Africa were reviewed for inclusion. In 2021, Cape Town city had an estimated population of 4.78 million people of which GSH serves approximately half [14]. At a provincial level, the estimated TB burden is one of the highest in the country [15]. SCAR patients were eligible for inclusion if they met the following criteria: i) > 12 years old, ii) either HIV-positive, active TB, or HIV-associated TB, iii) hospitalised due to SCAR necessitating treatment interruptions, iv) had a validated (possible, probable, or definite) SCAR phenotype of DRESS, SJS/TEN, or generalised bullous fixed-drug eruption (GBFDE), and v) consented to collection of their clinical data. The study and 12-month follow-up period spanned three years from 1st October 2018 to 30th September 2021. Patients were prospectively enrolled and had baseline data, phenotype validation and drug causality assessment performed as part of the Immune-mediated adverse drug reactions (IMARI) Africa study (University of Cape Town (UCT) Human Research Ethics Committee (HREC): R031/2018). IMARI uses RegiSCAR [16,17] phenotype validation for SJS/TEN and DRESS, and Naranjo and/or Alden scoring tools for drug causality assessment [18,19]. GBFDE was diagnosed by a dermatologist. The UCT HREC approved this study (577/2021).
Data collection, definitions, and analysis
Baseline TB, HIV and SCAR admission data was collected through the IMARI-registry. Baseline variables included: demographics and medical history; details of previous and current TB (including site-of-disease, method of diagnosis and starting treatment regimen); HIV details preadmission (including date of diagnosis, CD4 count, viral load (VL) and ART); and SCAR admission variables (including onset of reaction, clinical and laboratory markers of phenotype and severity, hospital length of stay (LOS), and SDC outcomes). Long-term TB and HIV outcomes were collected at 6- and 12-month time-points after discharge from SCAR hospitalisation. Virological suppression was defined as <400 copies/ml. To minimise missing data, several methods were used to collect data including: folder and drug allergy clinic record review, visit-tracking on the Clinicom hospital booking system, and the provincial Single Patient Viewer (SPV). Clinicom and SPV are electronic record systems tracking patient encounters across various levels of healthcare services in the Western Cape province. Additionally, SPV captures drug dispensing and laboratory information. The National Health Laboratory Services electronic results platform was searched for all HIV/TB laboratory testing performed at all care levels during the 12-month follow-up period. Fig. 1A describes TB and HIV outcome definitions used, and the window period allowed around 6- and 12-month time-points for key outcome variables. All data was stored on a password-protected electronic database (REDCap 12.0.19©, 2022 Vanderbilt University), and de-identified data was exported for analysis on Microsoft Excel, version 16.54 (Microsoft Corporation©, 2021) and STATA, version 15.1 (StataCorp. 2017. College Station, TX: StataCorp LLC.). All predictor variables with a p-value <0.2 in univariable logistic regression models were used to build multivariable logistic regression models using a forward stepwise method at the 0.05 level of significance. Variables that did not improve the model fit were dropped. Table 1 provides the baseline characteristics and TB and HIV disease details for the cohort of 48 validated HIV or TB-associated SCAR patients, and Fig. 1B illustrates the stratification by phenotype (32 DRESS, 13 SJS/TEN and 3 GBFDE) and HIV/TB status (34 HIV-associated TB, 3 TB-only and 11 HIV-only). The median (IQR) age was 38 (30-45) years, and 60% were female. Six patients had a history of previous TB with exposure to FLTD without documented SCAR (three were diagnosed with HIV concurrently, one soon after TB diagnosis and one diagnosed at an unknown timepoint; one was HIV-negative). On admission, 37 (77%) participants were on anti-TB treatment, and all except one were receiving FLTDs. TB was confirmed in 24/37 (65%) SCAR admissions, either by GeneXpert PCR alone (n = 19, 79%), culture alone (n = 3, 13%), or both (n = 2, 8%); all 24 confirmed TB cases were rifampicin-sensitive (one patient among them INH monoresistant). The remainder (n = 13) were started empirically on TB treatment based on clinical symptoms and suggestive imaging. Baseline TB characteristics were similar in people with HIV-associated TB compared to the overall cohort, except that all HIV-negative TB patients (n = 3) had pulmonary TB (PTB) alone while extrapulmonary TB (EPTB) occurred in 19/34 (56%) of HIV-associated TB patients.
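The two-stage variable selection described in the data-analysis paragraph above (univariable screen at p < 0.2, then forward stepwise entry at p < 0.05) can be outlined in code as below; this is a generic sketch using statsmodels with hypothetical predictor names, not the authors' analysis script.

```python
# Generic sketch of a univariable screen followed by forward stepwise selection.
import statsmodels.api as sm

def univariable_screen(y, X, alpha=0.2):
    """Keep predictors whose univariable logistic-regression p-value is < alpha."""
    keep = []
    for col in X.columns:
        m = sm.Logit(y, sm.add_constant(X[[col]])).fit(disp=0)
        if m.pvalues[col] < alpha:
            keep.append(col)
    return keep

def forward_stepwise(y, X, candidates, alpha=0.05):
    """Add, one at a time, the remaining candidate with the smallest p-value below alpha."""
    selected = []
    while True:
        pvals = {}
        for col in [c for c in candidates if c not in selected]:
            m = sm.Logit(y, sm.add_constant(X[selected + [col]])).fit(disp=0)
            pvals[col] = m.pvalues[col]
        if not pvals:
            break
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
    return selected

# Hypothetical usage: y = binary outcome (e.g., 12-month mortality),
# X = DataFrame of baseline predictors.
# candidates = univariable_screen(y, X)
# final_model_vars = forward_stepwise(y, X, candidates)
```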
Median (IQR) CD4 cell count was lower amongst people with HIV-associated TB compared with HIV-positive alone [90 (61-142) vs 269 (134-391) cells/uL; P = 0.162].
Baseline characteristics and clinical information
Overall, 45 (94%) participants were HIV-positive. Forty-four had baseline median (IQR) CD4 cell counts around the time of SCAR of 115 (62-175) cells/uL. Baseline VL results were only available for 15/45 (33%) within six months pre- and three months post-SCAR, but it is notable that only one of seven VLs available pre-SCAR had virological suppression. Nine patients had VL performed (one had a repeat VL) in the three months following SCAR, and virological suppression was noted in three participants. Pre-admission ART was documented for 31/45 (69%), with 26/31 (84%) on SA guideline-specified first-line ART and 5/31 (16%) on second-line ART [20]. Cotrimoxazole prophylaxis had been prescribed for 29/34 (85%) of patients with CD4 cell count <200 cells/uL.
SCAR phenotypes and offending drugs
Supplementary Table 1 details the RegiSCAR probability and clinical characteristics of the admission SCAR by phenotype. DRESS was the commonest phenotype, occurring in 32/48 (67%). No significant differences in demographics or TB and HIV baseline characteristics were noted between phenotypes. Amongst DRESS cases, 23/32 (72%) were definite or probable, while 10/13 (77%) of SJS/TEN cases were definite or probable. Eight of 13 cases had >30% body surface area (BSA) involvement and were designated TEN. The median LOS was 26 days for all SCAR and was similar across all phenotypes. Supplementary Table 2 provides details of suspected drugs with the highest Naranjo scores, and the outcomes of SDC. SDC to FLTD treatment was performed in 30/37 (81%) TB-SCAR; two patients died prior to SDC, one TB diagnosis was refuted, three went straight onto a modified regimen due to severity of organ involvement, and one did not undergo SDC for unknown reasons. Seventeen TB-SCAR patients had a positive reaction to ≥1 FLTD, with ten reacting to a single TB drug and seven to >1 FLTD.
Of the 45 HIV-SCAR cases, 38 (84%) were on ART at the time of discharge, of which 10 patients commenced first-line ART in-hospital. Thirty-six PLWH were alive at 12-months post-SCAR; 24 (67%) were still collecting ART, while 12 (33%) were no longer in HIV care. Of the 31 PLWH who were on ART pre-SCAR, regimens were changed in 10 (32%): four to new dolutegravir-based fixed-dose combinations, four due to a SCAR culprit drug in the initial ART regimen (two nevirapine, one tenofovir/efavirenz combination and one dolutegravir), and two for unknown reasons. PLWH who remained in care post-SCAR admission showed increases in median (IQR) CD4 counts at 6- and 12-months.
Figure legend: The study included prospective cases with different outcomes, as the following example cases show: Case A, admission for FLTD-SCAR with SDC in hospital, prolonged or modified TB treatment, and follow-up clinical information available to collect for 6 and 12 months; Case B, admission for FLTD-SCAR followed by death either during admission or in the 12-month follow-up period; Case C, admission for FLTD-SCAR with SDC in hospital, prolonged or modified TB treatment, and then loss to follow-up during the 12 months post-SCAR. FLTD, first-line anti-tuberculosis drugs; GSH, Groote Schuur Hospital; LTFU, loss to follow up; PHC, primary health care; SCAR, severe cutaneous adverse reaction; SDC, sequential drug challenge; TB, tuberculosis.
Discussion
Our study reports 6-and 12-month outcomes for the largest cohort of HIV and TB-associated SCAR reported to date, with >70% of patients with HIV-associated TB. Our major findings include: i) one-fifth of patients died, most commonly in the first three-months post-SCAR, ii) the majority of TB regimens require modification of ≥1 drug, but despite altered and prolonged therapy 65% of patients have successful TB outcomes, iii) nearly one-third of ART regimens are changed post-SCAR, and iv) if HIV-SCAR patients are retained in care, 6-and 12-month immune recovery can be expected.
People with HIV-associated TB admitted with SCAR had advanced immunosuppression, with a median CD4 cell count of 90 cells/uL. TB remains the leading cause of death in PLWH, and patients with a CD4 cell count <100 cells/uL have reported 6- and 12-month mortality of 6-25% [21,22]. Furthermore, a predominantly European review and survival analysis of SJS/TEN admissions, where HIV-infection was ±9%, showed mortality rates of up to 34% at one year [23]. Thus, although the mortality rate of 19% in our cohort is high, and indicates the profound vulnerability of this patient population, it does not appear that SCAR admission by itself significantly increased mortality. This is consistent with findings in a related cohort of only SJS/TEN-SCAR and a review of mortality in DRESS syndrome, where the mortality rate was 3%, at the lower end of the mortality scale of DRESS in HIV/TB-uninfected individuals in the developed world [24,25]. Several factors may be driving this lower-than-expected mortality amongst people with HIV-associated TB compared to other SCAR cohorts, including the younger population with less co-morbid organ dysfunction, and differences in immune responses to specific drugs, e.g., FLTD versus allopurinol [24]. However, due to the high number of people no longer in clinical care at 12-months, we are cautious in drawing conclusions regarding mortality and predictors of mortality. TB-SCAR necessitated modifying and lengthening TB treatment regimens in 80% of patients. Our unit has pioneered SDC to ensure that TB-SCAR patients, especially with associated HIV, are re-established timeously on as many FLTDs as possible [9]. In this cohort, nearly two-fifths of TB-SCAR patients had ≥2 FLTDs included in their regimens after SCAR, and seven patients recommenced all four FLTDs. This supports efforts to incorporate SDC for SCAR into national policy in high HIV/TB burden settings. Despite modification of TB regimens, 65% of SCAR patients had successful treatment outcomes. However, this is lower than SA studies that report TB treatment outcomes among TB patients in general (combined treatment completion and cure rates), which range from 70% to 82% and fall significantly short of the World Health Organization goal of 85% [21,[26][27][28].
HIV care was also disrupted by SCAR admission, with one-fifth changing ART regimens within 12-months post-SCAR. Furthermore, we could not find any record of HIV care or ART for 12 post-SCAR patients, which may reflect death, movement out of the province, or ART interruption. However, if HIV patients remained in care, immune recovery, as measured by CD4 cell count, progressively improved in the 12-months post-SCAR. CD4 count improvement was slower over the first 6-months compared to national expected rates [199 (89-427) vs. 315 (198-463) cells/uL], which may have several contributing factors including: interruption or delayed initiation of ART; lower baseline CD4 count, and even potentially direct immunological effects of SCAR [29,30]. However, these effects appeared to wane as 12-month CD4 cell counts were similar to the national average (median 319 vs. 358 cells/ uL).
This study has important limitations. Despite attempts at telephonic contact and home visits, for certain patients the linkage to ART and TB care services was reliant on SPV recording electronic clinical encounters and medication dispensing. Thus, we were unable to determine if patients were accessing care elsewhere, and we need to assume that dispensed medication equates to treatment adherence. Additionally, data capturing in some clinics may be incomplete, accounting for some of the missing data. Generally, CD4 cell count data supports ART adherence, and the integrated coverage of SPV is well established [31]. The observational nature of this study and reliance on routine clinical care meant there was missing data, particularly for VL, which is not regularly measured in primary care. Therefore, we have been cautious in our conclusions. This is the largest study of its kind following HIV, TB and HIV-associated TB individuals post-SCAR, and it demonstrates the complexity created by SCAR amongst this vulnerable population. It also demonstrates the impact of SCAR on HIV and TB treatment. Mortality, although high, was similar to that of other non-SCAR HIV/TB cohorts, demonstrating that the management and SDC strategy used may have optimised these patients' outcomes. However, there remains a need to support ongoing research to improve prevention and treatment of SCAR amongst PLWH [32,33]. Additionally, prospective registry-related follow-ups and clinical review may help to further improve both short- and long-term outcomes and understanding of the natural history in these complex patients.
Table footnotes: 1. Denominator used is number of HIV-positive in each cohort. 2. Denominator used is PLWH alive at 12-months. 3. Denominator used is number of patients on ART pre-SCAR. 4. The one patient who had a recurrence of TB was likely untreated, as they were discharged on a backbone regimen of moxifloxacin, terizidone and ethionamide but never returned for SDC. They subsequently returned with a DRESS syndrome likely secondary to cotrimoxazole and tolerated RHZE on reinitiation.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Figure legend: Median CD4 (cells/uL) counts at baseline, 6 months and 12 months for the overall cohort, the co-infected subgroup, and the DRESS and SJS/TEN SCAR phenotypes. GBFDE CD4 trends were not included as there was limited CD4 information available for the three patients with this SCAR phenotype. DRESS, drug rash with eosinophilia and systemic symptoms; GBFDE, generalised bullous fixed-drug eruption; HIV, human immunodeficiency virus; SCAR, severe cutaneous adverse reaction; SJS/TEN, Stevens-Johnson syndrome/toxic epidermal necrolysis; TB, tuberculosis.
Structure and Properties of Biopolymeric Fibrous Materials Based on Polyhydroxybutyrate–Metalloporphyrin Complexes
Ultrathin fibrous materials based on the natural bacterial polymer polyhydroxybutyrate (PHB) were prepared by the electrospinning method. Using scanning electron and optical microscopy techniques, the macrophysical characteristics of the fibrous layer were determined and classified. The physicomechanical characteristics of the resultant materials and their changes caused by ozonation were determined as well. Structure formation in ultrathin polyhydroxybutyrate fibers containing low concentrations of antibacterial additives was studied. The effect of low concentrations of zinc tetraphenylporphyrin and iron(III) chlorotetraphenylporphyrin complexes on the structure of polyhydroxybutyrate-based ultrathin fibers was elucidated. Techniques used in the study were X-ray diffraction analysis, the ESR spin probe method, differential scanning calorimetry, and optical and scanning electron microscopy. It was shown that addition of the metalloporphyrin complexes caused changes in the degree of crystallinity and in the crystallite size of the PHB fibers, while the proportion of dense domains in the amorphous phase of the polymer fiber increased.
INTRODUCTION
Development and study of biopolymer-based nonwoven fibrous materials intended for medical application attract much practical interest today [1]. One of the most promising methods for producing nonwoven materials with large surface area is electrospinning (ES). The ES method is based on pulling the polymer solution as a thin viscous jet in a field of mechanical and electrostatic forces, followed by forming a fiber with a diameter ranging from tens to thousands of nanometers.
Study of electrospun nonwoven materials allowed summarizing several key factors governing the structural organization in the material at the macrostructure [packing and relative positions of the nonwoven fabric elements (fibers)] [2] and the microstructure (orientation of the polymer molecules in a material) [3] levels.
Macrophysical characteristics of fibrous materials are essential for describing in detail the features of the fibrous layer and establishing the relationship between the fiber formation process and a number of properties determined by the parameters of both the individual fibers and the entire material. Among the basic characteristics of the structural organization in a fibrous material of critical importance are the following: the relative density of the fibers of the structure, fiber orientation index, materials intensity, average surface density, and average fiber diameter.
These characteristics have a sizable effect on the physicomechanical properties of fibrous materials. The overwhelming majority of electrospun polymer materials consist of sufficiently dry fibers that are practically incapable of reversible elastic deformations. One of the most important criteria for assessing the mechanical properties of such materials is the behavior under uniaxial tension test conditions.
With respect to medicinal products and materials, of great research interest is how they are influenced by ozonation, which is one of the most effective methods of their sterilization and disinfection [4]. Particularly important is assessment of the changes in the mechanical properties of the material caused by ozonation. Polymer matrices with bactericidal properties can be prepared with the use of various types of chemical compounds able to inhibit the growth of pathogenic microorganisms. Among new biologically active substances, mention should be made of metalloporphyrin complexes acting as homogeneous catalysts in the autooxidation of a number of biogenic substances. Intermediates generated during this process are reactive oxygen species such as the superoxide anion radical, peroxide and hydroxyl radicals, and hydrogen peroxide, with known cytostatic activity. Oxidation reactions involving these radicals and radical ions cause a bactericidal effect.
Highly porous polymeric carriers of biologically active substances find extensive biological and medicinal application as prolonged-action matrices, cellular engineering scaffolds, antibacterial therapeutic agents, controlled drug release matrices, etc. [5]. Antibacterial polymeric systems can be developed using porphyrin complexes with various metals [6]. High effectiveness is demonstrated by zinc and iron tetraphenylporphyrin complexes, which cause UV irradiation-induced conversion of molecular oxygen into reactive species exerting a strong oxidative effect on bacterial microflora. One of the most economically and technologically effective methods of creating film-type matrices based on nanosized and ultrathin fibers is electrostatic drawing of fibers from polymer solutions and melts [7]. Numerous studies have shown that the morphology of the polymer fibers significantly affects the set of physicomechanical and diffusion properties and the biodegradation kinetics [8]. The supramolecular structure of the fibers is influenced not only by the molecular characteristics of the polymer and the process parameters of electrospinning, such as polymer concentration, temperature, etc., but also by additions of substances of different chemical nature to the spinning solution [9,10]. In view of the above, obtaining electrospun fibers with a target morphology is an urgent and practically significant task.
The aim of this study was to examine the influence exerted by zinc tetraphenylporphyrin (ZnTPP) and iron(III) chlorotetraphenylporphyrin [Fe(III)ClTPP] complexes on the supramolecular structure of electrospun ultrathin poly-3-hydroxybutyrate (PHB) fibers.
EXPERIMENTAL
In our study we used the natural biodegradable polymer poly-3-hydroxybutyrate of the 16F series from Biomer (FRG), produced by bacterial fermentation. The viscosity-average molecular weight of PHB was 2.06×10^5. The fibers were obtained by the ES method using a single-capillary electrospinning laboratory unit at a capillary diameter of 0.1 mm, electric voltage of 12 kV, electrode gap of 18 cm, and solution conductivity of 10 μS/cm. The ZnTPP and Fe(III)ClTPP complexes were used to produce fibrous matrices with antiseptic properties [11][12][13]. Electrospinning solutions were prepared in chloroform at 50°C using an automatic magnetic stirrer. The PHB concentration in the solution was 7 wt %, and the content of the complexes in the electrospinning solution was 1, 3, or 5 wt % of the PHB mass.
The electrospinning conditions exert a great influence on the nature and structure of fiber distribution in the material. Importantly, the structure of the material as a whole is irregular, with randomly oriented fibers. In this study, fiber distribution was examined by a set of optical and scanning electron microscopy methods.
Mechanical properties were evaluated by the uniaxial stretching method on a DEVOTRANS (Turkey) tensile testing machine in accordance with GOST (State Standard) R 53226-2008 "Nonwoven fabrics: Methods for strength determination." Ozonation of the materials was carried out with the use of an ozonizer in the laboratory of the Emanuel Institute of Biochemical Physics, Russian Academy of Sciences. Ozone was generated from oxygen by an electric discharge process, in which an increase in voltage led to an increase in the gas concentration. The experiment was carried out at a working ozone concentration of 5.5×10^-5 M. The absorbed ozone volume was estimated using an SF-46 Lomo spectrophotometer via measuring the optical density of the medium at a wavelength of 254 nm. The gas flow rate was 101.8 mL/min, and the time of ozonation of the material samples ranged from 3 to 5 min.
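As an aside, the absorbed-ozone estimate from the 254 nm optical density can be illustrated with a short Beer–Lambert calculation. The molar absorption coefficient of ozone at 254 nm (about 3000 L mol⁻¹ cm⁻¹) and the 1 cm path length assumed below are not taken from the text; the sketch only shows the form of the computation.

    # Hedged sketch: ozone uptake from the optical density at 254 nm (Beer-Lambert law).
    EPS_254 = 3000.0       # L mol^-1 cm^-1, assumed molar absorptivity of ozone at 254 nm
    PATH_CM = 1.0          # cm, assumed optical path length
    FLOW_ML_MIN = 101.8    # gas flow rate quoted in the text, mL/min

    def ozone_concentration(optical_density: float) -> float:
        """Ozone concentration (mol/L) corresponding to a measured optical density."""
        return optical_density / (EPS_254 * PATH_CM)

    def absorbed_ozone_mol(od_inlet: float, od_outlet: float, minutes: float) -> float:
        """Moles of ozone taken up by the sample over `minutes` of exposure."""
        delta_c = ozone_concentration(od_inlet) - ozone_concentration(od_outlet)  # mol/L lost to the sample
        volume_l = FLOW_ML_MIN * minutes / 1000.0                                 # total gas volume passed, L
        return delta_c * volume_l

    # Example with hypothetical inlet/outlet optical densities over 5 min of ozonation.
    print(absorbed_ozone_mol(0.165, 0.150, 5.0))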
The X-ray diffraction analysis of the PHB fibers was carried out on a diffractometer with a linear position-sensitive (coordinate) detector [8,9] (CuKα radiation, sample-detector distance 110 mm, measurements in the region of small and large scattering angles using transmission geometry) and on an HZG4 diffractometer (Freiberger Präzisionsmechanik, FRG) with a diffracted-beam graphite monochromator in the Bragg-Brentano reflection geometry (CuKα radiation, measurements in the region of large scattering angles using reflection geometry). X-band ESR spectra were recorded on an EPR-V automated spectrometer (Russia). The TEMPO stable nitroxyl radical was used as a probe. The radical was introduced into the fibers from the gas phase at a temperature of 50°C; its concentration in the polymer did not exceed 10^-3 M. The geometry of the fibrous materials was examined with a Micromed Polar 3 ToupCam 5.1 MP (China) optical microscope in reflected light at a magnification of 200x and with a Hitachi TM-3000 scanning electron microscope (Japan) (at an accelerating voltage of 20 kV; a 100-200 Å thick gold layer was sprayed on the surface of the nonwoven fibrous material sample). The DSC study of the samples was conducted on a Netzsch DSC 204 F1 instrument in an argon atmosphere at a heating rate of 10°C/min.
RESULTS AND DISCUSSION
In this study, three main types of fiber distribution were identified: regular, medium, and random. Figure 1 shows microphotographs of the different types of fiber packing in the material, which were obtained with a Polar 3 Mikromed (Russia) polarizing transmission microscope. Table 1 lists a number of macrophysical characteristics that are distinctive to these materials and characterize the morphology of their fibrous layer.
The relative density of the structure corresponds to the proportion of fiber-free volume of the material and is related to the packing density of the fibers in the porous layer of the material; for electrospun materials it typically ranges from 80 to 98%. The specific mass of the fibers in the material is described through the surface density of the layer. The fiber orientation index characterizes the direction and specific features of their crimp per unit area for a specified thickness of the fibrous layer. Combined, these characteristics enable assessment of the efficiency of the ES process and provide a means to prevent many fiber surface defects, as well as elastic shrinkage and adhesion of the fibers during jet curing on the electrode, and also to influence the formation of the functional, in particular physicomechanical, properties.
An important parameter for assessing the uniformity and degree of variation of the characteristics of individual elements in the fibrous material structure is the fiber diameter distribution. In this study, it was evaluated from the per-unit-area numbers and average diameters of fibers (400×300 μm fields) as derived from a series of micrographs obtained by direct methods of optical and scanning microscopy (Fig. 2). Study of the structural features of the materials of interest showed that, with a decrease in the average fiber diameter, the fiber twist, crimp, and packing density tend to increase. This improves mechanical characteristics such as the breaking length of the material, while having little effect on the elongation at break.
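The per-field statistics described above can be summarized with a few lines of code; the sketch below is only an illustration of the bookkeeping (the diameter values are hypothetical).

    import numpy as np

    # Hedged sketch: per-field fiber diameter statistics from one 400x300 um micrograph.
    def diameter_stats(diameters_um):
        d = np.asarray(diameters_um, dtype=float)
        return {
            "fibers_per_field": int(d.size),
            "mean_diameter_um": float(d.mean()),
            "std_um": float(d.std(ddof=1)),
            "irregularity_percent": float(100.0 * d.std(ddof=1) / d.mean()),  # coefficient of variation
        }

    print(diameter_stats([1.2, 1.5, 2.8, 3.1, 2.2, 1.9]))  # hypothetical measurements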
Taken together, the above-listed characteristics of the macrostructure of the nonwoven materials allow fairly accurate assessment of the average interfiber distance and the fiber packing density, type, and average diameters, as well as deviations from average values, variation per unit area, and the presence of defects. The irregularity of the materials obtained in this study did not exceed 10%. Depending on the fiber distribution, the breaking load ranged from 1.4 to 2.2 N, and the relative deformation from 1.1 to 4%.
Considering the intended medicinal application of the materials studied, we chose the regular distribution as the best suited to evaluating the mechanical properties of the PHB-based nonwoven fibrous fabrics and their changes caused by sterilization with ozone. The volume of gas absorbed during ozonation, depending on the type of fiber distribution in the material structure, was estimated at 300-330, 400-440, and 450-480 mol/m^2 for the regular, medium, and random distribution types, respectively.
A series of experiments showed that, under the influence of ozone, the breaking load of the PHB-based nonwoven fibrous materials displayed an approximately twofold increase (see Table 2). Moreover, mechanical properties such as the modulus of elasticity, relative deformation, and maximum elongation at break noticeably increased as well.
The improvement of the strength properties of the material caused by ozonation may be accounted for primarily by oxidation of the PHB macromolecules. An increase in the number of functional groups, characteristic of the ozone oxidation mechanism, causes an increase in the polarity of the molecules via accumulation of oxygen-containing functional groups, thereby improving the strength characteristics of the material. Another possible reason for the enhancement of the strength properties consists in increases in the degree of crystallinity and in the size of the crystalline PHB formations in the fiber, caused by ozonation.
Further, we considered some aspects of the formation of the supramolecular structure of the PHB fibers upon addition of low concentrations of antibacterial agents such as tetraphenylporphyrin metallocomplexes.
Adding the ZnTPP complex to the PHB solution caused significant changes in the fiber morphology. The original PHB fiber (inset in Fig. 3a) exhibited alternation of cylindrical and spindle-like segments. The presence of thickened segments in the fiber structure is attributable to the low electrical conductivity and low surface tension of the polymer spinning solution. The average diameter of the cylindrical segments of the fiber was 1-3 μm, and the spindle-like segments had a maximum diameter of ~10 μm and a length of 20-30 μm. Adding 1-5% ZnTPP to the PHB solution caused complete disappearance of the spindle-like segments in the fiber structure (inset in Fig. 3b). With an increase in the ZnTPP concentration in the spinning solution, its viscosity grew due to the intermolecular interaction of the polar molecules of the complex with the polar groups of PHB. The surface tension of the solution increased, and the primary solution jet did not split. The diameter of the resultant PHB fibers containing 3-5% ZnTPP complex was estimated at 3-4 μm. Figure 3 presents the X-ray diffraction patterns recorded in the region of large scattering angles for the PHB fibers containing 0 and 5% ZnTPP. The position of the peaks (lines) in these diffraction patterns corresponds to the crystal lattice of PHB with an orthorhombic unit cell (a = 0.576 nm, b = 1.320 nm, c = 0.596 nm). According to the optical microscopy data, the PHB fibers lie predominantly in the material plane, so no noticeable preferred orientation is manifested by the crystallites in the fibers proper. It was found that, upon adding 1, 3, and 5% ZnTPP, the degree of crystallinity of the PHB fibers remained unchanged at 45-53%. The calculation of the average crystallite sizes was based on the (020), (101), and (111) diffraction peaks obtained using the Bragg-Brentano geometry for all the samples. The crystallite sizes are identical for PHB and PHB containing 5% ZnTPP, specifically L_020 = 26-27 nm, with L_101 and L_111 of 5-6 nm.
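The text does not name the formula used for the crystallite sizes; a common choice for sizes derived from the breadth of individual reflections is the Scherrer relation, shown below purely as an illustrative assumption (the peak position and width values are hypothetical).

    import numpy as np

    # Hedged sketch: Scherrer estimate of crystallite size from one diffraction peak.
    WAVELENGTH_NM = 0.15418   # Cu K-alpha wavelength
    K_SHAPE = 0.9             # dimensionless shape factor, assumed

    def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float) -> float:
        beta = np.deg2rad(fwhm_deg)               # peak breadth (FWHM) in radians
        theta = np.deg2rad(two_theta_deg / 2.0)   # Bragg angle
        return K_SHAPE * WAVELENGTH_NM / (beta * np.cos(theta))

    # e.g. a (020) reflection near 2theta = 13.4 deg with a 0.33 deg FWHM gives L ~ 24 nm,
    # the same order as the L_020 values quoted above.
    print(scherrer_size_nm(0.33, 13.4))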
The small-angle X-ray diffraction patterns of the fibers of PHB containing 0, 1, 3, and 5% ZnTPP exhibit a peak corresponding to a long period D = (S_max)^-1 of 5.4-5.7 nm for all the samples studied. Small-angle X-ray scattering analysis by the Tsvankin method revealed a PHB crystallite thickness of ~4 nm for all the samples. The latter data agree very well with the crystallite thicknesses derived from the breadth of the diffraction lines in the region of large scattering angles.
The structure of the amorphous domains is substantially determined by the proportion of the crystalline and paracrystalline formations. Small ZnTPP additions cause the proportion of paracrystalline structures in PHB to increase, thereby changing the structural and dynamic properties of the amorphous domains. The molecular dynamics of these domains can be studied most conveniently by the ESR method using stable radicals. The ESR spectra of the TEMPO radical in the PHB matrix exhibit a complex shape, being a superposition of two spectra corresponding to two populations of radicals with different correlation times τ_1 and τ_2, of which τ_1 characterizes the molecular mobility in denser, and τ_2 in looser, amorphous domains (Fig. 4). With increasing porphyrin concentration in the PHB fibers, the proportion of dense domains exhibited a nearly sixfold increase. The correlation time in the dense domains also increased, with τ changing most dramatically upon adding 1% porphyrin to the fiber; a further increase in the ZnTPP concentration caused a smoother increase in the correlation time.
In this study, the correlation time τ_2 calculated from the ESR spectra fell in the range 5×10^-11 s < τ_2 < 10^-9 s. The dependence of τ_2 on the ZnTPP concentration is nonlinear. We presumed that the increase in the correlation time τ in the mixed compositions was due to deceleration of the molecular mobility because of amorphous phase condensation. An increase in the proportion of paracrystalline structures is commonly paralleled by an increase in the proportion of straightened chains in the amorphous interlayer and, thereby, by deceleration of the molecular mobility. Such changes in the amorphous phase are accompanied by a decrease in the radical concentration.
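The text does not state how τ was extracted from the spectra; for nitroxide probes in the fast-motion regime a frequently quoted estimate relates τ to the low-field line width and the ratio of the low- and high-field line intensities. The sketch below uses that commonly quoted approximation purely as an assumption, with hypothetical spectral parameters.

    # Hedged sketch: fast-motion estimate of the rotational correlation time of a nitroxide probe.
    def tau_fast_motion(delta_h_plus_gauss: float, i_plus: float, i_minus: float) -> float:
        """Correlation time (s) from the low-field line width (G) and the +1/-1 line intensity ratio (assumed formula)."""
        return 6.65e-10 * delta_h_plus_gauss * ((i_plus / i_minus) ** 0.5 - 1.0)

    # Hypothetical line parameters give ~3e-10 s, inside the 5e-11 to 1e-9 s window quoted above.
    print(tau_fast_motion(1.5, 1.0, 0.6))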
Thus, the introduction of ZnTPP into PHB led to condensation of the amorphous phase of the polymer during the fiber spinning process and, accordingly, to deceleration of the molecular mobility. The proportion of dense domains increased and, as a result, the radical concentration in the fiber decreased.
Next, we studied nonwoven fibrous materials containing the Fe(III)ClTPP complex. As seen from Fig. 5, addition of this complex to the PHB solution caused significant changes in the fiber morphology. The original PHB fiber (Fig. 1a) exhibited alternation of cylindrical and spindle-like segments. The presence of thickened segments in the fiber structure can be explained by the low electrical conductivity and low surface tension of the polymer spinning solution. The average diameter of the cylindrical segments of the fiber was estimated at 1-6 μm, and the spindle-like segments had a maximum diameter of ~10 μm and a length of 20-30 μm. Adding 1% Fe(III)ClTPP complex led to the formation of fibers with average diameters of 1.5, 3, and 5 μm in the fibrous material. Fibers formed at higher Fe(III)ClTPP concentrations had an average diameter of 3 μm, while fibers with diameters of 1.5 and 6 μm almost completely disappeared (Fig. 5b).
DSC studies of the fibrous materials revealed a sharp increase in the proportion of the crystalline phase of PHB with increasing FeClTPP concentration in the mixed composition. The data obtained suggest that the porphyrin produced a plasticizing effect on PHB crystallization in the fiber electrospinning process. Specifically, the addition of FeClTPP caused increases in the intermolecular distance and in the mobility of the chains. This facilitated orientation with increasing FeClTPP concentration, leading to an increased proportion of crystallites and mesomorphic structures.
The data from small-angle X-ray diffraction analysis of the original PHB fibers and of the PHB fibers of different compositions containing FeClTPP suggest the following. Adding FeClTPP caused an increase in the proportion of interfibrillar regions with high proportions of oriented macromolecules, and it is specifically these molecules that produced additional crystalline formations of larger longitudinal size.
The structure of the amorphous domains is largely determined by the proportion of crystalline formations. Accordingly, adding low FeClTPP concentrations caused an increase in the degree of crystallinity of PHB and a change in the crystallite sizes, thereby affecting the structural and dynamic properties of the amorphous domains.
As seen from the ESR data (Fig. 6), an increase in the FeClTPP concentration in the fiber caused the proportion of dense domains to increase. The same holds for the radical correlation time, whereby the molecular mobility decelerated in the dense domains of the polymer, while remaining practically unchanged in the loose domains of the amorphous phase.
Data from experiments on inoculation of test cultures [S. aureus p 209 (Staphylococcus aureus), S. typhimurium (Salmonella typhimurium), and E. coli 1257 (Escherichia coli)] on the PHB-based nonwovens impregnated with iron(III) porphyrin complex are indicative of good prospects for hygienic application of these materials [14].
CONCLUSIONS
Based on the experimental data obtained in this study, all the samples of the nonwoven material prepared from PHB by the electrospinning method at process parameters maintained in the specified range can be divided into three groups that reliably describe the properties of the material structure: those characterized by regular, medium, and random fiber distribution. The mutual influence of the crystalline and amorphous domains in crystallizing biopolymers and their compositions remains a fairly complex and poorly studied area of modern polymer materials science. Our research showed that adding low ZnTPP concentrations to the PHB fibers caused an increase in the proportion of crystalline and paracrystalline structures. As a response to these changes in the crystalline phase, changes in the spin probe rotation dynamics in the amorphous domains were observed. X-ray diffraction analysis showed that 1-5% ZnTPP additions caused no changes in the supramolecular structure of PHB, including the unit cell parameters of the crystal structure, degree of crystallinity, crystallite size, long period, and degree of crystallinity in the fibril. At the same time, adding Fe(III)ClTPP led to additional crystallization and condensation of the amorphous domains in the PHB fibers.
Biochemical tests revealed a great potential offered by the PHB-based nonwoven fibrous materials for medical applications, e.g., for the treatment of bacterial skin diseases. Our experiments confirmed the effectiveness of ozone sterilization of products based on these materials without compromising their mechanical properties.
|
2021-04-19T13:40:01.758Z
|
2021-03-01T00:00:00.000
|
{
"year": 2021,
"sha1": "6fe37d4fe4fc5a1351110ec79a38e03a69c7d6e9",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1134/S1070363221030245.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd375f89fa8831651626100713a51ce3dc3b3c67",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
260378798
|
pes2o/s2orc
|
v3-fos-license
|
A quantum double-or-nothing game: The Kelly Criterion for Spins
A sequence of spin-1/2 particles polarised in one of two possible directions is presented to an experimenter, who can wager in a double-or-nothing game on the outcomes of measurements in freely chosen polarisation directions. Wealth is accrued through astute betting. As information is gained from the stream of particles, the measurement directions are progressively adjusted, and the portfolio growth rate is raised. The optimal quantum strategy is determined numerically and shown to differ from the classical strategy, which is associated with the Kelly criterion. The paper contributes to the development of quantum finance, as aspects of portfolio optimisation are extended to the quantum realm.
I. INTRODUCTION
An early application of information theory to gambling and finance can be found in a 1956 paper by John L. Kelly [1], an associate of Claude Shannon. Kelly showed how a logarithmic utility maximising investor should allocate capital between bets with known winning probabilities and pay-outs. In the simplest case the investor bets a fraction of wealth on the outcome of a biased coin in the 'double-or-nothing' game. An outcome of 'heads' doubles the gambler's stake, while 'tails' loses the stake. The optimal fraction, called the 'Kelly criterion', is to bet 2p − 1 for a coin that comes up heads with probability p, assuming p ≥ 1/2. This maximises the logarithmic utility, which is equivalent to maximising the growth rate of the portfolio.
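As a quick numerical sanity check of this classical statement (our own illustration, not part of the original paper), one can maximise the expected log growth of the double-or-nothing bet directly and recover f* = 2p − 1:

    import numpy as np

    # Expected log2 growth per round when betting a fraction f of wealth on a coin
    # that pays double-or-nothing and comes up heads with probability p.
    def log_growth(f, p):
        return p * np.log2(1.0 + f) + (1.0 - p) * np.log2(1.0 - f)

    p = 0.6
    fractions = np.linspace(0.0, 0.999, 100_000)
    f_star = fractions[np.argmax(log_growth(fractions, p))]
    print(f_star, 2 * p - 1)   # both approximately 0.2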
Kelly's results are an elaboration of earlier work of Daniel Bernoulli and others, who studied the 'St. Petersburg paradox'. The question was how to evaluate a coin flipping game where the pay-out depends on how often 'heads' occurs in succession [1]. If 'tails' first occurs on the N-th throw, then the pay-out is equal to 2^N. The coin is considered 'fair' and the probability of having a pay-out of 2^N is 2^{-N}. The paradox was that whilst the expectation value of the bet is not finite, the value gamblers were willing to pay for such a pay-out was finite. One 'resolution' of the paradox goes back to Daniel Bernoulli, who in 1738 suggested that large gains should be discounted more than smaller gains or losses, since gains and losses are perceived non-linearly due to risk-aversion. In general, returns should be adjusted for risk. As the saying goes, 'a bird in the hand is better than two in the bush'. Bernoulli suggested as a value function, which turns gains into utility, the logarithm of the gain. This shifts benefits from absolute to relative gains and makes the result independent of the current wealth of the bettor. His derivation uses the equation dy = k dx/x, where y is value or utility, x the wealth and k a constant. This leads to y = k log x + c. The result is that gambles are worth taking if the value or utility increases when participating in the gamble at the price asked. The utility of the bet following Bernoulli's description is then ∑_{i=1}^{∞} 2^{-i} log(2^i) = log 4, assuming k = 1 and c = 0. Therefore, a bettor following Bernoulli's rationale would be willing to pay up to 4 units to play the game. A person offering the bet would have to account for possible big losses, which are accounted differently from big gains, and would not normally arrive at the same price.
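The value log 4 quoted above is easy to verify numerically; the two-line check below is ours, not the paper's:

    import math

    # Bernoulli's valuation: sum over i >= 1 of 2^{-i} * log(2^i), truncated at a large index.
    utility = sum(2.0 ** (-i) * math.log(2.0 ** i) for i in range(1, 60))
    print(utility, math.log(4))   # both approximately 1.3863, i.e. a fair price of 4 units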
The work of Kelly was championed by Cover, Ziemba and others, and found general application in finance. It overlaps with the popular mean-variance portfolio theory introduced by Harry Markowitz. A review of modern developments of the Kelly criterion can be found in MacLean et al. [4], and some applications are in [5][6][7][8].
Instead of a series of coin flips we consider a sequence of examined spin-1/2 particles. Again, there is a 'double-or-nothing' game to wager on, but the pay-outs depend on the results of variable measurements. What distinguishes the classical from the quantum case is the added degree of freedom associated with the measurement directions of the quantum particles. Instead of just flipping a coin one can measure the spin particle in an arbitrary direction. The outcome probability is direction dependent.
This paper highlights some novel aspects of quantum game theory, quantum gambling and, by extension, quantum finance. 'Quantum gambling' is likely to have applications in finance as the quantum scale becomes of practical importance in communication and computation. The importance of hedging against errors that have economic consequences rises. The quantum gambling toy model described in the paper should be of relevance for analysing such situations. The conclusion provides further information.
A summary of the rest of the paper follows. The next section introduces gambling with spin particles, followed by a section on the quantum version of the coin flipping game. The subsequent section gives a heuristic description of the optimal strategies. Section five delves into numerical calculations; in a sub-section the special case of prior 1/2 is considered. The penultimate section covers the optimal strategy expressed as 'contours of equivalence', and a conclusion rounds off the paper.
II. GAMBLING WITH SPIN PARTICLES
This section describes how one can 'quantum gamble' with spin particles. A gambler, or to use the more courteous term: investor [2], is presented with a sequence of quantum spin-1/2 particles. Unbeknown to the investor is the polarisation of the particles, which are either all prepared in state ρ or state σ. The investor is only informed that the probability for ρ is ξ and for σ is 1 − ξ. The investor is further told that a 'double-or-nothing' game is linked to measurements of the particles. The investor can bet any fraction [3] of owned assets on the outcome of the measurement. If the measurement outcome in the direction of choice is spin-up, then the investor's stake is doubled. If, on the other hand, the outcome is spin-down, the stake is forfeited.
If only one bet is considered, then the straightforward aim is to maximize the winning probability, which fixes the measurement direction. If a sequence of bets is to be considered, another factor influences the choices, since judicious measurements are accompanied by a gain of information, and allow progressively a more and more accurate determination of the polarisation direction of the particles. Balancing short-term winning probability with information gain and increased profitability is novel and reflects the quantum nature of the problem. In the next section, some relevant calculations for spin gambling are presented.
III. THE QUANTUM VERSION OF THE COIN FLIPPING GAME
The tunable inputs are then reduced to δ, ξ and the number of available particles N. The index i runs from 1 to N. The initial wealth W_1 is 1 without loss of generality, since the wealth axis [4] can be arbitrarily scaled. The optimal measurement angle ϕ that maximises the information gain and the optimal portfolio growth angle φ̂ differ by π/4, i.e. φ̂ = ϕ + π/4, if and only if ξ = 1/2. For each of the N particles gambled on and analysed, a separate measurement angle α_i is chosen. The probability p_i^up of a spin-up outcome for a measurement in the α_i direction follows from the Born rule and determines the Bayesian updating of the prior, which takes one form for spin-up outcomes and another for spin-down outcomes. The optimal investment fraction for a chosen α_i is f_i = 2 p_i^up − 1 if p_i^up ≥ 1/2; otherwise α_i is shifted by π/2, replacing p_i^up by 1 − p_i^up. The wealth process is updated to W_{i+1} = (1 + f_i) W_i in the case of a spin-up outcome and to W_{i+1} = (1 − f_i) W_i in the case of a spin-down outcome. If one now simulates a chain of measurements in the α_i directions, then one gets a series of spin-up and spin-down outcomes with probabilities p_i^up and 1 − p_i^up. Representing a spin-up outcome by + and a spin-down outcome by −, the outcome sequence looks like + − + + + − ... + − + + + + + +, with progressively more +'s as the prior edges away from 1/2. The optimisation algorithm adjusts the α_i to maximise the probability-weighted logarithmic utility of W_N.
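To make the update rules above concrete, here is a small sketch of one round (our own illustration, not taken from the paper). The Born-rule parametrisation assumed below — the two candidate states polarised at angles ±δ/2 in a fixed plane, with spin-up probability cos²((α − θ)/2) for a state polarised at θ — is our notational assumption; the excerpt does not spell it out.

    import numpy as np

    # Hedged sketch of one betting round: Kelly bet, wealth update, Bayesian prior update.
    def one_round(wealth, xi, alpha, delta, outcome_up):
        p_rho = np.cos((alpha - delta / 2.0) / 2.0) ** 2    # P(up | state rho), assumed parametrisation
        p_sigma = np.cos((alpha + delta / 2.0) / 2.0) ** 2  # P(up | state sigma)
        p_up = xi * p_rho + (1.0 - xi) * p_sigma            # predictive probability of spin-up
        f = abs(2.0 * p_up - 1.0)                           # Kelly fraction on the likelier outcome
        won = (outcome_up == (p_up >= 0.5))
        wealth *= (1.0 + f) if won else (1.0 - f)
        if outcome_up:                                      # Bayes update of the prior for rho
            xi = xi * p_rho / p_up
        else:
            xi = xi * (1.0 - p_rho) / (1.0 - p_up)
        return wealth, xi

    print(one_round(1.0, 0.5, np.deg2rad(30.0), np.deg2rad(60.0), outcome_up=True))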
IV. THE OPTIMAL STRATEGY: A HEURISTIC DESCRIPTION
In this section, a heuristic description of the optimal strategy is given. It requires the backward solution of an optimization problem, similar to the evaluation of a European option on a binomial tree.
If only one particle remains to be measured, then maximization of the spin-up probability is key, since any 'information gain' cannot be exploited in the future. Therefore, in the last round one maximises solely portfolio growth. This determines the angle α_N and with it the spin-up probability (an expression involving sin(δ) and cos(δ)). The 'Kelly criterion' then allows determination of the optimal fraction, and the resulting wealth (and ξ) can be calculated for both measurement outcomes, and with it the achievable probability-weighted utility.
Next we move one step back. The angle α_{N−1} at step N − 1 updates both the wealth and the prior in a distinct way for each of the two possible measurement outcomes. By maximising over the range of allowed measurement angles α_{N−1}, one can determine the maximal achievable utility as the probability-weighted sum of the utility at the final step. This is then the utility associated with that particular point in the two-dimensional wealth-and-ξ space. A similar mechanism works for earlier steps all the way back to step one.
The optimal strategy is then given by the contour map of equivalent utility lines at each step over the wealth and ξ space. Figures 1-6 and 9-10 show examples of contour lines for polarisation angles δ of 7.5°, 30°, 60° and 90°. Due to limitations of the mesh, some boundary effects can be seen in Figs. 5 and 6. Each angle comes with a pair of plots, i.e. with and without the wealth presented on a logarithmic scale. The points connected by the k-th contour line yield the same final wealth utility at step N. Any logarithmic utility maximising bettor should be indifferent between any of the points on these lines. The endpoints of the k-th contour line correspond to ξ = 1 or ξ = 0 and a wealth of W_N/2^{N−k}, i.e. a doubling of wealth at every remaining step until one reaches W_N. The other points on the same contour line have higher wealth but a ξ closer to 1/2, and therefore a lower optimal winning probability. Besides the contour graphs, the heat maps (see Figs. 7 and 8) are also of interest; they show that utility is strongly linked to the remaining number of steps as well as to the wealth W, but less to the prior ξ. A description of the optimal strategy derived by numerical means follows next.
V. NUMERICAL SIMULATION: ALGORITHM AND PSEUDO CODE
The numerical simulation is the topic of this section. The algorithm works backwards and defines the utility surface at each step as a function of the wealth value W and the prior ξ. At the final step N, all points on the straight-line contour have a fixed wealth W_N but an arbitrary value of ξ; this can be represented as a vector (W_N, ξ_N, N, log_2(W_N)). The utility surface for the previous step is then defined, and the general equations for calculating U_k(W, ξ) are taken from Section III. The utility value for each step is calculated through an iterative process; the result is a multidimensional array of utility values, with non-grid-point utilities evaluated by interpolation. The core of the pseudocode outlining this procedure is, at each grid point, to define the utility function using the above values and to optimize α to find the α* that maximizes utility; a sketch of the recursion is given below. Visualization is achieved by extracting contours from the array. The simulation was written in Python; NumPy was used for numerical operations, SciPy for optimization and interpolation, and Matplotlib for visualization.
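The following is a minimal sketch of that backward recursion, under the same hedged assumptions as before (candidate states at ±δ/2, Born-rule outcome probabilities); it is our illustration, not the authors' code. Because the utility is logarithmic and the wealth update is multiplicative, the optimisation over α does not depend on W, so the sketch tracks only the prior ξ, whereas the paper works on the full two-dimensional wealth-and-prior grid.

    import numpy as np

    # Hedged sketch: backward recursion for the expected log2 wealth growth V_k(xi).
    def backward_values(delta, n_steps, xi_grid=None, alpha_grid=None):
        xi_grid = np.linspace(0.0, 1.0, 201) if xi_grid is None else xi_grid
        alpha_grid = np.linspace(0.0, np.pi, 181) if alpha_grid is None else alpha_grid
        v_next = np.zeros_like(xi_grid)          # no future value after the last bet
        for _ in range(n_steps):
            v_curr = np.empty_like(xi_grid)
            for k, xi in enumerate(xi_grid):
                best = -np.inf
                for alpha in alpha_grid:
                    p_rho = np.cos((alpha - delta / 2.0) / 2.0) ** 2   # assumed Born-rule model
                    p_sig = np.cos((alpha + delta / 2.0) / 2.0) ** 2
                    p_up = np.clip(xi * p_rho + (1.0 - xi) * p_sig, 1e-12, 1.0 - 1e-12)
                    # Kelly bet on the likelier outcome: per-step log2 growth terms.
                    g_up, g_dn = np.log2(2.0 * p_up), np.log2(2.0 * (1.0 - p_up))
                    xi_up = xi * p_rho / p_up                          # Bayes updates of the prior
                    xi_dn = xi * (1.0 - p_rho) / (1.0 - p_up)
                    val = (p_up * (g_up + np.interp(xi_up, xi_grid, v_next))
                           + (1.0 - p_up) * (g_dn + np.interp(xi_dn, xi_grid, v_next)))
                    best = max(best, val)
                v_curr[k] = best
            v_next = v_curr
        return xi_grid, v_next   # expected log2 growth over n_steps bets, per starting prior

    # Example: three particles, polarisation angle delta = 60 degrees.
    grid, values = backward_values(np.deg2rad(60.0), n_steps=3)
    print(values[grid.size // 2])   # value at prior xi = 1/2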
VI. CONTOURS OF EQUIVALENCE
In this section a technique for visualising the optimal gambling strategy is introduced. It relies on finding points in the two dimensional wealth and prior space that lead to the same final 'utility' outcome. These sets of equal 'utility' points form contour lines. Each line differs by the number of remaining particles to be measured and, consequently, bets to be made.
These contour lines all have easily calculated levels of wealth at the k-th step from the end for extreme values of the prior. The anchor points for the prior ξ at 0 and 1 are associated with a wealth of 2^{k−N} W_N. There is a simple strategy that moves from these points to the final contour: since the probability of winning is one, the gambler bets everything and doubles the money at every stage. Between these two extremes lies a range of points, which require numerical evaluation.
A. The curious case of prior one half: growth without information gain or information gain without growth
In this subsection, the special case of prior one-half is considered. This choice simplifies the formulae. The optimal angle for information gain was given in Section III and differs in this special case from the one-round maximal growth angle given in Section IV by π/4. The maximal information gain direction entails a betting fraction of zero, whereas the maximal growth measurement direction entails zero information gain.
VII. CONCLUSION
The paper investigated a quantum betting game and showed how an optimal strategy can be established. Some differences between the classical and quantum case were discussed, and some strategies were compared. Simple extremal strategies were the easiest to compare: one could, for example, maximise 'information gain' or 'portfolio growth'. As we have shown, the optimal strategy lies somewhere in between and depends on δ, ξ and the number of steps N. If ξ is either zero or one, then information gain is impossible and maximising short- and long-term growth becomes synonymous. A special case is ξ equal to one-half, where an optimal 'information gain' measurement entails zero 'portfolio growth', and an optimal 'portfolio growth' measurement entails zero 'information gain'. Applications of 'quantum gambling' are currently perhaps few and far between in finance or the wider world outside laboratories. However, besides the ability to probe and test quantum mechanics, one could envisage scenarios where the outcomes of quantum-scale events play a role in decision-making. In this case, quantum game theory and applications like the quantum gambling problem discussed in the paper are of interest.
One can further find applications in quantum computing and information theory. Imagine a quantum communication channel where a misinterpretation of the message leads to a financial loss. The evaluation of, and insurance against, such faults can be formulated in certain cases as a quantum gambling problem of the type discussed in the paper. One of the authors (B.K.M.) thanks D.C. Brody and L.P. Hughston for stimulating discussions.
|
2023-08-03T06:42:42.627Z
|
2023-08-02T00:00:00.000
|
{
"year": 2023,
"sha1": "7b6dd8a1ce0b9150e979f9786d169db5005868c1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7b6dd8a1ce0b9150e979f9786d169db5005868c1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Economics"
]
}
|
238752871
|
pes2o/s2orc
|
v3-fos-license
|
OSVidCap: A Framework for the Simultaneous Recognition and Description of Concurrent Actions in Videos in an Open-Set Scenario
Automatically understanding and describing the visual content of videos in natural language is a challenging task in computer vision. Existing approaches are often designed to describe single events in a closed-set setting. However, in real-world scenarios, concurrent activities and previously unseen actions may appear in a video. This work presents the OSVidCap, a novel open-set video captioning framework that recognizes and describes, in natural language, concurrent known actions and deal with unknown ones. The OSVidCap is based on the encoder-decoder framework and uses a detection-and-tracking-object-based mechanism followed by a background blurring method to focus on specific targets in a video. Additionally, we employ the TI3D Network with the Extreme Value Machine (EVM), which learns representations and recognizes unknown actions. We evaluate the proposed approach on the benchmark ActivityNet Captions dataset. Also, an enhanced version of the LIRIS human activity dataset was proposed by providing descriptions for each action. We also provide spatial, temporal, and caption annotations for existing unlabeled actions in the dataset - considered unknown actions in our experiments. Experimental results showed our method’s effectiveness in recognizing and describing concurrent actions in natural language and the strong ability to deal with detected unknown activities. Based on these results, we believe that the proposed approach can be potentially helpful for many real-world applications, including human behavior analysis, safety monitoring, and surveillance.
I. INTRODUCTION
Video understanding is a challenging issue in computer vision. It requires sophisticated techniques to process the diversity of humans and objects appearances in different environments and their relationships over time.
The ability to detect and identify specific events is also a critical step towards video understanding. Video events are high-level semantic concepts perceived by humans in a video sequence [1]. Each event is composed of one or more meaningful objective actions, such as walking or jumping, and interactions with objects, such as typing on a computer or
handshaking [2]. Each perceived concept consists of an entity (human, object, action, or scene attributes) that occupies a specific position in a frame and may vary in size, color, shape or other specific attributes.
Video description (also called video captioning) is one of the many problems under video understanding. It has become a hot topic in computer vision and deep learning [3] and requires solving many different tasks simultaneously, including object detection and classification, action detection and recognition, and visual relationships among humans and objects. A video description approach may be employed in various applications such as human-robot interaction, video indexing, assistance to the visually impaired, understanding sign language, and video surveillance.
Current deep learning techniques are effective at learning discriminative spatio-temporal features from raw data. They are used to solve several complex tasks, such as object detection and classification [4], human action recognition [5], [6], video summarization [7], semantic image segmentation [8], and video understanding [9]. However, a step beyond the simple categorical classification of actions in scenes is to describe events in a human-comprehensible language. To accomplish this, it is crucial to understand the semantics of a given video scene.
Despite the efforts and progress that have been made in the video description task, it is still an open problem and has attracted much attention [3]. Existing approaches are limited to the fixed list of activities in the training corpus and have focused on generating a holistic description of short-length videos with only one main action happening in the video. However, in practical applications, such as safety monitoring and surveillance, videos may have concurrent activities, and humans can perform many different actions and even create new movements and hand gestures at will.
A more realistic approach is to assume an open-set scenario for describing actions. Open-set classifiers allow performing classification by enclosing each class in the feature space and reserving space for new classes to emerge, unlike closed-set classifiers, which assign infinite spaces to training classes. This strategy allows rejecting data from previously unknown classes instead of wrongly assigning the class label with the highest probability value [10].
Following this idea, a video captioning approach in an open-set scenario can adequately describe known actions and deal with unknown ones. Thus, it is essential to detect if the performed action was seen during the training step to correctly describe known actions or activities and avoid generating wrong descriptions of new detected actions.
Based on that, this work presents a novel open-set video captioning framework that aims to describe, in natural language, not only single but also concurrent events occurring in a video. The proposed approach uses an open-set action recognition model to detect unknown actions, thus avoiding incorrect descriptions and hallucinations. Some recent works have successfully performed video action recognition in an open-set scenario [10], [11]. However, to the best of our knowledge, this is the first time such properties are explored in the video captioning task.
The proposed representation learning approach is based on the encoder-decoder framework and uses a detection-and-tracking-object-based mechanism followed by a background blurring method to define the targets and recognize the concurrent actions to be described. Additionally, we employ the Triplet Inflated 3D Neural Network recently proposed by [11], which uses Deep Metric Learning and the Extreme Value Machine (EVM) [12] as the open-set classifier. The main contributions of this paper can be summarized as follows:
• We propose a novel video captioning framework to recognize and describe concurrent actions/activities performed by humans in an open-set scenario;
• We present a novel open-set mechanism to detect out-of-domain videos of unseen activities;
• We present extensive experiments and analysis, using 2D and 3D feature representations, demonstrating the effectiveness of our approach.
The remainder of this paper is organized as follows. Section II presents a brief description of related works. In Section III, we present the theoretical aspects related to the proposed method for open-set action recognition. In Section IV, we describe in detail the proposed framework. Section V describes the datasets used in the evaluation. Next, in Section VI, we present the experimental settings, their results, and a discussion. Finally, in Section VII, we present the conclusions and suggestions for future research directions.
II. RELATED WORKS
Early proposed methods for the video description task started with template-based methods in which the Subject (S), Verb (V), and Object (O) were detected and then, used in a sentence template [3]. Although these methods could generate descriptions based on grammar, they did not take into account the spatial and temporal associations between entities and suffered from the lack of diversity of generated sentences. Inspired by the rapid development of deep learning techniques in the Computer Vision and Natural Language Processing area, video description research has recently become a hot topic.
The video description approaches based on deep learning methods are mainly designed in the encoder-decoder architecture [3], [13]. The encoder is usually a combination of 2D and/or 3D CNN and LSTM that converts the input into a feature vector representation of fixed length. The decoder is usually an LSTM or GRU that generates a sequence of words.
Pre-trained deep learning models, such as VGGNet [14] or ResNet [15], are commonly used to extract spatial features from frames. These features are usually combined across the frames by an average pooling or max-pooling operation, resulting in a single fixed-length feature vector representation for a short video clip. Besides, the C3D [16] or I3D [17] models, pre-trained in a large dataset such as the Sports-1M dataset [18] or Kinetics dataset [19], are used to extract temporal features. The use of pre-trained models on large datasets provides a strong visual representation of objects, actions, and scenes depicted in the video [20].
Reference [20] proposed the first end-to-end learning approach based on deep neural networks for the video captioning task. A variant of AlexNet pre-trained on a subset of the ImageNet [21] dataset was used to extract visual features from frames. Then, the mean pooling method was employed, resulting in a single vector representing the entire video. Finally, two stacked LSTMs were used to generate the sentence.
Since then, many approaches have been proposed to use attention mechanisms to dynamically select spatial and temporal features, focusing on important frames and regions inside them and providing meaningful visual evidence for caption generation [22]-[26]. The use of attention mechanisms has improved the video captioning task, suggesting that this method can efficiently improve descriptions, especially in discontinuous videos, by focusing on specific parts of the visual input.
Considering that open-domain videos cover a broad range of topics, such as sports, music, food, and so on, some approaches have been proposed to generate sentences guided by latent topics [27] and semantic attributes [28]. The use of multimodal data, such as visual, audio, motion, and textual information, was also explored in some works [29], [30]. The combination of audio, movement, and visual information has been shown to play an important role in the description generation process.
The dense-captioning events task was proposed by [31] and consists of detecting and identifying all events in a given video and describing them in natural language. Their proposed approach uses DAPs [32] to localize temporal event proposals and a caption module based on LSTM to generate a sentence for each event proposal. Reference [33] also proposes a unified end-to-end approach for video dense captioning. However, instead of using RNNs for description generation, the authors used Transformers [34]. Their proposed approach is composed of three components: a video encoder, a proposal decoder, and a captioning decoder. The video encoder is composed of multiple self-attention layers. The Temporal Action Proposal (TAP) is based on ProcNets [35], which was designed to detect actions in long videos. Moreover, the captioning decoder module uses Transformers to generate the sentence for each event proposal.
Despite achieving promising results, these approaches often fail to describe concurrent activities happening in a video. Also, the datasets used to evaluate these approaches are created with videos extracted from movies or YouTube videos. Such videos cover a broad range of topics, such as sports, music, food, and so on, and a wide variety of different individual and collective actions performed by humans, animals, and even moving cartoon objects. These videos also present specific challenges, including the presence of discontinuity points between frames, as reported by [36], which may result in inadequate temporal representation features.
Besides the limitations presented above, the lack of well-labeled data is a crucial problem in the deep learning area. The zero-shot learning task has been studied to classify actions with no or few examples during the training step [28], [37]. Some approaches have been proposed for visual description tasks [38], [39] to describe novel objects not present in a paired image-sentence dataset. The zero-shot video captioning task [40] focuses on describing out-of-domain videos of a novel activity without paired captions, but with knowledge of the activity.
The approaches presented so far assume that all possible classes are already known during the training or test phase. However, new classes emerge as time passes in real-world dynamic environments. An open-set Human Action Recognition approach requires the classifier to accurately classify known classes seen during the training stage and deal with unknown classes, which are unseen and have no semantic information provided during the training stage [10]. In this work, we also exploit the nature of the open-set recognition problem to propose a framework to describe videos in an open-set scenario. As previously stated, to the best of our knowledge, there is a lack of related works on this approach in the literature, which is the main original contribution of the present work.
III. THEORETICAL ASPECTS
This Section presents the fundamentals of the methods used in our open-set recognition module: the Extreme Value Machine and the Triplet Inflated 3D Neural Network.
A. THE EXTREME VALUE MACHINE
The Extreme Value Machine (EVM) was initially proposed by [12] to perform open-set classification. In the EVM, the modeling of each class in the training set is based on a set of extreme vectors, each of which is associated with a Probability of Sample Inclusion Ψ.
The key concept of EVMs is the use of margin distributions, i.e., the distribution of the half-margin distances of the training data. In the original formulation, one considers x_i as a training sample and y_i its corresponding label. Taking x_j as the nearest point to x_i among the points of other classes (∀j, y_j ≠ y_i), the margin estimate for the pair is m_ij = ||x_i − x_j||/2. The m_ij value can be computed for the τ nearest points, and the distribution of the margins is estimated with those points using the Extreme Value Theorem (EVT). The EVT states that the distribution of these minimum values associated with x_i is given by a Weibull distribution [12]. The probability of inclusion for a point x is given by
Ψ(x_i, x; κ_i, λ_i) = exp(−(||x_i − x|| / λ_i)^κ_i),   (1)
in which ||x_i − x|| is the distance between x and x_i, and λ_i and κ_i are the Weibull scale and shape parameters. Each Ψ is considered an EVT rejection model, and Ψ(x_i, x; κ_i, λ_i) corresponds to the probability that a sample is not beyond the negative margin. Even though a sample has zero probability around the margin, the model can also be extended to support soft margins. The probability that a point x belongs to class C_l, where l is the class index, is given by Equation 2:
P̂(C_l | x) = max_{i : y_i = C_l} Ψ(x_i, x; κ_i, λ_i).   (2)
Finally, the classification function is
y* = argmax_l P̂(C_l | x) if max_l P̂(C_l | x) ≥ δ, and "unknown" otherwise,   (3)
in which δ is a threshold responsible for defining the boundary between known classes and the open space. In order to reduce the size of the model, many redundant pairs can be discarded with minimal impact on performance. Details of this procedure can be found in [12].
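A compact sketch of this decision rule is given below (our own illustration; the Weibull fitting of the margin distribution that produces κ_i and λ_i is omitted, and all names are assumptions):

    import numpy as np

    # Hedged sketch of the EVM open-set decision rule described above.
    def psi(x_i, x, kappa_i, lambda_i):
        """Probability of Sample Inclusion for extreme vector x_i evaluated at x."""
        return np.exp(-(np.linalg.norm(x_i - x) / lambda_i) ** kappa_i)

    def evm_classify(x, extreme_vectors, labels, kappas, lambdas, delta=0.5):
        per_class = {}
        for ev, y, k, lam in zip(extreme_vectors, labels, kappas, lambdas):
            p = psi(ev, x, k, lam)
            per_class[y] = max(p, per_class.get(y, 0.0))   # max over a class's extreme vectors
        best = max(per_class, key=per_class.get)
        return best if per_class[best] >= delta else "unknown"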
B. TRIPLET INFLATED 3D NEURAL NETWORK (TI3D)
The TI3D is a Deep Metric Learning Neural Network introduced in [11]. It uses the I3D as the base model to build a cosine triplet loss network. The TI3D learns a feature mapping such that intra-class distances are small and inter-class distances are large.
The TI3D takes three inputs: Anchor, Positive, and Negative. For the human action recognition task, the Anchor (a) represents a video of any given action, the Positive (p) represents a video of the same action, and the Negative (n) represents a video of a different action, both w.r.t. the anchor. Given N (a, p, n) triplets, the triplet loss function L is defined by
L = ∑_{i=1}^{N} [ 𝒟(f(x_a), f(x_p)) − 𝒟(f(x_a), f(x_n)) + α ]_+ ,   (4)
where f(x_a), f(x_p), and f(x_n) are the Anchor, Positive, and Negative embeddings, respectively, α is the margin parameter, and 𝒟 denotes the cosine distance between two vectors x_i and x_j:
𝒟(x_i, x_j) = 1 − (x_i · x_j)/(||x_i|| ||x_j||).   (5)
Additionally, the symbol [·]_+ indicates the operator max(·, 0). This loss function attempts to make the cosine distance between Anchor and Positive samples smaller than the distance between the Anchor and Negative instances by, at least, a margin of α. Alternatively, it will force examples of the same class to be mapped closer than examples of different classes (or even previously unknown examples).
We employ the TI3D with its default parameters and use hard and semi-hard triplet mining, as shown by [11]. Semi-hard triplets are defined as triplets in which the distance between the Anchor and Positive is smaller than the distance between the Anchor and Negative videos, but the latter still lies within the margin, i.e., 𝒟(f(x_a), f(x_p)) < 𝒟(f(x_a), f(x_n)) < 𝒟(f(x_a), f(x_p)) + α. Hard triplets are defined as triplets in which the distance between the Anchor and Positive is larger than the distance between the Anchor and Negative, i.e., 𝒟(f(x_a), f(x_p)) > 𝒟(f(x_a), f(x_n)). This triplet mining strategy ensures that only triplets with a positive loss w.r.t. Eq. 4 are used during training.
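The loss and mining conditions above can be written in a few lines; the following PyTorch-style sketch is ours and assumes batched embedding tensors:

    import torch
    import torch.nn.functional as F

    # Hedged sketch of the cosine triplet loss and the semi-hard condition.
    def cosine_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return 1.0 - F.cosine_similarity(a, b, dim=-1)

    def triplet_loss(anchor, positive, negative, margin: float = 0.2) -> torch.Tensor:
        d_ap = cosine_distance(anchor, positive)
        d_an = cosine_distance(anchor, negative)
        return torch.clamp(d_ap - d_an + margin, min=0.0).mean()

    def is_semi_hard(d_ap: torch.Tensor, d_an: torch.Tensor, margin: float = 0.2) -> torch.Tensor:
        # positive closer than negative, but the negative still inside the margin
        return (d_ap < d_an) & (d_an < d_ap + margin)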
IV. METHODS
In this section, we present the OSVidCap framework for video captioning. It consists of five main modules: Target Detection and Localization (TDL), Features extraction, Open set module, Encoder, and Caption Generation. The overall architecture of OSVidCap is presented in Figure 1 and detailed as follows.
A. TDL MODULE
Detecting multiple concurrent events in a given video is essential to describe them in natural language adequately. The Target Detection and Localization (TDL) module consists of a mechanism designed to detect and track significant moving objects in a given video, which are considered the main concepts of the event. The output of this module consists of video segments for each moving object detected with a blurred background.
More specifically, the TDL module detects and tracks humans but is easily adaptable for other moving objects (such as animals and vehicles). We employ the Yolo-v4 [4] to detect humans and track them using the Deep SORT method [41]. The human-human or human-objects interaction is captured when they overlap in consecutive frames. In such cases, the entities are considered a single region of interest in the final video segment.
Finally, inspired by [42], we use a background blur method to guide the sentence generator module to focus on each region of interest in each video segment during the generation of the sentences.
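A minimal sketch of this blurring step is shown below; it is our own illustration, assuming axis-aligned pixel bounding boxes from the tracker and an arbitrary blur kernel size:

    import cv2
    import numpy as np

    # Hedged sketch: blur everything outside the tracked target's bounding box.
    def blur_background(frame: np.ndarray, box: tuple) -> np.ndarray:
        x1, y1, x2, y2 = box                               # (left, top, right, bottom) in pixels
        blurred = cv2.GaussianBlur(frame, (51, 51), 0)     # heavily blurred copy of the frame
        out = blurred.copy()
        out[y1:y2, x1:x2] = frame[y1:y2, x1:x2]            # keep the region of interest sharp
        return out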
B. FEATURES EXTRACTION
When human actions are described, it is important to consider details of the person, place, and action [43]. Thus, the Encoder module comprises four main classes of features extracted from a given input video as shown in Figure 1. All these features were extracted using off-the-shelf models, pre-trained on large datasets, which proved to be beneficial for video captioning tasks [20], detailed as follows: • Scene type features: A sample of 16 evenly-spaced frames per video was used to extract the max-pooling features from the last convolutional layer using the VGG model pre-trained on the Places365 dataset. 1 The final representation is a 512-dimensional feature vector.
• Spatial Features: For extracting spatial features, we used the ResNet-101 model [15], pre-trained on the Imagenet dataset. From a sample of 16 equally spaced frames, we extracted a 2048-dimensional semantic feature vector of each frame from the last pooling layer. Then, an average pooling operation was performed, resulting in the final feature vector of dimension 2048.
• Temporal Features: The ResNeXt-101 with 3D convolutions [44], pre-trained on the Kinetics dataset [19], was used to extract a 2048-dimensional semantic feature vector for every 16 frames (with 50% overlap). An average pooling operation then yielded a final vector with 2048 features.
• Human body skeleton features: We used the ST-GCN model [45], pre-trained on the Kinetics dataset, to extract significant complementary information for the spatial and temporal features. This is a graph-based model for modeling dynamic skeletons extracted with the Openpose toolbox [46]. It is aimed at capturing motion information in dynamic skeleton sequences. We performed a global max-pooling operation over all skeleton sequences to obtain a single 256-dimensional feature vector for a given video. The combination of skeleton features with spatial and temporal features was intended to improve the performance in action recognition and, consequently, the descriptions of the videos [47].
Except for the scene type features, which were extracted from the original video frames, all other features were computed from the video segments processed by the TDL module. All these features are used in the encoder model to compute the final feature vector representation; a minimal sketch of the per-frame extraction and pooling used for the spatial features is given below.
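The sketch assumes frames have already been decoded, resized to 224×224, and normalised with the usual ImageNet statistics; the torchvision weight-loading syntax varies across versions and is an assumption here.

    import torch
    import torchvision

    # Hedged sketch: 2048-d ResNet-101 descriptors for 16 evenly spaced frames,
    # averaged into a single spatial feature vector for the whole clip.
    backbone = torchvision.models.resnet101(weights="IMAGENET1K_V1")
    feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop the final fc layer
    feature_extractor.eval()

    @torch.no_grad()
    def spatial_features(frames: torch.Tensor) -> torch.Tensor:
        """frames: (16, 3, 224, 224) normalised tensor -> (2048,) clip descriptor."""
        per_frame = feature_extractor(frames).flatten(1)   # (16, 2048)
        return per_frame.mean(dim=0)                       # average pooling over frames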
C. OPEN SET MODULE
The TI3D was initialized using the weights of the I3D and trained according to Section III-B. Then, it was used to extract features from both training and test videos. The features are used to train the EVM classifier, which predicts each action in the test set as known or unknown. The output of the module supports the caption generation by signalling whether the action belongs to a known or unknown class.
The TI3D was trained for 20 epochs, updating the triplets every epoch using the hard and semi-hard triplet mining strategy proposed by [11]. The learning rate was set to 0.02, the margin parameter to 0.2, and the batch size to 256. For the EVM, we set the tail size τ to 10% of the number of samples in the training set, the cover threshold for model reduction to 0.5, and the probability-of-inclusion threshold (δ) to 0.5. These parameters were empirically set, based on previous experiments on the LIRIS dataset [48] used in this work.
D. ENCODER
This block aims to derive a feature vector representing the essential concepts to predict the next word for describing the ongoing action in the video. All the previous features extracted from the video were mapped into a common high-level abstract space by a feedforward network (FCN) with ReLU activations, as depicted in Figure 1.
Before the Features Fusion (FF) step, we fuse the output processed by the Open Action Recognition Module with the processed Temporal Features (F_tp) to take the unknown-action information into account. Notice that the processed place-type features (F_p), spatial features (F_sp), and human body skeleton features (F_sk) were retained to preserve essential information for caption generation, such as information about the place type and the number of people detected in the scene.
The output of the encoder module provided by the FF is computed from the following quantities: W_1, W_2, W_3, and W_4 are weight matrices; U_p, U_sp, U_sk, and U_tp are the features from the input modules (scene type, spatial, human body skeleton, and temporal, respectively); b_1, b_2, b_3, and b_4 are the bias vectors; the remaining operators denote the ReLU activation function, element-wise multiplication (⊗), convolution (*), and concatenation; and O_uk denotes the feature vector provided by the TDL module.
E. CAPTION GENERATION
This module performs the sentence generation and uses two Long Short-Term Memory (LSTM) networks, a variant of the Recurrent Neural Network (RNN) that works better with long-term dependencies. The first LSTM encodes the preceding sequence of words S = s_0, s_1, ..., s_{t−1}. The second LSTM predicts the next word based on the output of the first LSTM combined with the visual features computed by the Encoder module. The LSTM formulation used in this work is
g_t = tanh(W_g x_t + U_g h_{t−1}),
c_t = f_t ⊗ c_{t−1} + i_t ⊗ g_t,
h_t = o_t ⊗ tanh(c_t),
in which U_g and W_g are weight matrices; x_t is the input at time t; h_{t−1} is the previous state; and f_t, i_t, and o_t are the forget, input, and output gates, respectively. The calculations of the unit gates are
f_t = σ(W_f x_t + U_f h_{t−1} + b_f),
i_t = σ(W_i x_t + U_i h_{t−1} + b_i),
o_t = σ(W_o x_t + U_o h_{t−1} + b_o),
in which U_f, U_i, U_o, W_f, W_i, and W_o are weight matrices, b_f, b_i, and b_o are bias vectors, and σ denotes the sigmoid activation function.
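For concreteness, a single step of the standard LSTM cell reconstructed above can be written as follows (a minimal NumPy sketch of the textbook equations, not the authors' implementation):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Hedged sketch: one step of a standard LSTM cell following the equations above.
    def lstm_step(x_t, h_prev, c_prev, W, U, b):
        """W and U are dicts with keys 'f', 'i', 'o', 'g'; b has keys 'f', 'i', 'o'."""
        f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate
        i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate
        o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate
        g_t = np.tanh(W["g"] @ x_t + U["g"] @ h_prev)            # candidate cell state
        c_t = f_t * c_prev + i_t * g_t                           # new cell state
        h_t = o_t * np.tanh(c_t)                                 # new hidden state
        return h_t, c_t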
V. DATASET
There are a few datasets publicly available for the video captioning task [3]. The most used datasets in the literature are MSVD [49] and MSR-VTT [50], containing a wide variety of open-domain short videos. Each video has only a single main activity and multiple sentences with different details describing the video. Despite the availability of annotated datasets for the video captioning task, none of them contain specific information about the action performed in each video, such as an action categorization. This information is essential for detecting and recognizing known and unknown events in an open-set scenario. Also, they do not contain concurrent events happening in the same video.
To overcome the above-mentioned limitations, we improved the LIRIS human activities dataset with captions and temporal annotations of new actions. Furthermore, we evaluate the generalization of our method on the large-scale ActivityNet Captions dataset. Both datasets are detailed as follows and are made available for further studies at http://labic.utfpr.edu.br/datasets/UTFPR-OSVidCAP.html.
A. LIRIS CAPTIONS DATASET
The LIRIS dataset was designed for recognizing complex and realistic actions in videos and was made available for the ICPR-HARL'2012 competition. The full dataset contains 828 actions (including discussing, telephone calls, giving an item, etc.) performed by 21 different people in 10 different classes. Each action performed in a video contains spatial annotations in a bounding box and temporal information (the beginning and end of the action). It was organized into two independent subsets: the D1 subset, with depth and grayscale images, and the D2 subset, with color images. The dataset also has unannotated actions, such as walking, running, whiteboard writing, book leafing, etc.
In this work, we used the D2 subset, which contains 367 annotated actions from 167 videos. Each action consists of one or more people performing one or more different activities. Besides, we extracted 116 video segments covering 15 different unannotated actions from the original videos to be used as unknown classes. Each new video segment was also annotated with spatial, temporal, and description information.
Reference [51] suggested that the number of reference sentences directly affects the accuracy of automated metrics. Those authors also affirm that models obtain a substantial boost in performance when five reference sentences are used instead of only one. Following this work, we improved the LIRIS human activity dataset with five different descriptions for each action, as shown in Figure 2.
B. ACTIVITYNET CAPTIONS DATASET
The ActivityNet Captions dataset [31] is a large dataset proposed for dense-captioning events, which involves both detecting and describing events in a video.
It contains 20,000 videos split into around 50%, 25%, and 25% for the training, validation, and testing sets, respectively. All videos were taken from the ActivityNet dataset [52], a benchmark for video classification and detection, which covers 200 classes of activities. The dataset also has an overlap of 10% in the temporal descriptions, thus indicating the presence of concurrent events. Each video is annotated with a series of temporally localized descriptions.
Although the ActivityNet Captions dataset is available for download as a collection of YouTube video links, many of these videos are no longer available, as reported in previous works [53], and the pre-computed C3D features provided by the authors alone are not helpful in our experiments. Thus, we used the 12,714 videos that were still available for download. Videos shorter than 3 seconds were disregarded due to the small number of extracted frames. As our approach is focused on describing entire videos rather than detecting a series of events, we used the ground-truth event proposals to extract 34,934 video clips, one for each temporally localized description provided in the annotations.
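For readers who wish to reproduce this preparation step, the sketch below cuts clips from the ground-truth (start, end) proposals. The paper does not state which tool was used; ffmpeg is assumed here, the annotation layout (video_id mapped to a list of [start, end] timestamps in seconds) mirrors the publicly documented ActivityNet Captions format, and the file paths are hypothetical.

```python
# Illustrative sketch of cutting video clips from ground-truth event proposals.
# ffmpeg, the annotation layout, and the paths are assumptions for illustration.
import json
import subprocess
from pathlib import Path

def cut_clips(annotation_file, video_dir, out_dir, min_len=3.0):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    anns = json.load(open(annotation_file))           # {video_id: {"timestamps": [[s, e], ...]}}
    for vid, info in anns.items():
        src = Path(video_dir) / f"{vid}.mp4"
        if not src.exists():                           # many videos are no longer downloadable
            continue
        for i, (start, end) in enumerate(info["timestamps"]):
            if end - start < min_len:                  # skip very short segments
                continue
            dst = out / f"{vid}_{i:03d}.mp4"
            subprocess.run(
                ["ffmpeg", "-y", "-i", str(src), "-ss", str(start), "-to", str(end),
                 "-c", "copy", str(dst)],
                check=True,
            )
```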
While ActivityNet Captions was originally designed for dense video captioning, we adapt it to our task by including action annotations to evaluate the generality of the proposed method on a large-scale dataset. Due to the considerable effort required to annotate each video clip manually, these annotations were collected from the ActivityNet dataset based on the video name, which is the same in both datasets. Each resulting action class contains, on average, 114 videos for training and 55 videos for testing. The action annotations were used to split videos into known and unknown classes for the detection of known and unknown actions.
VI. EXPERIMENTS
A. IMPLEMENTATION DETAILS
The proposed OSVidCap framework uses an encoder-decoder architecture. Therefore, both the encoder and caption generation modules (decoder) were trained in an end-to-end manner. Before training, all captions were tokenized and converted to lowercase. Sparse words occurring fewer than three times in the training set were replaced with the unknown token. The fastText [54] word embedding pre-trained on the Common Crawl corpus was used to embed words into a 300-dimensional feature vector. It provides much more powerful and effective low-dimensional word representations for video captioning than other techniques such as sparse one-hot encoding vectors [55].
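The preprocessing just described can be sketched as follows: lowercasing and tokenization, replacing words seen fewer than three times with an unknown token, and building a 300-dimensional fastText embedding matrix. The model file name and the fasttext loading calls are assumptions; only the count threshold and embedding size come from the text.

```python
# Sketch of caption preprocessing: vocabulary with <unk> replacement for rare words
# and a fastText embedding matrix. The model file name and API usage are assumptions.
from collections import Counter
import numpy as np
import fasttext  # assumed: official fasttext Python bindings

SPECIALS = ["<pad>", "<bos>", "<eos>", "<unk>"]

def build_vocab(captions, min_count=3):
    counts = Counter(w for c in captions for w in c.lower().split())
    words = [w for w, n in counts.items() if n >= min_count]
    return {w: i for i, w in enumerate(SPECIALS + sorted(words))}

def build_embedding_matrix(vocab, model_path="crawl-300d-2M-subword.bin", dim=300):
    ft = fasttext.load_model(model_path)
    matrix = np.zeros((len(vocab), dim), dtype="float32")
    for word, idx in vocab.items():
        if word not in SPECIALS:
            matrix[idx] = ft.get_word_vector(word)   # subword model also covers rare words
    return matrix
```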
During the training step, begin-of-sentence and end-of-sentence tokens were added to each sentence to deal with varying lengths. Also, an unknown tag was used to replace sparse words. During the test step, we input the begin-of-sentence token into our Caption Generation Module to start the description generation process. Then, previously generated words are used as input to produce the following words until the maximum sentence length or the end-of-sentence token is reached. In our experiments, the maximum sentence length was set to 19 and 25 for the LIRIS dataset and the ActivityNet Captions dataset, respectively. Zero padding is applied if the sentence is shorter than the maximum number of words. The Beam Search method was employed to select the best sentence and avoid local optima. In our experiments, the beam size k was set to 3.
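The generation loop described above can be illustrated with a minimal greedy decoder; the paper itself uses beam search with k = 3, so this is a simplification. The function `decoder_step(visual_feats, word_id, state)` is a hypothetical wrapper around the second LSTM that returns next-word logits and the updated LSTM state.

```python
# Minimal greedy decoding loop for the caption generator. The paper uses beam
# search (k = 3); greedy decoding is shown here only for brevity, and
# `decoder_step` is a hypothetical wrapper around the second LSTM.
import numpy as np

def generate_caption(decoder_step, visual_feats, vocab, max_len=19):
    inv_vocab = {i: w for w, i in vocab.items()}
    word_id, state = vocab["<bos>"], None
    caption = []
    for _ in range(max_len):
        logits, state = decoder_step(visual_feats, word_id, state)
        word_id = int(np.argmax(logits))             # greedy choice of the next word
        if word_id == vocab["<eos>"]:
            break
        caption.append(inv_vocab[word_id])
    return " ".join(caption)
```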
We empirically set the LSTM hidden state to 512 units and applied dropout with a rate of 0.5 on the input and output of the LSTM. The Adam algorithm, with a learning rate of 5 × 10^-5, was used for optimization. The cross-entropy loss was used to train our model. All experiments were implemented using the TensorFlow and Keras libraries.
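A minimal Keras decoder consistent with these hyperparameters is sketched below. Feeding the encoded video vector as the LSTM initial state, freezing the fastText embedding, and the vocabulary handling are assumptions made for illustration rather than the authors' exact architecture.

```python
# Minimal Keras sketch of a caption decoder matching the reported hyperparameters
# (512-unit LSTM, dropout 0.5, Adam 5e-5, cross-entropy). Injecting the encoded
# video vector as the initial state and freezing the embedding are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_decoder(vocab_size, embedding_matrix, max_len=19, units=512):
    words = layers.Input((max_len,), name="word_ids")
    video = layers.Input((units,), name="encoded_video")    # output of the Encoder module

    emb = layers.Embedding(vocab_size, 300,
                           weights=[embedding_matrix], trainable=False)(words)
    emb = layers.Dropout(0.5)(emb)                           # dropout on the LSTM input
    h = layers.LSTM(units, return_sequences=True)(emb, initial_state=[video, video])
    h = layers.Dropout(0.5)(h)                               # dropout on the LSTM output
    logits = layers.TimeDistributed(layers.Dense(vocab_size, activation="softmax"))(h)

    model = Model([words, video], logits, name="caption_decoder")
    model.compile(optimizer=tf.keras.optimizers.Adam(5e-5),
                  loss="sparse_categorical_crossentropy")
    return model
```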
To demonstrate the effectiveness of the proposed method, we have conducted two experiments to analyze the influence of the open set module and compare the video caption performance with related works.
1) EXPERIMENTS ON THE LIRIS DATASET
Due to the small number of videos and known actions in the LIRIS dataset, we performed a 5-fold cross-validation procedure to assess the OSVidCap performance. The same training and testing set of each cross-validation fold was used to train the open-set module. In addition, to evaluate the effectiveness of the proposed approach in detecting unknown events, we included in the testing set 116 videos with unknown actions, as described in Section V-A.
2) EXPERIMENTS ON THE ActivityNet CAPTIONS DATASET
The OSVidCap performance in generating captions of known events was evaluated using the standard data split. Since this dataset was made available as a challenge, the test set was not provided with ground truth. Thus, we follow previous works [33], [53] and report the results on the validation set. The effectiveness of the proposed approach in detecting unknown events was evaluated using a 5-fold cross-validation procedure. Each fold contains known videos of 40 actions for the training and testing sets, as explained in Section V-B. We also included in the testing set v_r random videos from other classes as unknown actions. The value of v_r was defined as the number of videos present in the training set to avoid imbalanced data.
B. EVALUATION METRICS
The captions generated by the proposed framework were evaluated according to metrics frequently used in the area: BLEU [56], METEOR [57], ROUGE-L [58], and CIDEr [51]. All metrics were computed using the COCO caption API [59].
BLEU is a metric based on modified n-gram precision and measures the proximity of the predicted sentence to one or more reference descriptions. Following most previous works on video captioning [3], we used four-grams with the BLEU metric, which is referred to as BLEU-4. METEOR is based on precision, recall, and their harmonic mean and consists of creating an alignment between unigrams from the candidate and reference sentences. The word matching supports morphological variants, including stemming and synonyms. CIDEr is a consensus-based metric and measures the similarity of a generated sentence against the majority of a set of ground-truth sentences. It handles morphological variations by reducing each word to its stem (or root form) to resolve word-level correspondences. ROUGE-L computes recall and precision scores using the longest common subsequence (LCS) technique and tends to reward long sentences with high recall. In our experiments, BLEU, METEOR, and ROUGE were normalized to range from 0 to 100, with 100 being identical to the reference sentence. CIDEr ranges from 0 to 1000, with 1000 being identical to the reference.
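As an illustration of how these scores are typically obtained with the COCO caption toolkit cited above [59], the sketch below runs the four scorers over pre-tokenized references and hypotheses. The module layout assumed here is the commonly distributed pycocoevalcap package; the input format maps each video id to its reference sentences and to the single generated sentence.

```python
# Sketch of computing BLEU, METEOR, ROUGE-L and CIDEr with the COCO caption
# evaluation toolkit [59]. The pycocoevalcap module layout is assumed, and the
# inputs are assumed to be already tokenized and lowercased.
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

def evaluate(gts, res):
    # gts = {"vid1": ["a man shakes hands", ...]}, res = {"vid1": ["two men shake hands"]}
    scorers = [(Bleu(4), ["Bleu_1", "Bleu_2", "Bleu_3", "Bleu_4"]),
               (Meteor(), "METEOR"), (Rouge(), "ROUGE_L"), (Cider(), "CIDEr")]
    results = {}
    for scorer, names in scorers:
        score, _ = scorer.compute_score(gts, res)
        if isinstance(names, list):
            results.update(dict(zip(names, score)))
        else:
            results[names] = score
    return results
```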
C. QUANTITATIVE RESULTS
In this section, the performance evaluation of the proposed method is presented and compared with two recent existing approaches.
SGN [60] exploits the use of semantic groups based on meanings such as people, objects, or actions, rather than frame by frame for understanding a video. It is comprised of four main components: (i) a Visual Encoder component that aims to extract visual features from video frames; (ii) a Phrase Encoder which produces phrase representations from words by using the self-attention mechanism; (iii) a Semantic Grouping which employs a semantic aligner to align the video frames with phrases; and (iv) a Decoder based on LSTM with temporal attention.
The Non-Autoregressive Coarse-to-Fine (NACF) model [61] proposes a coarse-to-fine captioning procedure using a bidirectional self-attention-based network as the caption generator. To improve caption quality, the decoding process is decomposed into two stages. First, a coarse-grained ''template'' is generated. Then, dedicated decoding algorithms generate fine-grained descriptions by filling in the generated ''template'' with suitable words and modifying inappropriate phrasing via iterative refinement.
For a fair comparison, all the methods utilize the ResNet-101 and ResNeXt-101 features as input, and the reported results were obtained using the Microsoft COCO caption evaluation tool [59]. Furthermore, all approaches were set with the same maximum sentence length and minimum word frequency during training. Table 1 presents a performance comparison of OSVidCap with existing approaches on the LIRIS dataset. It can be noticed that our model OSVidCap (S+T) achieved better performance in terms of ROUGE-L and CIDEr and competitive performance in terms of BLEU and METEOR. Also, our model OSVidCap (S+T+SK+P) surpasses the compared approaches by 4.9% in BLEU-4, 5.1% in METEOR, 4.3% in ROUGE-L, and 9.3% in CIDEr. This suggests that our approach can better describe concurrent events in videos. In addition to spatial (S) and temporal (T) features, the model considered Human body skeleton (SK) features extracted from human movements and Place-type (P) features extracted from places. This points out that specialized features can be essential to better describe similar actions or actions that depend on the context (place). Such feature enrichment provides essential information to distinguish some actions, such as shaking hands and giving a small item to a second person. Also, the place type gives meaningful semantic information, as some actions tend to happen in specific places. Table 2 presents the video captioning comparison on the ActivityNet Captions dataset. It can be noticed that the proposed approach also achieved better or competitive results across all metrics, showing robust generalization to other contexts and scenarios. It is also noteworthy that the values of the metrics presented in Table 2 are significantly lower than those presented in Table 1 due to the complexity of the datasets, as reported in Section V. The performance reported on this dataset is similar to that reported in recent literature [53], [62]. Note that, despite having used the same dataset to report the results, those works are not directly comparable with the presented approach, as the videos and features used for training, validation, and testing are different. In both datasets, the use of Place-type features did not show significant improvements. This may indicate that previously used features already describe this visual information or that it is irrelevant for the video description task.
In Table 3, one can observe the evaluation performance of the open-set module in detecting known and unknown actions on the LIRIS dataset. Results are presented for a 5-fold cross-validation procedure. The proposed method achieved satisfactory results in detecting known and unknown classes, with an average F1-Score of 86.2%. Table 4 shows the evaluation performance of the open-set module in detecting known and unknown actions on the ActivityNet Captions dataset. Five experiments with different numbers of known classes were performed in a cross-validation procedure. The proposed method achieved satisfactory results in detecting known and unknown classes, with an average F1-Score of 79.80% when ten classes were considered as known actions.
In Table 4, it can also be seen that the average precision of the unknown class is about 9% higher than that of the known class, and the average recall of the known class is 13% higher than that of the unknown class. This shows that the proposed approach achieves better results in detecting unknown classes than known classes. The automatic annotation process of video actions on the ActivityNet Captions dataset, as described in Section V-B, also produced some annotation noise during the training and testing process. This noise can be an action performed with a different label or even a video without human actions. Figure 3 depicts an example of a video present in the dataset. It can be observed that the video has different events with different start and end times. The automatic annotation process set the action class ''Removing ice from car'' for all video clips. However, in this example, only two video clips are related to the annotated action. Therefore, the degradation in the average precision of the known class may have been caused by the presence of this annotation noise. When new actions are considered as known classes, the average F1-Score decreases due to the cumulative annotation errors introduced by the automated annotation process, as reported below. Table 5 reports the impact of the open-set component on the video descriptions generated by the proposed approach. The results reported on the LIRIS dataset used the same data in a cross-validation procedure, as in Table 3. To report the results on the ActivityNet Captions dataset, we used the 5-fold cross-validation applied in Table 4.
These results are significantly higher when compared with those reported in Tables 1 and 2 because, in this experiment, we considered videos in the test set with unknown activities. For these videos, the model is supposed to generate descriptions such as ''a person is performing an unknown action''.
The experiments with unknown actions in the testing set suggested that Place-type features did not lead to a significant improvement. However, these features are important to understand scenes in which information about the place type is relevant, for example, to describe whether a person is entering or leaving an office or writing on a whiteboard in a classroom. In the testing set used to report the experiments in Table 5, several videos from unknown classes were included to evaluate the proposed open-set module. Therefore, the overall influence of the Place-type features has quantitatively decreased due to the small number of sentences that require such features. To the best of our knowledge, this is the first work to address the video captioning task in an open-set world by generating captions of known events present in the training set and dealing with unknown events not previously seen.
D. QUALITATIVE RESULTS
In Figure 4, we illustrate three examples of video descriptions generated by the baseline methods SGN and NACF and by the proposed OSVidCap. Figure 4a depicts a scene with two sequential actions. First, a man in a striped t-shirt talks to a woman in front of a whiteboard. Then, another man in a black t-shirt enters the room and gives an item to the man in the striped t-shirt. Figure 4b shows two concurrent events. While a man and a woman are shaking hands, another man is leaving baggage unattended. Finally, in Figure 4c, three events take place in the video: while a man is performing an unknown action, another man leaves an item in the letterbox cabinet and then enters the room.
For the examples in Figure 4, our approach described concurrent actions better than the baselines. In Figure 4a, OSVidCap correctly described the ongoing action but misrepresented the color of the t-shirt, suggesting that the model did not learn this information from the input features. More specific features could possibly fix this issue.
In Figure 4b, we can observe that the compared approaches could not detect the handshaking action, suggesting the importance of using human body features in describing human action videos. They also fail to detect and describe concurrent actions in videos.
The importance of the open-set module becomes clear in the situation considered in Figure 4c. While OSVidCap detected an unknown action performed by a man and correctly described it as such, the compared approaches generated a wrong description. It is worth highlighting that this action was previously labeled as unknown and did not appear in the training set.
VII. CONCLUSION AND FUTURE WORKS
The majority of artificial intelligence methods rely on the closed-set world assumption. The same holds for the specific case of automatic video captioning systems. Existing methods based on a closed-set world can adequately describe only the temporal events previously seen during the training step. Unless they are trained with all existing events and actions of interest, they will not be able to recognize unknown events found in videos in the wild. Furthermore, most current approaches for video description focus only on single actions occurring one at a time, while in the real world concurrent events may take place. To address the above-mentioned issues, in this paper we proposed the OSVidCap framework, which can detect and describe concurrent events in an open-set world scenario. From a given input video, the TDL module detects and tracks humans and outputs a set of video segments to be described. Then, spatial and temporal features are extracted from each video segment. Also, the open-set module, built upon the TI3D metric learning approach coupled with an extreme value machine (EVM), classifies each detected action as a known or unknown class. Then, the Encoder module processes the features and generates a fixed-length vector that represents the whole video content. Finally, the caption generation module, based on the LSTM, generates the descriptions in a human-comprehensible form.
Experimental results demonstrate the effectiveness of the framework in describing concurrent events in a given video.
Also, the open-set module allows the framework to describe unknown events. Our experiments also show that different features, such as the Human body skeleton and Place-type features, are quite relevant to understanding fine-grained actions, frequently performed in specific environments. Such feature enrichment provides a better video representation for generating a more detailed description. Furthermore, due to the lack of specific datasets for evaluating concurrent events in an open-set scenario, we have contributed new annotations of unknown actions to the LIRIS human activity dataset that can be used as a benchmark for the proposed task.
Despite the excellent results achieved by OSVidCap, we observed that it could provide a more detailed description of people, for instance, by including the type and color of their clothes. This enrichment of details can play an important role in surveillance applications. The TDL module is capable of capturing individual humans or objects of interest and simple interactions between them by capturing the overlapping region among objects. However, the proposed module may fail to capture more complex human interactions.
Therefore, in future work, the proposed framework will be extended by enriching the description of people in the scene, as well as by improving the detection of events involving persons that interact at a distance, such as watching TV or throwing an object to another person. Another direction for future work is to provide a human evaluation over a subset of the testing data, as the existing metrics used for the automatic evaluation of video captioning may not correlate properly with human judgment.
|
2021-10-14T13:31:45.308Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "cb9040d00d157f594685c196f78e3666d8aa6b8c",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09552885.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "bb74c6a0a3d398a08b934a5bad846e716e8014ba",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
67860993
|
pes2o/s2orc
|
v3-fos-license
|
Low-intensity focused ultrasound for the treatment of brain diseases: safety and feasibility
The use of focused ultrasound (FUS) as a tool for blood-brain barrier (BBB) permeabilization is opening new ways for the treatment of several pathologies, in particular brain tumors and neurodegenerative diseases. However, even if there are promising results in these fields, the efficacy and safety of this technique are unknown in long-term follow-up. The study of Blackmore et al. [Theranostics 2018; 8(22):6233-6247. doi:10.7150/thno.27941] evaluated the long-term effects of FUS on brain parenchyma in aged mice with Alzheimer's disease. This is the first study to apply a multimodal analysis to demonstrate the safety of FUS in the aged brain, in view of a potential introduction of this technique into common clinical practice in the future.
Focused ultrasound (FUS) represents a non-invasive therapeutic strategy that is currently used for the treatment of several disorders. In the field of neuroscience, magnetic resonance imaging (MRI)-guided FUS (MRIgFUS) is commonly used for treating essential tremor, chronic neuropathic pain, parkinsonism, and Parkinson's disease. Moreover, MRIgFUS has been successfully tested for the ablation of deep intracranial tumors, for acute ischemic stroke, and for specific psychiatric disorders. All of these applications utilize the thermoinducing and thermoablative properties of FUS, which is able to reach high temperatures (>55 °C) in non-surgical sites (deep or eloquent encephalic zones), leading to coagulative necrosis, protein denaturation and cell apoptosis.
Recently, the potential of FUS has been experimentally expanded by exploiting its ability to temporarily alter the permeability of the blood-brain barrier (BBB). As a result, it is possible not only to deliver drugs or genetic material to targeted brain areas but also to activate and modulate specific functional areas [1].
The permeabilization of the BBB can be performed through FUS together with microbubbles (MB) and/or nanoparticles (NP) that serve as carriers and vectors for drugs, including immunotherapy drugs, or genetic material. These are released directly into cells or into the brain interstitial system [2].
This new perspective on the use of FUS could be useful to treat a large number of pathologies in the neurological and neurosurgical fields. Presently, MRIgFUS has already been used in several clinical trials for the treatment of cerebral tumors (e.g., NCT02343991, NCT03551249, NCT03616860, NCT03712293, and NCT03714243), epilepsy (e.g., NCT03657056 and NCT02151175), and consciousness disorders in acute brain injury (e.g., NCT02522429), as a non-invasive alternative to deep brain stimulation (e.g., NCT02382965, NCT03717922, and NCT03347084), and as a method to improve the reuptake of β-amyloid in patients affected by Alzheimer's disease (e.g., NCT02986932).
MRIgFUS also offers another therapeutic strategy for neurodegenerative diseases. The transient permeabilization of the BBB could increase the clearance of β-amyloid plaques in an animal model of Alzheimer's disease, improving the bioavailability of endogenous antibodies and activating glial cells [3,4].
After the study of Raymond et al. [5], which demonstrated that FUS can make the delivery of immunotherapeutic agents more effective in transgenic mice with Alzheimer's disease, a more recent work demonstrated that the association of MRIgFUS and intravenous administration of BAM-10 anti-Aβ antibodies can mitigate β-amyloid plaque development. The anti-amyloid effect is confirmed by other studies that showed a significant improvement in cognitive tasks in tested animals and, at the same time, a reduction in β-amyloid plaques and an improvement of neurogenesis in the sonicated brain areas [6,7,8,9]. With reference to the neuroinducing and neuroprotective effects of FUS, Scarcelli et al. [10] suggested that they could positively induce the endogenous regulation of Brain-Derived Neurotrophic Factor (BDNF) expression. In the same way, many studies investigated the potential to promote neurogenesis and angiogenesis using FUS with pulse repetition frequency, which seems to increase BBB permeability better than FUS with a set-pressure strategy, or using microbubbles as a way to amplify these effects, opening the door to several therapeutic solutions in neurology and psychiatry [11,12,13].
However, despite the fact that FUS may be used in multiple fields, only a few studies have investigated the long-term risk of potential brain damage. In 2015, Downs et al. [14] first analyzed the clinical and neuroradiological (MRI) changes that FUS might cause in primates subjected to multiple sessions of sonication. The follow-up was performed at 20 months and did not reveal any permanent neurological damage. Similarly, other studies were conducted with the purpose of detecting issues related to repeated sonication of specific brain areas. O'Reilly et al. [15], by means of MRI scans and histological analysis, examined the safety and short-term effects of FUS in canine models with β-amyloid plaques. This work assessed the safety and efficacy of FUS but did not highlight the possible effects that it might have on an aged brain or on a brain affected by a neurodegenerative disease. Moreover, Kovacs et al. [16] have recently analyzed the histological and neuroradiological changes that repeated sessions of FUS + MB might induce in a murine model. In this work, the sterile inflammation that FUS usually promotes seemed to be linked to persisting alterations of the BBB, signs of vascular damage, inflammation, and neurodegeneration. All of these factors are common in traumatic brain injury.
In this context, the study of Blackmore et al. [17] has added data to our knowledge of secondary long-term effects of the permeabilization of BBB by SUS (scanning ultrasound) in aged animals.
In this work, the authors tested 12-month-old mice with a multimodal analysis, then exposed them to six weekly SUS treatments and compared the results with a control group. Subsequently, several features concerning spatial memory, metabolic and cell function shifts (detected by measuring changes in brain volume), and tissue microstructure (evaluated by diffusion tensor imaging, DTI) were analyzed in vivo. Then, the synaptic level of activity was evaluated both functionally by electrophysiology and morphologically by analyzing the structural anatomy of neurons. Each of these analyses demonstrated experimentally that SUS is not only a safe treatment in long-term follow-up but also shows a neuroprotective effect, especially in an aged brain. This work, even with the limitations already highlighted by the authors, reveals the advantage of utilizing a multimodal approach in the analysis of the histological and functional characteristics of sonicated tissues.
Alzheimer's disease and other dementias are serious conditions and common causes of mortality and morbidity among the elderly population, for which no effective therapies are available. It is therefore advisable to continue developing FUS in order to verify its safety and fully exploit its potential in clinical practice. New and important data could be derived from recent clinical studies concerning the use of FUS in neurodegenerative diseases.
|
2019-03-11T17:22:20.954Z
|
2019-01-01T00:00:00.000
|
{
"year": 2019,
"sha1": "63130d7608e9b10af3af314a8fb0b0a189fdc649",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.7150/thno.31765",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63130d7608e9b10af3af314a8fb0b0a189fdc649",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
244912974
|
pes2o/s2orc
|
v3-fos-license
|
1158. Pediatric Group A Streptococcal Peritonitis: A Single-Center Eleven Patient Case Series
Abstract Background Pediatric group A streptococcal peritonitis (GASP) is a rare but serious infection, with few cases reported in the literature. Utah has an unusually high incidence of invasive GAS (iGAS) disease, but the frequency and characteristics of pediatric GASP are unknown. Methods We performed a retrospective chart review to identify GASP in Utah children from 2000-2019. GASP was defined as isolation of GAS from peritoneal fluid or blood and clinical signs of peritonitis. Results : Eleven children with GASP were identified, with slight female predominance (n=6). Median age was 6 years; males were significantly younger than females (1.4 versus 7.2 years, p=0.01). GAS was isolated from 4 of 8 blood and 8 of 11 peritoneal cultures obtained. Peritoneal fluid PCR was positive for GAS in one patient. Ten patients underwent laparotomy. Peri-appendiceal inflammation prompted appendectomy in 7 patients; only one had pathologic findings of acute appendicitis. Four patients developed streptococcal toxic shock syndrome and 7 required intensive care. Non-white race (n=4) and lack of appendectomy (n=5) were associated with more severe outcomes. Median antibiotic duration was 27 days. Median hospitalization was 8 days. All patients survived. Figure 1. Schematic representation of GAS peritonitis patient clinical course. Each patient is represented by a single line. Duration of symptoms prior to hospitalization, as well as duration of hospitalization (day 0 representing admission), intensive care, antibiotic administration, and timing of procedural interventions are noted. Duration of antibiotics after discharge for patient 3 was unable to be verified, as indicated by a question mark. Hospitalization, general pediatric hospital care. PICU, pediatric intensive care unit. IR, interventional radiology. Conclusion We present the largest pediatric case series of GASP to date. Diagnostic hallmarks included gastrointestinal symptoms, fever, systemic inflammation, and peritoneal enhancement without an abdominal source. Peri-appendiceal inflammation was common, although acute appendicitis was rare, and appendectomy was associated with a less severe course. GASP should be considered in patients with acute abdominal processes given increasing incidence of iGAS infections. Disclosures All Authors: No reported disclosures
Session: P-64. Pediatric Bacterial Studies (natural history and therapeutic) Background. Conventional culture remains the gold standard to facilitate a targeted antimicrobial regimen in the treatment of bacterial infections. However, certain pediatric infections are caused by fastidious organisms and treatment with antibiotics prior to specimen collection may hamper growth of pathogens in routine culture. The use of 16S rRNA in culture negative infections has improved identification of bacterial pathogens in select scenarios. However, the specific impact of 16S rRNA on clinical decision making, especially in pediatric infections, is not well-defined. This study aims to elucidate the utility of 16S rRNA on clinical management of pediatric infections.
Methods. A retrospective analysis was done on different clinical specimens which had 16S rRNA performed from August 2016 -March 2020 in our institution. Detailed chart review was performed to determine how the 16S rRNA result impacted clinical decision making. Clinical utility was defined as change in patient's overall antimicrobial regimen, pathogen confirmation, and treatment duration.
Results. Seventy-four samples from 71 pediatric patients were included in the analysis: 32 (43%) were fluid specimens and 42 (57%) were tissue specimens. Significant clinical utility was identified in 30 (40.5%) of 74 clinical samples (p < 0.0001). Of all specimens, pulmonary samples yielded the most clinical utility (n=9, 30%), followed equally by joint fluid (n=6, 20%) and bone (n=6, 20%). There was no significant difference in clinical utility between fluid and tissue specimens (p= 0.346). In the 64 patients whose antimicrobial spectrum coverage was analyzed, the number of patients with broad-spectrum coverage decreased from 48 to 21 and the number with narrow-spectrum coverage increased from 16 to 43 after the 16S rRNA result, though not significantly (p= 0.4111). Among all patients included in the analysis, the median number of antibiotics used before the 16S rRNA result, 2, was significantly decreased to 1 (p < 0.0001).
Conclusion. 16S rRNA has a significant impact in terms of decreasing number of antibiotics used in treatment of pediatric infections. Pulmonary specimens have the highest clinical utility among all samples. Additional cost benefit analysis needs to be completed to further determine clinical benefit.
Disclosures. All Authors: No reported disclosures
|
2021-12-07T16:03:27.263Z
|
2021-11-01T00:00:00.000
|
{
"year": 2021,
"sha1": "9e6c36ad2342cf5a1f73aaa8489a411acba90fea",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/ofid/article-pdf/8/Supplement_1/S670/41521907/ofab466.1351.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83dfb1d41c84ce846ae2eb0cd5818132dbbf7cfa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
54727330
|
pes2o/s2orc
|
v3-fos-license
|
Impact of the in-medium conservation of energy on the π − / π + multiplicity ratio
An upgraded version of the isospin-dependent Tübingen QMD transport model, which allows the conservation of the total energy, is presented. This is achieved by including in the energy-balance equations the density-, isospin-asymmetry- and momentum-dependent in-medium baryon potential energies. It leads to an effective modification of particle production thresholds with respect to the vacuum ones. Compatible constraints for the symmetry energy stiffness from the π−/π+ multiplicity ratio and elliptic flow experimental data of Au+Au collisions at 400 MeV/nucleon can be extracted in this case. However, an important dependence of the π−/π+ observable on the strength of the isovector part of the Δ(1232) isobar potential is also demonstrated. The present lack of information on this quantity prevents a precise extraction of the value of the symmetry energy stiffness employing the mentioned observable alone.
Introduction
The π−/π+ multiplicity ratio (PMR) has been proposed as a suitable observable to constrain the density dependence of the symmetry energy (SE) above the saturation point [1]. Various attempts in this direction [2][3][4][5] have resulted in a confusing picture: constraints on the high-density dependence of the SE ranging from very soft to stiff have been determined by employing different transport models and/or parametrizations of the symmetry potential. Additionally, most models lead to a contradiction between the constraints extracted from the π−/π+ multiplicity ratio and from the neutron/proton elliptic flow ratio. Attempts to find a solution to this problem by studying the impact on the PMR value of in-medium modifications of the pion-nucleon interaction [6], of the kinetic part of the SE term [7], of the neutron skin thickness [8], or of particle production thresholds due to the inclusion of self-energy contributions [9,10] have proven unsuccessful.
Transport models that employ momentum- or isospin-dependent mean fields do not conserve the total energy at the microscopic level in two-body collision, resonance decay or meson absorption processes. This fact translates into a violation of energy conservation on an event-by-event basis and also when an average is performed over a large number of events. Consequently, an upgrade of the Tübingen transport model that alleviates this problem has been developed. All the relevant details of this upgrade, together with the results most relevant to the PMR problem, have been presented in detail in Ref. [11]. In this conference proceeding we will present in greater detail the motivation that has led to the mentioned upgrade and some relevant details of the model, and end with a few selected results.
The model
The starting point for the model employed in this study has been the QMD transport model developed in Tübingen [12], which has been upgraded to accommodate density-dependent cross-sections and various parametrizations of the optical potential and of the isovector part of the equation of state (EoS) of nuclear matter [13]. For the present study the Gogny-inspired parametrization of the SE has been used [14], which allows the stiffness of the isovector part of the EoS to be adjusted through a parameter denoted x. Positive and negative values of this parameter correspond to a soft and a stiff choice for the stiffness of the SE, respectively.
The poorly known in-medium potentials of baryonic resonances are chosen as follows. The isoscalar part is set equal to that of the nucleon, a choice common to most transport models. The isovector resonance potential is related to that of the nucleon by making use of the decay branching ratios of each isospin quadruplet component into the possible pion-nucleon pairs [15]. The total resonance potential can thus be written (Eq. 1) in terms of V_N, the isoscalar nucleon potential, and the isovector contribution V_v = δ, with the definition δ = (1/3)(V_n − V_p), V_n and V_p being the isovector components of the neutron and proton potential, respectively.
To justify the approximations introduced next, the extent to which the conservation law of total energy is obeyed is investigated, the results being presented in Fig. 1. The left panel presents results for three different parametrizations of the optical potential for central Au+Au collisions at an impact energy of 400 MeV/nucleon: the Gogny-inspired one of Ref. [14] and the two parametrizations (Hartnack-Aichelin) extracted from the analysis of proton-nucleus scattering data in Ref. [16]. It is observed that the amount of energy conservation violation (ECV) depends strongly upon the choice of the optical potential but is in all cases much larger than the rest mass of the pion. The isospin-dependent part of the potential and the inelastic channels are seen to impact the magnitude of the ECV rather modestly at this impact energy. The right panel of Fig. 1 presents the dependence of the amount of ECV on the kinetic energy of the projectile nucleus. A monotonically increasing behavior with this quantity is observed; however, a saturation phenomenon takes place towards higher impact energies, a trend that can be explained by the same behavior of the optical potential as a function of the momentum of the incident particle.
It is thus clear that transport models that fail to conserve the total energy at the level of a few GeV are not appropriate for describing the production of particles with rest masses much smaller than this value. The problem can be alleviated by including the in-medium meson and baryon potentials in the determination of the kinematics of the final states of two-body collisions, resonance decays and meson absorption processes. In these processes, elementary two-body reactions lose their local character, becoming part of a much more complicated N-body process which allows the conservation of total energy via exchanges mediated by the intermediate- and long-range part of the nucleon-nucleon interaction. In a Feynman diagrammatic picture, the leading-order contributions are expected to arise from initial-state and final-state interactions, while contributions originating from more complicated diagrams (rescattering terms) are assumed to be much smaller and thus are completely neglected. All the pertinent details of the approximations involved, the invariant masses at which elastic and inelastic channel cross-sections are evaluated, the needed modifications to the detailed balance formula, in-medium effects on cross-sections and the numerical approximations required to make such calculations feasible time-wise are presented in Ref. [11].
Selected results
The inclusion of contributions of the hadronic potentials in the equation of energy conservation leads to threshold shifts for particle production with respect to their vacuum positions [9]. In the case of π mesons the effect is much stronger for the negatively charged pion [11], which leads to a totally different dependence of the PMR on the SE stiffness compared with the case when total energy is not conserved: a higher PMR value is favored by a stiffer asy-EoS, a behaviour that was also reported by the study in Ref. [10]. This allows, using the prescription for the in-medium Δ potential of Eq. 1, the extraction of a constraint for the density dependence of the SE above the saturation point which is compatible with the one extracted from experimental data on the elliptic flow ratio of neutrons and protons [11]. Additionally, the dependence of the PMR on model parameters such as the compressibility modulus of the isoscalar part of the EoS, the parametrization of the optical potential, and the inclusion or exclusion of medium effects on elastic and inelastic elementary cross-sections is found to be small [11]. A comparison of theoretical and experimental data for the charged pion multiplicities up to impact energies of 1.0 GeV/nucleon shows good agreement (left panel of Fig. 2). The PMR is, however, found to be extremely sensitive to the strength of the isovector Δ potential and, to a smaller extent, to its isoscalar component, as shown in the right panel of Fig. 2. By changing the strength of the isovector Δ potential from 0 to 3 times that of Eq. 1, almost any value of the stiffness parameter x can be extracted from a comparison with the experimental data.
In conclusion, the conservation of the total energy in transport models is a mandatory ingredient if a consistent description of particle production is to be achieved.In particular, the multiplicities and multiplicity ratio of charged pions in heavy-ion collisions close to or slightly above the vacuum pion production threshold are greatly impacted by imposing this constraint consistently.Constraints for the symmetry energy dependence on density compatible with the ones extracted from others heavy-ion observables (most notably flows) can be extracted only within this scenario.However, it was also demonstrated that PMR is extremely sensitive to the strength of the unknown isovector Δ potential, which hinders at present the use of this observable for the purpose of determining the density dependence of the symmetry energy above the saturation point.
Figure 1: Left panel: Magnitude of the energy conservation violation in central Au+Au collisions at an impact energy of 400 MeV/nucleon. Results for three different parametrizations of the optical potential are presented: Gogny (full curve), old Hartnack-Aichelin (long-dashed curve) and new Hartnack-Aichelin (dash-dotted curve). Additionally, for the case of the Gogny potential, results for the cases when the isovector part of the potential (short-dashed curve) and the inelastic channels (dash-double-dotted curve) are ignored are presented. Right panel: Dependence of the magnitude of the energy conservation violation on the impact energy below 1 GeV/nucleon for the Gogny-inspired potential.
Figure 2: Left panel: Comparison between theoretical and experimental π− and π+ multiplicities for impact energies below 1.0 GeV/nucleon. Theoretical predictions for sub-threshold impact energies are also provided. Right panel: Sensitivity of the PMR to variations of the strength of the isoscalar (band) and isovector (curves) Δ(1232) potentials at an impact energy of 400 MeV/nucleon as a function of the stiffness parameter x. The experimental value is depicted by the horizontal band. Both panels display results for central 197Au+197Au collisions.
|
2018-12-07T03:36:48.380Z
|
2016-05-01T00:00:00.000
|
{
"year": 2016,
"sha1": "f6778e2377831b0acc9648f0418f3a558c01da12",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2016/12/epjconf_nn2016_07016.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9fe184e46e6809eeb9ee3fdec0c7684a6a61117e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
198791894
|
pes2o/s2orc
|
v3-fos-license
|
Football in Turkey from 1960 to Present Day: Unpreventable Violence
This study seeks to investigate the issue of violence in football in Turkey from 1960 to the present day, along with the incidents, riots and stampedes that occurred in relation to football violence. The study was conducted considering three main periods: from 1960 to 1980, from 1980 to 2000, and from 2000 to today. These particular periods were chosen because each had its own characteristics related to football-related violence and the factors that affected it. To collect data, 15 members of fan groups were interviewed about football-related violence. Furthermore, the study took into account not only the physical aspect of violence but also its social and psychological aspects. Findings revealed that Turkey has been a country where football-related violence has never been completely prevented, due to reasons such as coups, coup attempts, political issues, social behaviors shaped by these political issues, regulations and the media across these periods. Based on the participants' opinions, it was found that the main actors responsible for the violence are top officials in the football federation, politicians and the media. As a result, football-related violence remains a major problem in football games, not only on the pitch but also off the pitch.
Introduction
Football is a form of game that is extremely common and in great demand all over the world in all its aspects. Nearly every single country has indigenised this particular type of sport. It is unknown where and when exactly football was invented. However, according to the Federation Internationale de Football Association (FIFA), the contemporary history of the world's favourite game spans more than 100 years; it all began in 1863 in England, when rugby football and association football branched off on their different courses and the Football Association in England was formed (Fifa.com). In the wake of this development, professional leagues that encompassed games among rivals were launched and spread all over the world. Football did not remain confined solely to English territory and was adopted by a great many countries where it was considered the favourite game. In fact, the occasion that made football so popular was the inaugural FIFA World Cup, which was hosted by Uruguay in 1930. It was at that time that people started to regard football not merely as a game but as a lifestyle as well as an industry. This perspective led to the formation of various groups that subsequently affected the football game both positively and negatively. In a positive manner, football made people unite regardless of their race, socioeconomic status, education level and many other aspects. On the other hand, it also brought about a plethora of unfavourable issues that have not come to an end up to now. Among these unfavourable issues are hooliganism, aggression, riots, protests, violence and match-fixing. In fact, this study will mainly focus on the issue of violence, as violence has never been completely prevented despite all the measures taken by governing bodies as well as football associations. Violence is definitely opposed to the idea of the invention of football, as football is a type of sport that refers to tolerance, entertainment and mental comfort. Football is under the undesirable impact of violence, with problems related to violence occurring all over the world. Unfortunately, this is the case in every country, though in varying degrees. One of the countries where violence is experienced intensely in football games is Turkey.
Literature Review
A review of the literature shows that the issue of violence in football has scarcely been studied. Besides, the main points that have been taken into account are hooliganism, which has its origin in English territories and subsequently spread all over the world, and identities related to football violence. Among the scholars who have devoted time to carrying out international research on violence in football are Giovanni Carnibella et al. (1996), Martha Newson (2017), Ramon Spaaij (2008), Eric Dunning (1999), Jamie Cleland and Ellis Cashmore (2016), and Joel Rookwood and Geoff Pearson (2012). In their study titled Football Violence in Europe, Carnibella et al. (1996) stated that the game of football has been associated with violence since its beginnings in 13th century England. They reported that "football hooliganism" originated in England in the early 1960s, and has been linked with the televising of matches and with the "reclaiming" of the game by the working classes (ibid, p. 5). They also pointed out in their cross-national variations that Britain is not the only country where violence in football occurs and that countries including Germany, the Netherlands and Belgium have significant problems with similar levels of football-related violence (ibid. p. 7). In his study Football, Fan Violence, and Identity Fusion, which takes a cognitive approach to the phenomenon of football violence, Newson (2017) discussed two football cultures, one in the UK and the other in Brazil, to illustrate the ways in which identity fusion can help understandings of football violence. According to Newson (2017), as fan groups invest much time and financial resources in their teams' events and display visual symbols of allegiance to their teams, a particular inter-group social bonding occurs. Furthermore, Ramon Spaaij (2008) stated in his study called Men Like Us, Boys Like Them: Violence, Masculinity and Collective Identity in Football Hooliganism that football hooliganism is a complex, heterogeneous and dynamic phenomenon that should be studied in its different social and historical contexts and that the search for a general theory of football-related violence therefore seems misleading and futile. Besides, he argues that there exist some striking commonalities in the collective identifications of football hooligan formations in different national and local contexts (ibid. p. 3).
As regards research on football-related violence in Turkey, the country taken as the basis of this study, scholars in this field have mainly focused superficially on national and local issues in their studies (Çakmak and Çelik, 2016; Bulgu, 2005; İlhan, 2014; Yücel et al., 2015). As pointed out by Carnibella et al. (1996), Turkey is among the countries, including Greece, the Czech Republic and Albania, which were in the early stages of football-related violence in the mid-1990s and where sporadic violence was reported along with isolated incidents. Since the mid-1990s, Turkey has experienced a plethora of violent incidents related to football. Therefore, this issue should be handled in a great many contexts and with different approaches to fully grasp why football-related violence occurs in Turkey and how it should be prevented. This study will attempt to deal with the causes and analysis of football-related violence in Turkey and the potential measures that were and can be taken to prevent this phenomenon to a great extent.
From 1960 to 1980
Until the 1960s, the world experienced huge turmoil due to two world wars that had devastated it not only socially and economically but also politically and psychologically. Having struggled to dress the wounds stemming from these wars, the world was trying to find solutions to create a new world order. In fact, football was an inevitable part of this chaos, which also affected huge groups. Besides, football was used by dictatorships as a means of propaganda to influence these groups. For instance, as Eric Niiler (2018) pointed out, Adolf Hitler was hoping for a bit of revenge after losing the propaganda battle at the 1936 Berlin Olympics. Niiler also states that global tournaments like the World Cup are never free of politics, and that this was especially true in 1938 during the run-up to World War II, when the fascist leaders of Germany and Italy were eager to put their stamp on the final outcome, and that those games were supposed to be a showcase for the Nazi regime and the athletic superiority of the Aryan race.
However, in spite of all the measures taken by governing bodies, violence in football persisted. The turmoil arising from all these factors led football to be used as a means of expressing reactions, with or without the groups people belonged to. That being said, football violence turned into a major problem after the 1960s, with on-pitch and off-pitch riots and stampedes that were triggered by team supporters who today would be called hooligans. Among these riots were those that occurred in countries including England, Peru, Argentina, Germany, Italy and Congo, though the list is not exhaustive. One of the countries that had its share is Turkey, where football violence was sometimes at the top of the agenda.
Turkey underwent major problems, particularly arising from military-based administrations, among which were coups and memorandums, in the 1960s and 1970s. The violent atmosphere created by figures in senior positions during these years influenced society in an unfavourable way. The affected society tended to keep away from violence in football, which can be thought of as a partially low-level type of violence. On the other hand, the two worst incidents in the football history of Turkey occurred between 1960 and 1980, when coups and memorandums were preeminent issues. The worst incident occurred between the fans of the Kayserispor and Sivasspor clubs at the Atatürk Stadium of Kayseri in Turkey. After the home team went a goal ahead in the first half, supporters of both teams started to bully and harass each other with rocks in hand. These supporters were also armed with knives, pistols and many other weapons, which led to a much bigger riot that overflowed onto the pitch. The stampede caused by the fleeing crowd in front of the stand exits resulted in more than 40 deaths and at least 300 injuries. The incidents did not stop even after the game and spread across both cities, so that the Turkish government had to intervene to restore order. The second incident occurred between the soccer teams of Kırıkkale and Tarsus, leading to 4 deaths and more than 100 injuries. It broke out after the football players of the two teams fought each other before the game began. Thus, the match was postponed to a later date. On the match day, there were provocations by Kırıkkale supporters, who wrote "Death to Tarsus people" or "Champion Kırıkkalespor" on high hills, and 10 buses full of supporters of the Tarsus club came to the city of Kırıkkale to support their team. They were given 450 tickets to enter the stadium, but the stand allocated to them had room for only 200 people. The first half ended in a 0-0 draw and, after the second half began, both teams scored goals to end the match in a 1-1 draw. After the end of the match, as the goalkeeper of the home team punched the goal scorer of the away team, the fuse was ignited and the players of both teams started a brawl that escalated into a riot involving the angry supporters of both teams.
Though these matches are considered the worst in the history of Turkish football, there were also smaller-scale incidents that could be discussed in another study, as the main aim of this paper is to highlight the most severe occasions of football-related violence.
From 1980 to 2000
At the very beginning of the 1980s, Turkey was subjected to another military coup, considered the most far-reaching one, as it was staged not against a single ideology or social group but against the whole country. The figures involved in the coup ran a highly oppressive regime that silenced and suppressed the whole of society, resulting in curfews and immobilization within the country. Thus, it can be inferred that neither ordinary people nor football supporters dared to take to the streets or trigger a riot. As a result, the 1980s did not see any football-related violence or incidents, owing to the suppression in force at that time. Though the oppressive regime was overthrown in the late 1980s and football-related violence thus erupted again to a small degree, the period between the 1980s and 2000 saw only small-scale incidents. One of these occurred during a match between the Kocaelispor and Ankaragücü clubs, when supporters fought over a disagreement about their seats. Following the fight, a supporter was shot dead.
As Turkish society was on the verge of a huge transformation, these years were marked by reconciliation among rivals, and thus among supporters, who decided to bring peace to the stands, believing that nobody involved in football benefited from triggering riots and causing deaths. Yet it is also possible to state that, as of the mid-1990s, the media stepped in and became the precursor of a new era in football, one that would be closely associated with violence. This era would start at the very beginning of the millennium.
From 2000 up to Today
In the wake of developments triggered by the power of the media, the treatment of football as an industry, and booms in the success of Turkish football clubs both in their own league and in international tournaments, football culture and the on-pitch and off-pitch reactions of supporters were reshaped and brought into a new dimension. In addition, Turkey underwent a huge change as the country grappled with the economic crisis that emerged at the very beginning of the millennium, and this crisis could not be expected to bypass the football industry. Amid economic collapse and continual elections, society was striving to find a way out. As Turkish society has enormous admiration for football, its members came to regard football and the stands as a way to relieve stress through a sense of belonging to a specific club or group. Among the particular supporter groups known to have been established long ago are Çarşı for the Beşiktaş club, UltraAslan for the Galatasaray club, Genç Fenerbahçeliler for the Fenerbahçe club, Texas for the Bursaspor club and Nefer for the Eskişehirspor club, among others. However, the discourses of the media and of top-club executives, and the reactions of security forces and football federations, have played a crucial but negative role in reshaping and worsening the situation, with a huge impact on the stands and on supporters. All these factors have created a violent atmosphere that has proved impossible to prevent. Özgen and Balcı (2017) revealed that violent events in stadiums differ depending on the ages and educational levels of the fans as well as on how often they watch matches. Özgen and Argan (2017) found that fans' arrogance differs depending on how often they watch matches.
Some of the most unfavourable incidents occurred in the 2000s. In 2000, two Leeds United fans were killed in violent clashes between English and Turkish football supporters in Istanbul, and another man was seriously injured in fighting ahead of the UEFA Cup clash between the Premiership side and Galatasaray (BBC, 2000). In 2002, a match between Beşiktaş and Trabzonspor ended 5-0. After the end of the match, the supporters of the defeated team, Trabzonspor, took revenge by removing the seats of the stadium and scattering them all over the pitch, resulting in a riot that would further provoke the rival supporters later on. In another match played in 2002 between Fenerbahçe and Galatasaray, the two top clubs of Turkish football that have long been locked in an eternal rivalry, Fenerbahçe won 6-0, and the goalkeeping coach of the away team was injured by hard objects thrown from the stands. This incident overshadowed the victory of the home team. Such relatively small-scale incidents were followed by worse ones afterwards. In 2004, Beşiktaş met Çaykur Rizespor and, as reported by UEFA (2004), sixteen-year-old Cihat Aktas died after receiving two stab wounds when fighting broke out among a section of home supporters at half-time during the match at the Inönü stadium in Istanbul. According to the report, Aktas was taken to hospital but died subsequently (ibid, 2004). In 2005, Fenerbahçe hosted the British football club Everton for a friendly before the season started and won the match 5-0; a fan named Yusuf Behar was shot in the leg, and the source of the bullet was never found (Tinaz et al., 2015). After a match between Fenerbahçe and Çaykur Rizespor, while the bus carrying the Fenerbahçe team was travelling to the airport, people later identified as Trabzonspor supporters shot the driver; fortunately, the head of security sitting next to the driver stepped on the brakes and prevented a potential disaster. And finally, in 2019 Amedspor Faaliyetler and Sakaryaspor A.Ş. played a match that would never be forgotten due to an unprecedented incident. Mansur Çalar, the captain of the home team Amedspor Faaliyetler, slashed players of the away team while the match was going on and received a lifetime ban, with his licence revoked.
Unpreventable Violence
Particularly as of 2010, nearly all the stadiums in Turkey were renovated, technological devices were installed at all stadiums, new laws and schemes were introduced, and punishments and security measures were intensified. However, violence remained a major problem, with a rise in the number of violent incidents, even though the numbers of deaths and injuries decreased remarkably. One of the outstanding developments regarding violence in sport, and in football in particular, as violence was mainly associated with football, was the introduction of Law No. 6222 on the Prevention of Violence and Disorder in Sports in July 2011. This law also brought with it the Passolig e-ticket scheme, which required supporters to hold a card in order to attend matches. In fact, this parallels actions carried out by the English Football Association to prevent hooliganism and violence in football. Through the use of measures taken by the police, legislation by the government and rules set in place by the English Football Association, England has taken the lead in regulating and preventing the problem (Footballnetwork, n.d.).
Despite all these measures and reforms, as well as regulations and schemes, violence did not cease to be a major problem in sport and football, as it still occurs psychologically and physically on and off the pitch. Turkey is one of the countries that still have much to do to prevent violence in football, as the society is not yet ready to overcome the issue completely. The notion of violence is so embedded in the society that this ingrained feeling is not easy to eradicate completely, though the situation does not seem as bad as it was decades ago. Considering that the media in Turkey take an unfavourable stance towards football supporters, and that supporters themselves adopt a disruptive rather than a constructive discourse, it seems unlikely that violence will be removed from the minds of supporters and from football fields. This also stems from the very nature of Turkish society, which approaches violence as a part of itself. Indeed, Turkish people are very naïve and understanding, but they do not tend to exhibit this type of behaviour when violence is at stake.
Sampling
Data collection through in-depth interviews is highly popular in qualitative research and provides in-depth information related to the research phenomena. In this context, purposeful sampling was used to reach 15 members of football fan groups from the relevant periods in order to investigate the unpreventable violence in Turkish football during the periods in question. Patton (2002) states that in qualitative research the researcher may stop the data collection process when the data begin to repeat themselves, and he defines this as the saturation point in data collection. As a result of the interviews conducted with the relevant fans, it was found that the saturation point described by Patton (2002) was reached, and the data collection process was concluded.
Data Collection Tool
A semi-structured interview form was prepared to conduct the interviews with the fans included in the study. The interview form approach described by McCracken (1988) was used. In this context, a literature review related to the research phenomena was carried out, and an interview form including 5 questions in total was prepared, drawing on the opinions of academics who have published studies on violence in sport.
Data Collection
Interviews were conducted in open public cafes in accordance with the semi-structured interview procedure. Silverman (1993) argues that researchers must refrain from judgemental and directive reactions. In this context, the researcher refrained from being directive in order to remain neutral. Interviews lasted between 27 and 48 minutes. All interviews conducted as part of the study were recorded with a voice recorder, and all data obtained during the in-depth interviews were transferred to a computer.
Data Analysis
The data obtained were analysed in accordance with the basic methodological principles of descriptive analysis. In this context, the NVivo program, which is commonly employed in qualitative research, was used. A coding list was formed from the most frequently repeated data in this study. Four themes were formed by the researcher in line with the relevant coding list. The themes were then presented together with a discussion of the literature related to the participants' opinions.
Results
Data obtained from the participants were discussed and presented through four themes. In this sense, it was determined, in line with the opinions of the participants, that there were four main factors that led to the occurrence of football-related violence during the periods in question.
The impact of media
T-9: "Turkish Football Federation, Referees, Administrators, Fans, and media are the major responsible actors of the football-related violence."
T-11: "I think that those who are mainly responsible for the violence are administrators and media, and that there is a great deal of advertisement of football and people are provoked."
The impact of referee
T-7: "Decisions given by referees maximise the violence, thus referees should be given seminars and those who are capable should be assigned with the elimination of partisan referees. Biased referees should overcome bias and fans should be trained."
T-4: "Our referees should be neutral as a whole, and the football community, in particular the authorities, should concretize their abstract discourses against this situation, at least to contribute to decreasing the level of violence."
The impact of administration
T-3: "The only one, who is responsible for the violence, I believe, is those who administrate." "The attitudes, statements, releases, and acts of the football club executives enhance the violence."
The impact of politics
T-1: "The statements of politicians that disrupt and dissociate the society also affect the sports and lead to the expression of violent feelings of people."
T-10: "The impact of politicians on the sports and football, and the belief that politicians favour some certain teams, though the football federation is autonomous, cause the fans of other teams to exhibit violent behaviours."
Conclusion
Violence has always been associated with football, though the notion of violence is totally opposed to the idea that football is a game involving entertainment, comfort, relief from stress and the union of people regardless of race, colour or any other characteristic. From the very beginning of the inaugural World Cup up to now, violence has been present either psychologically or physically, and it has sometimes been carried out under the influence of political figures or events. Nearly every country in the world has been exposed to football-related violence to some degree. Nonetheless, countries such as England, Argentina, Peru and Turkey have felt the very nature of football-related violence on their territories. Indeed, the term hooliganism was coined in England, where the phenomenon was first observed. Turkey is one of the countries that have not been able to eradicate football-related violence. This paper focused on particular periods when violence in football was a huge problem and had to be dealt with. Based on the opinions of the participants included in the study, it was determined that people mainly think that those responsible are top officials in the industry, politicians and the media. Though they also think that referees should be trained to higher standards, referees are seen as less problematic than other figures that may provoke football-related violence.
Considering the period between 1960 and 1980, Turkey experienced one coup and one memorandum carried out by the military, and society was thus extensively affected by these events, the results of which could also be observed in football. Though there was no impact of the media or of other issues that could trigger violence in football in Turkey, as technology was not advanced enough at that time, violence somehow occurred, and this can be explained only by the violent atmosphere created by the two military-based events. Though Turks were still struggling to bind up the wounds of a huge war that had broken out at the beginning of the 20th century, they found themselves in a violent atmosphere again when these military-based events emerged. As could be expected, this atmosphere negatively affected the game of football.
Considering the period between 1980 and 2000, the atmosphere was not very different from that of the 1960s and 1970s. In addition, Turkey underwent major transformations and an economic crisis, along with coups or coup attempts. Although attempts were made to eradicate the violent culture that had formed, through peace processes among rival groups of supporters, violence remained embedded in the society and would continue afterwards.
Finally, though the 21st century can be considered a period in which the number of deaths and injuries due to violence in football decreased, violence manifested itself psychologically in the minds of society and of football supporters and acted as an oppressive factor. This psychological pressure could not be contained and led to the continuation of football-related violence, contributing to the formation of a hate-speech environment among rival groups of football supporters. Unfortunately, this atmosphere remained fixed in the minds of the people who were involved in violence in football.
Web-Based Interventions for Weight Loss or Weight Loss Maintenance in Overweight and Obese People: A Systematic Review of Systematic Reviews
Background: Weight loss is challenging and maintenance of weight loss is problematic. Web-based programs offer good potential for delivery of interventions for weight loss or weight loss maintenance. However, the precise impact of Web-based weight management programs is still unclear. Objective: The purpose of this meta-systematic review was to provide a comprehensive summary of the efficacy of Web-based interventions for weight loss and weight loss maintenance. Methods: Electronic databases were searched for systematic reviews and meta-analyses that included at least one study investigating the effect of a Web-based intervention on weight loss and/or weight loss maintenance among samples of overweight and/or obese individuals. Twenty identified reviews met the inclusion criteria. The Revised Assessment of Multiple SysTemAtic Reviews (R-AMSTAR) was used to assess methodological quality of reviews. All included reviews were of sufficient methodological quality (R-AMSTAR score ≥22). Key methodological and outcome data were extracted from each review. Results: Web-based interventions for both weight loss and weight loss maintenance were more effective than minimal or control conditions. However, when contrasted with comparable non-Web-based interventions, results were less consistent across reviews. Conclusions: Overall, the efficacy of weight loss maintenance interventions was stronger than the efficacy of weight loss interventions, but further evidence is needed to more clearly understand the efficacy of both types of Web-based interventions.
Introduction
Obesity and overweight have reached epidemic proportions globally and pose a major risk for serious chronic diseases, including type 2 diabetes, cardiovascular disease, hypertension, sleep apnea, osteoarthritis, and certain forms of cancer [1].Such conditions may further impact individuals' quality of life and well-being [2].Moreover, people suffering from weight disorders are at greater risk of social, emotional, and psychological problems such as depression, poor self-esteem, and social isolation [3].Functional interventions aimed at reducing weight and maintaining weight loss, while working on related pathologies, are typically combined treatment options (nutritional, physical, behavioral, psychological, pharmacological, surgical) [4].Although these usually lead to short-term weight loss, long-term maintenance of results is rarely achieved [5,6].Consequently, alternative integrative programs aimed at supporting long-lasting weight loss are typically needed.As a result, a number of Web-based interventions for weight loss or weight loss maintenance have been recently developed, and their efficacy has been tested in a number of randomized controlled trials (RCTs).Web-based therapy could help patients overcome barriers to treatment such as long distances to clinics and long waiting times.Most Web-based interventions have zero waiting time, and all are considerably cheaper than face-to-face therapy, enabling widespread dissemination of treatment [7].Furthermore, Web-based interventions are cost-effective and provide greater user access, flexibility, and anonymity [8].Therefore, Web-based interventions are especially relevant for patients who might not otherwise access treatment for reasons such as fear of social stigma associated with seeking treatment.
The published systematic reviews and meta-analyses of Web-based interventions for weight loss and weight loss maintenance reveal conflicting conclusions.Thus, the purpose of this meta-review was to (1) examine the published systematic reviews that included at least one study assessing the efficacy of a Web-based intervention for weight loss and/or weight loss maintenance for samples of participants who are either overweight or obese, (2) produce a summary of the scientific evidence, (3) identify the strengths and weaknesses of Web-based interventions to help clinicians select the best treatment option for their patients, and (4) provide empirically supported suggestions for practice.
Methods
This review was carried out according to the guidelines proposed by Smith et al [9].The protocol for this study was registered in 2015 in the International Prospective Register of Systematic Reviews (PROSPERO).
Inclusion and Exclusion Criteria
Given the absence of an established standard definition for systematic reviews, the following inclusion and exclusion criteria provide the parameters used for defining systematic reviews for this meta-review.Only reviews that satisfied the following criteria were included: (1) used a systematic review method (eg, critical review, literature review, meta-analysis), (2) indicated the method for identifying and evaluating studies for inclusion, (3) included at least one study assessing the efficacy of a Web-based intervention for weight loss and/or weight loss maintenance on the absolute variation and/or the change in percentage of body weight or body mass index (BMI) for a sample of overweight and/or obese people, and (4) received a methodological quality score of 22 or higher on the Revised Assessment of Multiple SysTemAtic Reviews (R-AMSTAR; see methodological quality assessment section for details).There were no restrictions for participant age, publication year, or publication language to obtain the maximum number of reviews possible.Non-English publications were translated to facilitate data extraction.
Search Methods
As suggested by Smith et al's guidelines [9], the following electronic databases were searched: PubMed, the Cochrane Library, PsycINFO (ProQuest platform), and the Centre for Reviews and Dissemination (CRD), which includes the Database of Abstracts of Reviews of Effects (DARE). Search terms were identified for each of the following relevant categories: population (obese, obesity, overweight), intervention (online, Web, computer), outcome (weight loss, weight loss maintenance), and review type (review, meta-analysis). Boolean searches were then conducted to systematically link the various combinations of category terms (and their variations through truncation) as search terms, Medical Subject Headings (MeSH) keywords, and Emtree keywords to identify potential systematic reviews [10]. In addition, the contents of Obesity Reviews, Annual Review of Public Health, and the Journal of Medical Internet Research were searched using the following syntax: (review OR meta-analysis) AND (online OR web OR computer) AND ("weight loss") AND (obes* OR overweight).
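As a purely illustrative sketch of how such Boolean strings can be assembled (the review itself ran its searches directly in each database; the term lists below are abbreviated and the function names are hypothetical), the category terms can be crossed programmatically:

```python
from itertools import product

# Abbreviated, illustrative term lists for each search category.
categories = {
    "population": ["obes*", "overweight"],
    "intervention": ["online", "web", "computer"],
    "outcome": ['"weight loss"', '"weight loss maintenance"'],
    "review_type": ["review", "meta-analysis"],
}

def build_single_query(categories):
    """Combine categories with AND, ORing the terms within each category."""
    parts = ["(" + " OR ".join(terms) + ")" for terms in categories.values()]
    return " AND ".join(parts)

def build_term_combinations(categories):
    """Return one Boolean string per combination of individual category terms."""
    return [" AND ".join(combo) for combo in product(*categories.values())]

if __name__ == "__main__":
    print(build_single_query(categories))
    # (obes* OR overweight) AND (online OR web OR computer) AND ...
    print(len(build_term_combinations(categories)), "term-by-term combinations")  # 24
```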
As a supplement to electronic searching, reference lists were checked to identify additional potential systematic reviews.The search was performed for records published through December 2015.
Selection Process
Titles and abstracts of records resulting from the literature search were independently screened by authors FR and SP.When further clarification was needed, the full text was retrieved.Disagreements were resolved by a third author (AS).In accordance with one of Smith et al's recommendations [9], the review team included at least one person with methodological expertise in conducting systematic reviews (GMM and AS) and at least two experts on the topic under review (GC, GMM, and GP).
Data Extraction and Management
Authors SP and FR independently extracted the following data and resolved any disagreements in consultation with a third author (AS): (1) authorship and publication-related information; (2) aims of the review; (3) searched databases; (4) inclusion criteria; (5) number of included studies; (6) overall sample size and participant age, gender, race, and BMI; (7) overall length of treatment, including follow-up time points; (8) country in which the interventions were developed; and (9) outcomes of the interventions. Reviews that included studies that did not investigate the efficacy of weight loss and/or weight loss maintenance programs among obese and overweight participants were coded for the total number of included studies and the number of included studies involving treatments for weight loss and/or weight loss maintenance in a sample of obese and/or overweight persons. Additional relevant information was obtained by retrieving original studies and contacting review authors as necessary for coding purposes.
Methodological Quality Assessment
The R-AMSTAR [11] was used to quantitatively measure the methodological quality of included systematic reviews by assessing the presence of the following 11 domains: (1) an a priori design, (2) duplicate study selection and data extraction, (3) a comprehensive literature search, (4) the use of publication status as an inclusion criterion, (5) a list of included/excluded studies, (6) characteristics of included studies, (7) documented assessment of the scientific quality of included studies, (8) appropriate use of the scientific quality in forming conclusions, (9) the appropriate use of methods to combine findings of studies, (10) assessment of the likelihood of publication bias, and (11) documentation of conflicts of interest. Each domain's score ranged between 1 and 4, and the R-AMSTAR total score had a range of 11 to 44. A total score of 22 (ie, a mean of two criteria satisfied for each item) was required for systematic review inclusion, thus excluding low-scoring systematic reviews [11]. The authors in charge of extracting data from the selected reviews (SP and FR) also preliminarily and independently assessed the methodological quality of the contributions. A third author (AS) resolved any discrepancies.
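For illustration only, the inclusion rule described above (11 domains each scored 1 to 4, a total between 11 and 44, and a retention threshold of 22) can be expressed as a short sketch; the function and variable names are hypothetical, and the scoring of individual domains is assumed to have been done by the raters beforehand:

```python
def r_amstar_total(domain_scores):
    """Sum the 11 R-AMSTAR domain scores (each between 1 and 4); totals range 11-44."""
    if len(domain_scores) != 11:
        raise ValueError("R-AMSTAR has 11 domains")
    if any(not 1 <= s <= 4 for s in domain_scores):
        raise ValueError("each domain score must lie between 1 and 4")
    return sum(domain_scores)

def retained_for_review(domain_scores, threshold=22):
    """Apply the inclusion rule: keep reviews scoring at least 22 overall."""
    return r_amstar_total(domain_scores) >= threshold

# A hypothetical review scoring 2 on every domain is just retained (total = 22).
example_scores = [2] * 11
print(r_amstar_total(example_scores), retained_for_review(example_scores))  # 22 True
```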
Data Synthesis
First, reviews were analyzed and relevant information was extracted and recorded.Then, the results across the different reviews were aggregated through a second-order qualitative synthesis of treatment efficacy conclusions for weight loss interventions and then for weight loss maintenance interventions.Quantitative results were recorded but no second-order overall effect was calculated from the included meta-analyses including similar sets of studies because a meta-analysis of meta-analyses is possible only if the data from individual studies have not been used in more than one meta-analysis [9].Thus, pooled effects of overlapping reviews were only compared in order to investigate the consistency of results.
Ultimately, the strengths and weaknesses of the various Web-based interventions listed across the reviews were summarized.
Included Reviews
A flowchart indicating the selection of included systematic reviews is presented in Figure 1. Searches of electronic databases identified 561 reports, of which 43 were duplicates and 437 were excluded based on information from the title and abstract. The remaining 81 reports were then evaluated for inclusion by reviewing the full text of each report, resulting in the exclusion of 61 reports for the following reasons (3 reports were omitted for more than one reason): (1) no systematic review was presented (n=17), (2) none of the included studies evaluated the efficacy of a Web-based treatment for weight loss and/or weight loss maintenance (n=34), (3) the included study samples were not exclusively comprised of overweight and/or obese participants (ie, study samples also comprised normal-weight participants; n=9), (4) weight change (loss or maintenance) was not measured or summarized in terms of absolute variation and/or change in percentage of body weight or BMI (n=2), and (5) the review R-AMSTAR methodological quality score was less than 22 (n=2). A total of 20 systematic reviews were finally included [12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31]. Multimedia Appendix 1 details the reasons for exclusion, and whether inclusion or exclusion was based on information from the title and abstract or the full text, for each of the evaluated reports.
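As a quick arithmetic check of the selection flow reported above, the sketch below reproduces the counts from the text; it assumes that each of the 3 reports omitted for more than one reason was counted under exactly two reasons, and the variable names are illustrative:

```python
# Record counts reported in the text (variable names are illustrative).
identified = 561
duplicates = 43
excluded_on_title_abstract = 437
full_text_assessed = identified - duplicates - excluded_on_title_abstract
print(full_text_assessed)  # 81

# Full-text exclusion reasons; 3 reports were counted under two reasons each.
exclusion_reasons = {
    "not a systematic review": 17,
    "no Web-based weight intervention study included": 34,
    "sample not exclusively overweight/obese": 9,
    "weight change not measured as specified": 2,
    "R-AMSTAR score below 22": 2,
}
overlapping_reports = 3
reports_excluded = sum(exclusion_reasons.values()) - overlapping_reports
included_reviews = full_text_assessed - reports_excluded
print(reports_excluded, included_reviews)  # 61 20
```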
The number of databases searched for each systematic review ranged from 1 [13,23] to 10 [27]. A total of 351 studies were evaluated across the 20 systematic reviews, of which only 83 evaluated Web-based interventions for weight loss and/or weight loss maintenance. Of the 83 studies, 73 evaluated Web-based interventions for weight loss and 10 evaluated Web-based interventions for weight loss maintenance (see Multimedia Appendix 2 for details). The 83 studies were mostly conducted in the United States, Europe, or Australia; 51 of the 83 studies were included in only one systematic review and the remaining 32 studies were included in more than one systematic review. The study that was included in the most systematic reviews [37] appeared in a total of eight reviews.
Methodological Quality of Included Reviews
The R-AMSTAR scores of the 20 included reviews (Table 3) ranged from 23 to 43 points, with a mean of 30.5 (SD 5.5) and a median of 30.5 (IQR 9.25). The highest mean score across the 20 systematic reviews (mean 4, SD 0) was for providing the characteristics of the included studies (item 6), whereas the lowest mean score was for the disclosure of conflicts of interest (item 11; only Hartmann-Boyce et al [27] fully satisfied this criterion).
Efficacy of Web-Based Interventions for Weight Loss and/or Weight Loss Maintenance
Effect sizes of Web-based interventions for weight loss and weight loss maintenance, together with the specific comparison interventions, are reported in Table 4.The intervention purpose (ie, weight loss or weight loss maintenance) is also specified in Table 4.In addition, details for each meta-analysis, such as the number of studies used to calculate effect sizes, the heterogeneity among included studies, and the combined sample size are also reported in Table 4. Except for Kodama et al [15], all meta-analyses performed quantitative data synthesis separately for both the type of condition compared to the Web-based intervention and whether the purpose of the intervention was weight loss or weight loss maintenance.Given that several primary studies were included in more than one meta-analysis, issues related to statistical independence prevented meta-meta-analysis of the effect sizes across the meta-analyses.Overall, the meta-analysis effect sizes were relatively small in magnitude, suggesting that although Web-based interventions were significantly more or less effective than the comparison conditions, this difference may have little clinical relevance.
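To make the reported metrics concrete, the following sketch shows how an inverse-variance pooled mean difference, the DerSimonian-Laird between-study variance (τ²), and the I² statistic referenced in Table 4 are typically computed; the study-level values are invented for illustration and are not taken from any included meta-analysis:

```python
def pooled_mean_difference(effects, variances):
    """Inverse-variance pooling of study-level mean differences.

    Returns the fixed-effect estimate, Cochran's Q, the DerSimonian-Laird
    between-study variance (tau^2), I^2 (in %), and the random-effects estimate.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                               # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = k - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                  # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0            # inconsistency (%)
    w_re = [1.0 / (v + tau2) for v in variances]                   # random-effects weights
    random_eff = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return fixed, q, tau2, i2, random_eff

# Invented example: mean differences in kg (negative favours the Web-based arm).
effects = [-1.5, -2.2, -0.4]
variances = [0.30, 0.25, 0.40]
print(pooled_mean_difference(effects, variances))
```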
Web-Based Interventions Versus Control Conditions (Minimal Interventions)
Across reviews, Web-based interventions were found to be significantly more effective than minimal treatments in reducing weight.Specifically, Wieland et al [18] found that Web-based interventions were significantly more effective than minimal treatments in reducing weight and BMI at 3-and 6-month follow-ups.Young et al [26] and Weinstein [12] also obtained a significant difference in weight change favoring Web-based interventions over controls.Additionally, in Raaijmakers et al's review [31], six technological-based interventions generated a significantly greater effect in terms of weight loss than no treatment conditions.In Bennett et al's review [20], more than half of the identified trials reported significantly greater weight loss outcomes for eHealth interventions compared to control conditions.Similar results were found in Levine et al's review [30], in which 12 Web-based interventions (75%) resulted in greater weight loss compared to control conditions.Finally, Grunenberg et al [21] found a Web-based intervention to be more effective than control groups (waitlist and standard waiting treatment) at reducing both BMI and weight.Neve et al [14] was the only review to report no significant difference in weight loss between Web-based interventions and control groups at treatment termination.This contrasting finding may be attributable to Neve et al [14] including fewer studies that tested this comparison (n=3) than the other reviews that found Web-based intervention to be more effective at promoting weight loss compared to control conditions.Also, the meaning of the term "control condition" varied across reviews from no intervention [31] to providing participants with a weight loss manual [14].Additionally, Neve et al [14] combined treatment effects irrespective of time points (from 16 weeks to 12 months), whereas other reviews pooled the studies' effects separately for each follow-up point.
Web-Based Interventions Versus Non-Web-Based Comparable Interventions
Included systematic reviews that included studies comparing Web-based treatments with non-Web-based comparable interventions presented inconsistent results.For example, Raaijmakers et al [31] found Web-based interventions to be more effective than usual care, and Tsai et al [22] found greater weight reduction among the participants assigned to a Web-based condition (ie, Weight Watchers) than those receiving self-help interventions.Levine et al [30] also concluded that technology-based interventions can successfully supplement primary care interventions for weight loss outcomes.Finally, Weinstein [12] found that Web-based interventions are significantly more effective than their non-Web-based counterparts both when the latter consists of usual care or when participants receive information from a manual.
On the other hand, other reviews found Web-based interventions to be as effective as non-Web-based comparable interventions. Specifically, Burke et al [25] examined three studies on online dietary self-monitoring and found that online treatments resulted in significant within-group weight loss; however, when compared with a paper diary self-monitoring condition, the pooled effect size was no longer statistically significant. In addition, Bennett et al [20] found that eHealth approaches led to relatively modest weight loss outcomes with undetermined clinical significance when compared with traditional individual and group-based interventions. Finally, Reed et al [17] found that computer-based technology led to significantly less weight loss than comparable interventions. Therefore, the research on the efficacy of Web-based interventions compared to similar non-Web-based interventions is inconclusive. This lack of consistency may be due to the large heterogeneity of non-Web-based comparison interventions in the primary studies. For example, the non-Web-based comparison interventions ranged from manualized interventions to a counseling program in the studies included by Raaijmakers et al [31].
Web-Based Interventions Versus Face-to-Face Interventions
This section summarizes results from systematic reviews in which Web-based interventions were compared with non-Web-based counterparts involving face-to-face interventions.In Wieland et al [18], face-to-face interventions were more effective at promoting weight loss than Web-based interventions.Also, Raaijmakers et al [31] reviewed a primary study in which face-to-face treatment led to a significantly greater reduction in weight than Web-based intervention.Similarly, Kodama et al [15] concluded that using a Web-based intervention as a substitute for a face-to-face intervention produced unfavorable results.
Web-Based Interventions Versus Hybrid Interventions Versus Face-to-Face Interventions
Web-based interventions were further compared with hybrid interventions (ie, including both Web-based and non-Web-based components) in several systematic reviews. For example, Kodama et al [15] came to the conclusion that adding face-to-face interventions to Web-based interventions increases the impact of the Web-based interventions on weight loss. In contrast, Wieland et al [18] reported that Web-based interventions and hybrid conditions (ie, Web-based intervention plus face-to-face treatment) did not differ significantly in their effects. In the study reported by Wieland et al [18], the hybrid condition was also compared with the face-to-face intervention without Web-based components. This pairwise comparison indicated that mean weight loss achieved by face-to-face treatments was significantly greater than mean weight loss achieved by hybrid conditions. In comparison, Reed et al [17] determined that computer-based treatments combined with standard interventions (ie, behavioral programs, face-to-face treatments) resulted in significantly more weight loss than standard interventions only, at least when short-term effects were considered. Similarly, Tsai et al [22] found significantly greater weight loss in participants receiving a Web-based treatment (ie, the Weight Watchers program) combined with individualized contacts than in participants receiving a face-to-face intervention. Due to these contrasting results, it is not clear whether hybrid interventions are more effective in increasing weight loss than single-component interventions (ie, either only Web-based or only non-Web-based).
Enhanced Web-Based Interventions Versus Basic Web-Based Interventions
Several systematic reviews also compared the effects of Web-based interventions that differed on both the interaction level and the extent to which they were tailored to users' needs.Osei-Assibey et al [24] and Hartmann-Boyce et al [27] reported that Web-based tailored programs were more effective in weight loss than information-only websites, despite the disappearance of this difference by 18 months after treatment [27].Levine et al [30] concluded that interventions including clinician-guided software or feedback from personnel promoted greater weight loss than fully automated interventions, thus underlining the importance of the interactive component.Both Neve et al [14] and Wieland et al [18] were also in agreement about Web-based interventions with interactive components being effective in reducing weight.Specifically, the enhanced Web-based interventions considered in both reviews included additional programs, such as email-based behavioral therapy delivered by a doctoral-level therapist (including feedback and behavioral lessons), behavioral e-counseling provided by a counselor (weekly email behavioral counseling and feedback), and automated e-counseling (weekly automated and tailored messages).Sharma [23] reported a greater weight reduction in behavioral e-counseling conditions compared to basic Web-based programs, and Manzoni et al [16] concluded that Web-based behavioral programs enhanced by tailored feedback or self-monitoring resulted in more effective weight reduction than education-only Web-based interventions.Similarly, Osei-Assibey et al [24] found that weight change was greater for Web-based programs supporting collaborative interactions than for Web-based educational interventions.Furthermore, Weinstein [12] concluded that online counseling may be a valid alternative to time-consuming clinical programs and health care costs.Still, in Altman and Wilfley [28], an included study revealed a Web-based lifestyle behavior modification program to be more effective that a Web-based health education program at treatment termination, but not at 2-year follow-up (probably because program usage decreased over time).There was some evidence that website usage was associated with enhanced outcomes.For example, one study included in Chang et al [19] reported a Web-mediated walking program that was administered both alone and in conjunction with online community components.No differences were found in physical activity outcomes between participants who had access to social media versus those who did not; however, among participants using online communities, higher use of social media was associated with greater weight loss.Overall, these findings suggest that tailored and interactive Web-based interventions promote greater weight loss than basic Web-based interventions (ie, delivering information via the Internet).However, results also indicate that utilization of Web-based resources has potential to boost treatment effectiveness.
Within-Subject Comparisons
Arem and Irwin's review [13] summarized the results of studies measuring within-group effects of Web-based interventions by comparing weight outcomes before and after treatment.Findings indicate Web-based interventions caused a decrease in weight ranging from 0.8 kg (considered to be natural noise) to 4.9 kg.The authors concluded that the large degree of treatment heterogeneity across studies reduced their ability to make reliable conclusions.
Web-Based Interventions Versus Control Conditions (Minimal Interventions)
Six reviews compared Web-based weight loss maintenance interventions with control conditions with consistent results.Specifically, Neve et al [14], Manzoni et al [16], Gilmartin and Murphy [29], Young et al [26], Bennett et al [20], and Wieland et al [18] found that Web-based interventions were, on average, significantly more effective than minimal interventions in promoting weight loss maintenance.
Web-Based Interventions Versus Non-Web-Based Comparable Interventions
Two systematic reviews reported results of studies comparing Web-based interventions for weight loss maintenance with non-Web-based comparable interventions.Kodama et al [15] concluded that, in comparison with non-Web-based conditions, Web-based programs were ineffective.In contrast, Bennett et al [20] reported on a study in which an interactive Web-based intervention was compared to a monthly face-to-face or telephone-based intervention.In this case, the amount of weight regained did not differ significantly between the two interventions.Overall, the inconsistent results for this particular comparison of treatments may be due to the diverse characteristics of the non-Web-based interventions that were provided.
Web-Based Interventions Versus Face-to-Face Interventions
Chang et al [19], Weinstein [12], Neve et al [14], and Manzoni et al [16] reported that maintenance of weight loss was similar between Web-based and non-Web-based face-to-face interventions. In comparison, Gilmartin and Murphy [29] and Wieland et al [18] concluded that Web-based treatments were less effective than face-to-face interventions, especially if the latter were intensive and not minimal [12]. Specifically, Gilmartin and Murphy [29] stated that face-to-face interventions and facilitator-led interventions were more effective than remotely delivered methods such as Web-based interventions. Finally, Wieland et al [18] referred to three studies comparing face-to-face interventions with Web-based interventions for weight loss maintenance. Both minimal (once monthly or less) and intensive (more than once per month) face-to-face interventions were found to be more effective than computer-based interventions. However, the amount of weight lost by persons assigned to the control (minimal) conditions was relatively small and was not maintained in the long term, making the clinical significance of these differences unclear.
Strengths and Weaknesses of Web-Based Interventions for Weight Loss and Weight Loss Maintenance
Eleven systematic reviews provided information about the strengths and weaknesses of the evaluated Web-based interventions, with seven of them specifically identifying three advantages of Web-based interventions: (1) they may enhance perceived self-control within treatment; specifically, Levine et al [30], Manzoni et al [16], Kodama et al [15], and Raaijmakers et al [31] pointed out that Web-based interventions allow people to self-monitor their weight and behaviors, thereby increasing their perceived sense of control and ultimately reducing the number of dropouts [15]; (2) they may facilitate patient-patient and patient-expert interactions, thus allowing people to receive regular, consistent feedback on their behaviors and answers to their questions [16,30,31]; and (3) Web-based interventions for weight loss are more cost-effective than standard treatments [31].
Only three systematic reviews reported weaknesses associated with Web-based interventions for weight loss and weight loss maintenance.Arem and Irwin [13] reported that the limited effectiveness of Web-based interventions may be due to the restricted range of programs and updates that are available, which may not always be suitable to meet users' needs.Bennett et al [20] and Chang et al [19] indicated that Web-based treatments may be affected by low levels of familiarity and self-efficacy associated with managing Web technologies, as well as by limitations associated with access to the Internet.
Principal Results
To our knowledge, this systematic review of systematic reviews represents the first state-of-the-art analysis of Web-based intervention efficacy for weight loss and weight loss maintenance.According to the selection criteria, 20 systematic reviews were deemed eligible for inclusion.All 20 systematic reviews were published in 2005 or later.They mainly investigated Web-based interventions for weight loss, with only a few investigating Web-based interventions for weight loss maintenance.Findings from the meta-systematic review regarding Web-based interventions for weight loss and weight loss management were mixed; in fact, the findings within the included systematic reviews are often conflicting, particularly in relation to the efficacy of Web-based weight loss interventions.The conflicting results are likely due to the notable heterogeneity of inclusion criteria across the systematic reviews for selecting primary studies.Nevertheless, all the included systematic reviews demonstrated methodological rigor (R-AMSTAR score ≥22), although none received the highest possible score for methodological quality.Specifically, Hartmann-Boyce et al [27] was the only systematic review that fully met the 11th R-AMSTAR criterion of disclosing conflicts of interest, ensuring the validity of the systematic review results.Indeed, by not declaring conflicts of interest, it is impossible to rule out the existence of publication bias.The synthesis of the included systematic reviews identified both strengths and weaknesses of the Web-based interventions for both weight loss and weight loss maintenance.Web-based interventions may facilitate continuous and automated tracking of health-related behaviors by supporting self-regulatory techniques, patient involvement, and patient commitment to treatment.Moreover, Web-based connectivity permits the sharing of information among health professionals and peers.However, the efficacy and dissemination of Web-based interventions may be affected by the gap in access to computers and the Internet, as well as the lack of technological literacy among potential users.
Limitations
In conducting this systematic review of systematic reviews, it was sometimes difficult to make a clear distinction between Web-based interventions (delivered over the Internet) and computer-based interventions (delivered over the Internet or by installing computer software) because these terms are often used interchangeably or defined differently [18].It was also difficult to compare the overall effects across systematic reviews since they were calculated differently (ie, weighted mean difference vs standardized mean difference).Furthermore, conclusions of a second-order review are not drawn from results of primary studies, but from reviews that have synthesized the results of primary studies.Because the same primary studies were often included in more than one systematic review, not only did this overrepresentation prevent meta-meta-analysis of the efficacy of Web-based interventions for weight loss and weight loss maintenance, it also compromised the accuracy of the meta-systematic review findings, thus affecting the actual reliability of findings based on second-order data synthesis.In addition, because the number of primary studies on which each systematic review was based varied substantially (from 1 to 25), the findings from some systematic reviews were based on more evidence than the findings of other systematic reviews.Given the limitations associated with this meta-systematic review, the conclusions should be interpreted with some caution.
Are Web-Based Interventions for Weight Loss and Weight Loss Maintenance Effective?
This systematic review of systematic reviews concludes that Web-based interventions for weight loss are often more effective than minimal treatments (only Neve et al [14] reached different conclusions); however, when compared with non-Web-based or hybrid interventions, results appear inconsistent across reviews. More encouraging results in terms of weight loss were obtained when Web-based interventions were enhanced (ie, more interactive and tailored) than when they were basic (ie, an information website). Nevertheless, Web-based interventions for weight loss were less effective than face-to-face interventions across the selected reviews. Results were more encouraging in relation to Web-based weight loss maintenance interventions, which were found to be more effective than minimal interventions across all the reviews and, in some reviews, as effective as their non-Web-based counterparts. The decision of whether or not to substitute an in-person intervention for weight loss maintenance with a comparable Web-based treatment mainly depends on patient costs, needs, and preferences. These conclusions should be considered cautiously. Reported effect sizes were small; for example, weight loss of 1 to 2 kg may not be clinically significant, irrespective of the significance level. Also, conclusions might be affected by heterogeneity across primary studies. In fact, research designs differed in terms of type of intervention, sample size, duration, control condition, and so on. Although the conclusions from this meta-systematic review are of significant interest, the real impact of Web-based interventions for weight loss remains unclear, suggesting the need for greater clarity in both the definition and the specificity of the different types of Web-based treatments available, as well as in how each intervention can best be matched to users' needs. Further evidence is therefore necessary.
Suggestions and Implications for Future Research
Authors interested in providing a new summary review of the literature on the efficacy of Web-based interventions for weight loss and weight loss maintenance for obese and overweight patients can refer to the list of included records reported in Multimedia Appendix 2. A total of 83 primary studies investigating the effectiveness of online interventions for weight loss and weight loss maintenance were identified, analyzed, and compared across systematic reviews. A single study-level review of these primary studies that pinpoints differences and inconsistencies across the primary studies would be beneficial. In addition, a meta-analysis of these primary studies would provide a quantitative summary of the efficacy of Web-based treatments for weight loss and weight loss maintenance.
Future systematic reviews should provide a high level of detail when reporting primary study effect sizes.Specifically, detailed information about the nature of the comparison conditions (especially for instances in which there are multiple comparison conditions) and the various types of efficacy outcomes is necessary to allow other researchers and practitioners to more clearly interpret the results and to facilitate replication of these studies.Also, the effects of Web-based interventions for weight loss and Web-based interventions for weight loss maintenance should not be compared with each other [15] because they differ in both aims and outcomes.In addition, researchers and practitioners should carefully consider the cost of Web-based intervention; although technology-based treatments are fundamental in reducing health care costs, cost-effectiveness is often not adequately evaluated (if at all) in comparing Web-based and face-to-face interventions.Therefore, interventions should be compared in terms of both efficacy and cost-effectiveness (see Raaijmakers et al [31] and Wieland et al [18] for examples of reviews that evaluated treatment efficacy and cost-effectiveness).
Table 1 summarizes the characteristics of each included systematic review. Overall, 10 [12-21] of the 20 systematic reviews examined the effects of Web-based interventions for weight loss and/or weight loss maintenance, whereas the other 10 systematic reviews [22-31] examined the effects of both Web-based and traditional interventions for weight loss and/or weight loss maintenance.
9 ( 9 )
Only adults, RCT, BMI≥25, primary outcome weight loss PubMed Summarize the state of the science of Internet-delivered Arem and Irwin, 2011 [13] or weight loss management, weight loss interventions website or Web-based programming and highlight their strengths and weaknesses 22 (3) In USA, published 1989-2009, studies on effect and use of self-monitoring Medline, PsycINFO Evaluate the effect of selfmonitoring diet, physical activity level, and weight management program on Burke et al, 2011 [25] weight loss in behavioral treatment studies 25 (25) Only adults, RCT, published in peer-reviewed journal, primary PubMed, PsycINFO, CL, NIH Evaluate the effectiveness of Web-based interventions for weight loss and weight loss management Manzoni et al, 2011 [16] outcome weight loss or weight loss management 23 (23) Only adults, RCT, BMI≥25, website or Web-based programming Medline, EMBASE Review the weight loss or weight loss management effect of the Internet component in obesity treatment programs Kodama et al, 2012 [15] 11 (11) Only adults, RCT, BMI≥25, used computer/interactive tech-Medline, CC, CINAHL, PsycINFO Evaluate the impact of computer-based technology on interventions for weight loss Reed et al, 2012 [17] nologies, primary outcome weight loss or weight loss management, control group received non-computer-based intervention 18 (18) Only adults, BMI≥25, includes RCTs or quasi-RCTs, primary CC, Medline, EMBASE, CINAHL, LILACS, PsycINFO Assess the effect of interactive computer-based interventions for weight loss or weight loss management Wieland et al, 2012 [18] outcome weight loss or weight loss management, website or Web-based programming, lasted ≥4 weeksXSL • FO RenderX Included (relevant) studies, n (n) Inclusion criteria of studies Searched databases a Aim of the review Author(s), publication year 24 (6) Only adults, BMI>28, primary outcome weight loss or weight loss management, only male participants CINAHL, EMBASE, Medline, PsycINFO, PubMed, Sport Discus, Scopus, Web of Science Investigate the effectiveness of weight loss and weight loss management interventions and identify the characteristics associated with effectiveness Young et al, 2012 [26] 20 (20) RCT, published in peer-reviewed journal, primary outcome weight loss or weight loss management, social media component PubMed, PsycINFO, EM-BASE, Web of Science, Scopus Describe the use and impact of social media in online weight management program Chang et al, 2013 [19] 5 (5) RCT, BMI≥25, primary outcome weight loss or weight loss management, website or Webbased programming, control either waitlist or standard waiting treatment, psychologically based intervention for behavioral modification Medline, PsycINFO, Psyndex Investigate the effectiveness of Web-based psychological interventions for weight loss Grunenberg et al, 2013 [21] 6 (6) Only adults, in USA, English, BMI≥25, primary outcome weight loss or weight loss management, used computer/interactive technologies PubMed, EMBASE, CINAHL, CL, Web of Science Evaluate the efficacy of eHealth weight management programs Bennett et al, 2014 [20] computer/interactive technologies, ambulatory setting PubMed, Medline, EM-BASE, CD, CC Examine technology-assisted weight loss interventions and highlight innovation, impact, and pragmatism Levine et al, 2015 [30] 27 (12) Only adults, BMI≥25, used computer/interactive technologies, primary outcome weight loss or weight loss management PubMed, PsycINFO, Web of Science, Science Direct, CINAHL, EMBASE Evaluate the effectiveness of technology-based 
interventions on weight loss and quality of life Raaijmakers et al, 2015 [31] a BIOSIS: BIOSIS Preview; CC: Cochrane Central; CCTR: Centre for Care Technology Research; CD: Cochrane Database of Systematic Reviews; CINAHL: Cumulative Index to Nursing and Allied Health Literature; CL: Cochrane Library; CP: Cochrane Public Health Group and Evidence for Policy and Practice Information Centre; DARE: Database of Abstracts of Reviews of Effects; HT: Health Technology Assessment database; DR: Database of Abstracts of Reviews and Effects; LILACS: Latin American and Caribbean Health Sciences Literature; NIH: The National Institutes of Health; NIHCT: National Institutes of Health Clinical Trials database; RC: review of company Web sites; SCI: Science Citation Index.
found that computer-based technology led to significantly less weight loss than comparable interventions. Therefore, the research on the efficacy of Web-based interventions compared to similar non-Web-based interventions is inconclusive. This lack of consistency may be due to the large heterogeneity of non-Web-based comparison interventions in the primary studies. For example, the non-Web-based comparison interventions ranged from manualized interventions to a counseling program in the studies included by Raaijmakers et al [31].
vs basic Web-based interventions; Web vs face-to-face: Web-based intervention vs face-to-face intervention; intervention with Web vs intervention without Web: adding a Web-based component to an intervention vs the same intervention without the Web-based component; Web vs non-Web: Web-based interventions vs non-Web-based comparable interventions.
searched databases; (4) inclusion criteria; (5) number of included studies; (6) overall sample size and participant age, gender, race, and BMI; (7) overall length of treatment, including follow-up time points; (8) country in which the interventions were developed; and (9) outcomes of the interventions.
Table 1 .
Characteristics of the included systematic reviews (N=20).
Table 3 .
Systematic review quality (N=20). Item 1: a priori design; item 2: duplicate study selection and data extraction; item 3: comprehensive literature search; item 4: publication status as an inclusion criterion; item 5: list of included and excluded studies; item 6: characteristics of included studies; item 7: documented assessment of the scientific quality of included studies; item 8: appropriate use of the scientific quality in forming conclusions; item 9: appropriate use of methods to combine study findings; item 10: assessment of publication bias likelihood; item 11: conflict of interest documentation. a
Table 4 .
A summary of meta-analyses.
RESTORATION AUTHENTICITY OR REALITY - A CASE STUDY
In recent decades, the Venice Charter of 1964 [1] has provided the guiding principles for the conservation and restoration of ancient monuments. However, many interpret these principles as applying to historic structures in general, and not just monuments. The articles in the Restoration section of the Charter contain several interesting statements that are open to interpretation. In many cases, these statements cause a conflict of priorities, especially with funding being the overriding issue. In addition, local and national heritage agencies sometimes take a more liberal approach to restoration, particularly regarding authenticity. The statements under discussion are quoted in the sections that follow.
INTRODUCTION
The question of authenticity in preservation has been debated for decades. Prior to 1964 there was no general consensus on what degree of authenticity was appropriate. What began as conservation of cultural heritage has since expanded into what is now considered historic preservation. The forms include such objects as archaeological sites, artistic sculptures and paintings, cultural landscapes, buildings, and monuments.
Boito in 1883 presented a series of preferences for dealing with cultural properties to an Italian technical congress of architects and engineers [2]. The preferences adopted included a top priority to consolidation over repair. Next in priority would be repair rather than restoration. Proposed interventions were to be both identifiable and labelled as modern. Any elements or features that were to be removed should be documented and preserved for display at the site. He acknowledged that there may have already been renovations or additions subsequent to the original construction that were now a part of the history of the site. Those renovations and additions might be deemed inferior or might effectively hide the original construction.
1931 Athens Charter
Followers of Boito were instrumental in creating the first Athens Charter in 1931 [3] which adopted many of Boito's ideas.Specifically, Section IV. -RESTORATION OF MONUMENTS states: "The experts heard various communications concerning the use of modern materials for the consolidation of ancient monuments.They approved the judicious use of all the resources at the disposal of modern technique and more especially of reinforced concrete.
They specified that this work of consolidation should whenever possible be concealed in order that the aspect and character of the restored monument may be preserved.
They recommended their adoption more particularly in cases where their use makes it possible to avoid the dangers of dismantling and reinstating the portions to be preserved." No specific techniques or methods were proposed, but the experts highlighted their concern over the potentially damaging use of reinforced concrete. They did recommend "That, in each country, the architects and curators of monuments should collaborate with specialists in the physical, chemical, and natural sciences with a view to determining the methods to be adopted in specific cases;". Scientific determination was essential to deciding a course of action. Interestingly, authenticity is not discussed.
1964 Venice Charter
Possibly reacting to the reconstruction of buildings and monuments following two world wars, the 1964 Venice Charter [1] professed to save our heritage through the preservation of authenticity. It provided a tool for fervent preservationists to limit uncontrolled development.
Generally, only preservationists study such documents.Since they are not legally binding, local implementation is highly dependent on local advocates.
The Restoration articles set new standards for preservation and authenticity.
"ARTICLE 9.The process of restoration is a highly specialized operation.Its aim is to preserve and reveal the aesthetic and historic value of the monument and is based on respect for original material and authentic documents.It must stop at the point where conjecture begins, and in this case moreover any extra work which is indispensable must be distinct from the architectural composition and must bear a contemporary stamp.""ARTICLE 10.Where traditional techniques prove inadequate, the consolidation of a monument can be achieved by the use of any modem technique for conservation and construction, the efficacy of which has been shown by scientific data and proved by experience." "ARTICLE 12. Replacements of missing parts must integrate harmoniously with the whole, but at the same time must be distinguishable from the original so that restoration does not falsify the artistic or historic evidence." Subsequently numerous efforts have been made to refine the intent of authenticity including the 1965 UNESCO Archaeological Guidelines, the Burra Charter, the Declaration of Oaxaca, the Florence Charter, the Washington Charter, the Nara Document, the Charter of Brasilia, this Declaration of San Antonio, etc. Several of these are discussed in the following sections.
1994 Nara Document on Authenticity
In 1994, authenticity was the topic at the Nara (Japan) conference organized by the Japanese government in cooperation with the United Nations Educational, Scientific and Cultural Organization (UNESCO), the International Centre for the Study of the Preservation and Restoration of Cultural Property (ICCROM), and the International Council on Monuments and Sites (ICOMOS). It critiqued the Venice Charter and its treatment of authenticity. The document [6] addresses authenticity "in response to the expanding scope of heritage concerns and interests in our contemporary world".
Article 11 states "All judgements about values attributed to cultural properties as well as the credibility of related information sources may differ from culture to culture, and even within the same culture. It is thus not possible to base judgements of values and authenticity within fixed criteria. On the contrary, the respect due to all cultures requires that heritage properties must be considered and judged within the cultural contexts to which they belong." This all but removes fixed requirements for authenticity, deferring judgement to cultural context. From this, countries were encouraged to develop their own criteria for dealing with preservation and authenticity.
1996 Declaration of San Antonio
In March 1996, the InterAmerican Symposium on Authenticity in the Conservation and Management of the Cultural Heritage was held in San Antonio, Texas, USA by the ICOMOS National Committees of the Americas to address the meaning of authenticity in preservation in the Americas.The Nara document was reviewed and critiqued.Recommendations were made to modify it by issuing a declaration [7].
When discussing authenticity and materials, it was stated that "...there are important sectors of our patrimony that are built of perishable materials that require periodic replacement in accordance with traditional crafts to ensure continued use. Similarly, there are heritage sites built of durable materials but that are subject to damage caused by periodic natural catastrophes, such as earthquakes, floods and hurricanes. In these cases, we also assert the validity of using traditional techniques for their repair, especially when those techniques are still in use in the region, or when more sophisticated approaches would be economically prohibitive." Thus there was an affirmation of protecting cultural heritage without limiting it to authentic restorations.
RESTORATION IN THE UNITED STATES
The United States has numerous agencies that oversee historic preservation.At the national level, the National Park Service of the U.S. Department of the Interior controls historic preservation of national sites through the National Trust for Historic Preservation.Generally, each state has its own agency for state historical sites and finally local governments can have a regional agency.Often, determining the appropriate agency and designation is a challenge for any consultant.Several agencies are discussed as follows.Each has taken a local pragmatic approach to preservation.
New York City
The Landmarks Preservation Commission is a charter-mandated New York City commission.It is the largest municipal preservation agency in the United States.Created in 1965, it was formed to combat losses of historically significant buildings in New York City.According to the Landmarks Law [4], "the purpose of safeguarding the buildings and places that represent New York City's cultural, social, economic, political, and architectural history is to: • Stabilize and improve property values • Foster civic pride • Protect and enhance the City's attractions to tourists • Strengthen the economy of the City • Promote the use of historic districts, landmarks, interior landmarks, and scenic landmarks for the education, pleasure and welfare of the people of the City."The 1964 Venice Charter is not mentioned in the law creating the commission although advocates probably were aware of its existence.Yet, there is no mention of maintaining authenticity as the leading component of preservation.It emphasizes financial reasons as a major driving force.
The commission operates under a set of rules [5].Section 2-11 includes Repair, Restoration, Replacement and Re-Creation of Building Façades and Related Exterior Elements.Authenticity is addressed in Subsection (b)(3) "In all cases, except where noted, the repair, restoration, replacement or re-creation must match the original or historic materials and features in terms of its physical and aesthetic characteristics, including design, detail, profile, dimension, material, texture, tooling, dressing, color and finish, as applicable."In Subsection (b)(2), it states "Where replacement of large quantities of materials and/or significant architectural features is proposed, the applicant must provide an assessment of the deteriorated conditions warranting such replacement(s).Repair will be given priority over replacement if feasible."So, priority is given to repair over replacement but the rules provide a path to replacement.Subsection (d)(1) requires "Replacement materials and features should match the original or historic material or feature in terms of physical and aesthetic characteristics.For purposes of this subdivision, this means that replacement material should be "in-kind" in terms of using the actual original or historic material and installation techniques.In-kind replacement should be prioritized and fully considered prior to proposing substitute materials."While in-kind materials are a priority for replacement, substitute materials are allowed under another section.
Chicago
Chicago is another major US city with a history of protecting its historic structures.The Commission on Chicago Landmarks was created in 1968.The program [8] addresses exterior qualities of buildings that are "significant historical or architectural features".Chicago took the approach of basing its guidelines on the U.S. Secretary of the Interior's Standards for Rehabilitation of Historic Buildings [9] and extending them.Among their objectives they list "To identify, preserve, protect, enhance, and encourage continued utilization and the rehabilitation of such areas, districts, places, buildings, structures, works of art, and other objects having a special historical, community, architectural, or aesthetic interest or value to the City of Chicago and its citizens".
Several specific aspects of the standards include: "Distinctive features, finishes, and construction techniques or examples of craftsmanship that characterize a historic property shall be preserved.
Deteriorated historic features shall be repaired rather than replaced.Where the severity of deterioration requires replacement of a distinctive feature, the new feature shall match the old in design, color, texture, and other visual qualities and, where possible, materials.Replacement of missing features shall be substantiated by documentary, physical, or pictorial evidence.
Chemical or physical treatments, such as sandblasting, that cause damage to historic materials shall not be used. The surface cleaning of structures, if appropriate, shall be undertaken using the gentlest means possible." As with New York, we see a desire to prioritize the preservation of historic features. However, for replacement of deteriorated features, both cities chose to require the imitation of the original features and not distinguish them as was proposed in the Venice Charter. This is in keeping with both the Nara Document and the San Antonio Declaration, which essentially suggest self-determination of authenticity.
CASE STUDY-FAÇADE RESTORATION OF 1889 BROWNSTONE
This project was completed in 1996. It occurred during a time period when the Nara Document and the San Antonio Declaration were redefining whether restorations needed to be authentic. It is not clear whether the redefinition by these organizations was groundbreaking or actually a reflection of what communities and cultures were already doing.
Specifically, the city and state where this project occurred had been requiring authentic restoration; repairs were to be performed using original materials and original techniques. This project was the first known departure from authentic restoration for a historic residential property.
Figure 1 shows the building in 1996 and in its current condition. The primary difference is attributed to the photography and daylight. The restoration was documented previously [10].
Background
The building received local historic status in the 1970s.It has ornate stone carving from base to the roof that is unique.The brownstone is ornamental and overlays a brick structure.
The Owner purchased the building c. 1973 but by 1991, pieces of the brownstone (sandstone) elements had fallen from the building and safety concerns were growing.The Owner started inquiries as to how to restore the façade.Aided by HAF, the local preservation organization, efforts were made to obtain grant funding but with no success.However in 1992, HAF was able to attract a number of preservation specialists to a Sandstone Colloquium which included a hands-on assessment of the building.
Following a day of façade examination by the specialists, over 50 attendees met to discuss the specialists' findings.To the dismay of the Owner, there were many ideas and the predominant recommendation was to add sidewalk protection in the short term.Long term they proposed that most of the brownstone be removed and replaced with new carved pieces.The projected repair cost was $250,000 to $500,000 which greatly exceeded the building value.This was beyond the means of the retired Owner on a pension.
The sidewalk scaffolding was quickly added and the Owner continued to seek funding.Finally in 1995, the deterioration accelerated and material losses were far worse; the Owner became desperate.City building officials were demanding action.
A mason restoration contractor who attended the initial symposium and who provided the sidewalk protection stepped in to offer assistance by meeting with HAF and the building officials.
Historic buildings were expected to be restored using original materials and techniques.But the cost was beyond the Owner's ability to fund so negotiations with the city officials yielded an option: Remove the brownstone and plaster coat the brick without any ornamentation.
The intent was to provide a safe façade even if it meant losing the aesthetic character of the facade.The contractor estimated this might cost $50,000 to $60,000.The Owner agreed to this level of funding for a budget.
Contrary to local practice, the contractor proposed an alternate solution within the budget of $60,000. They would first remove the severely deteriorated brownstone and stabilize the areas still intact. Then they would re-evaluate the budget. The remaining funds would be dedicated to replacing the deteriorated brownstone with a brownstone patching material that could be carved to replicate the ornamentation. Patches would be anchored with stainless steel pins and wire. If funding was insufficient, plaster would be used as proposed by the city and some aesthetic features would be lost. This would meet the city officials' goal to stabilize the façade yet restore some of the features. The city officials approved this concept and issued their first-ever building permit for a project without knowing what the final appearance of the building would be.
Restoration
The Contractor recommended the Owner hire the author as her restoration engineer, and the team was created. Scaffolding was first erected and a hands-on survey was performed. The problems discovered were related to inferior stone from the quarry and long-term deterioration. Figure 2 shows a sampling of the damage: a) sill damage, b) exfoliation of vertical pier, c) underside of sill, and d) dentil damage. The damage was removed to sound material and patched. Damage was scattered throughout the façade. Details were developed cooperatively between the Engineer, the Contractor, and the selected material supplier [11]. The Owner anxiously watched the work proceed daily from across the street. The mason craftworker that installed and carved the patches was quite expert. He replicated the details exactly. In the end, the restoration contractor was able to repair all the deteriorated areas within the budget. The Owner, the HAF, and city officials were very pleased with the results. Their leap of faith in trusting the contractor was justified.
Performance
The façade condition in Figure 3 shows that the restoration has performed well for over 23 years.There is deterioration visible below a window sill (arrow, Figure 4).The brownstone is delaminating at the interface with the patches; it's likely the original deteriorated brownstone might not have been removed deep enough before installing the patch at this location.The current deterioration will need repairs in the coming years.The ornamentation of the synthetically-patched façade is nearly undetectable under most climatic conditions.During rainy weather, the patches are evident since they do not absorb water the same as the brownstone (slightly visible in Figure 3).However since the façade faces the sun, it dries relatively quickly and the patches blend in again.
SUMMARY/LESSONS LEARNED
- While the case study demonstrates a successful synthetic restoration that has performed well for over two decades, the real point here is that the concept of maintaining authenticity was challenged for practical reasons. Authentic replacement was cost prohibitive and the Owner did not want to have a non-descript face for her building. Reality for the Owner was that she wanted to enjoy her building, and that meant restoring the façade aesthetics even if they were not authentic.
- The restored facade maintains the value of the property and fits within the historic context of the neighborhood. Simply stabilizing the façade would have maintained public safety but would not have produced a culturally acceptable solution.
- As a society, authenticity should be the highest priority for our restorations, but not at the expense of losing the very fabric of the buildings and monuments that we enjoy. As previously noted, this project coincidentally occurred within the time period where authenticity was being challenged by the principles of the Nara Document and the San Antonio Declaration. Communities (like New York City and Chicago, for example) and owners were deciding what was important culturally and allowing alternate materials and modern techniques to be used in restorations. This self-determination now seems to be a mainstay of most preservation regulations in the United States.
- Today, there are synthetic restoration products that have decades of use on which to be judged. That was not the case in the 1990s.
- From my experience, engineers are not qualified to make an informed decision on accepting new materials that do not have a long history of use without the assistance of specialists. On this case study project, the owner of the company supplying the patching material was a chemist who provided the material expertise, and an expert restoration contractor could assess the material. They were integral to judging the restoration design and making the best decisions given the limited budget.
- Judging the success of a restoration project can only be done through time. While heralded as a success in 1996, time has given us more data. Today, we know more about various synthetic products and proper installation techniques. The material used on this restoration has proven itself on numerous projects over the years and continues to be selected by restoration professionals.
TNFR2 and Regulatory T Cells: Potential Immune Checkpoint Target in Cancer Immunotherapy
TNF has both proinflammatory and antiinflammatory effects. It binds to two structurally related but functionally distinct receptors TNFR1 and TNFR2. Unlike TNFR1 that is ubiquitously expressed, TNFR2 expression is more limited to myeloid and lymphoid cell lineages including a fraction of regulatory T cells (Treg). In general, TNFR1 is responsible for TNF-mediated cell apoptosis and death, and mostly induces proinflammatory reactions. However, TNFR2 mainly leads to functions related to cell survival and immune suppression. Treg play an indispensable role in maintaining immunological self-tolerance and restraining excessive immune reactions deleterious to the host. Impaired Treg-mediated immune regulation has been observed in various autoimmune diseases as well as in cancers. Therefore, Treg might provide an ideal therapeutic target for diseases where the immune balance is impaired and could benefit from the regulation of Treg properties. TNFR2 is highly expressed on Treg in mice and in humans, and TNFR2+ Treg reveal the most potent suppressive capacity. TNF-TNFR2 ligation benefits Treg proliferation, although the effect on Treg suppressive function remains controversial. Here, we will describe in detail the TNF-mediated regulation of Treg and the potential clinical applications in cancer immunotherapy as well as in autoimmune diseases, with the focus on human Treg subsets.
CD4+FOXP3+ regulatory T cells (Treg) have an indispensable role in maintaining immune homeostasis and immune tolerance. They control unwanted immune responses that are involved in the regulation of immune tolerance to self as well as to foreign antigens. Loss-of-function mutation in the FOXP3 locus, a gene encoding the Treg lineage transcription factor FOXP3, leads to multiorgan-associated autoimmunity. Abnormal numbers of Treg and/or impaired suppressive function of Treg are often found in various autoimmune diseases like type 1 diabetes (T1D) [1], multiple sclerosis (MS) [2], rheumatoid arthritis (RA) [3], psoriasis [4][5][6], and systemic lupus erythematosus (SLE) [7][8][9]. On the other hand, tumor-infiltrating Treg generally show potent suppressive functions, indicating that they regulate tumor-specific immune responses and might help tumor immune escape [10]. It seems logical to use Treg as a therapeutic target for diseases where the immune balance is impaired and could benefit from the regulation of Treg properties. Nevertheless, due to the intrinsic properties of Treg, i.e., heterogeneity and plasticity, several key questions need to be clarified before making Treg an ideal candidate for clinical applications.
Tumor necrosis factor (TNF) is initially expressed on the cell surface as a membrane-bound cytokine (mTNF), which can be cleaved by a metalloprotease, TNF converting enzyme (TACE), to generate the soluble form of TNF (sTNF) [11]. TNF binds to two receptors, TNF receptor 1 (TNFR1) and 2 (TNFR2). In contrast to TNFR1, TNFR2 expression is restricted to certain cell types including lymphocytes [12]. TNF-TNFR1 interaction mostly induces proinflammatory reactions, whereas TNFR2 generally mediates the suppressive functions of TNF [13]. It is known that TNFR2 is constitutively expressed on both murine and human Treg, and TNFR2+ Treg are the most suppressive Treg subpopulation [14][15][16][17]. The effect of TNF on Treg suppressor function remains controversial. In this chapter, we will describe in detail the TNF-mediated signal transduction pathways, their effect on Treg cells, and the potential clinical applications in various immunopathologies.
Regulatory T cells and its plasticity
Treg exert their function in primary and secondary lymphoid organs and nonlymphoid tissues. FOXP3, as the lineage transcription factor of Treg, facilitates Treg thymic development by stabilizing its own expression and inhibiting transcription factors needed for the development of other helper T-cell (Th) lineages like T-bet for Th1, GATA3 for Th2, and RORγt for Th17 cells [18]. Next to FOXP3, Treg constitutively express a high level of the IL-2 receptor α chain (CD25) and a low level of the IL-7 receptor α chain (CD127) compared to human activated non-Treg. The combination of CD4+, CD25high, and CD127low has been used to isolate Treg for functional studies and for adoptive immunotherapy [19]. However, no unique Treg marker has been identified so far, although many molecules are proposed. These Treg-related cell markers include CD27 [20], CD62L [21], CTLA4 (cytotoxic T-lymphocyte-associated protein) [22], CD39 and CD73 ectoenzymes [23], Helios [24], Neuropilin-1 [25], HLA-DR [26], and the most recently identified combination of TIGIT and FcRL3, which results in the identification of human Helios+ memory Treg [27].
Compelling evidence indicates that both mouse and human Treg consist of various subpopulations and have a more or less plastic phenotype depending on the microenvironment they are in [28]. Based on the site of Treg generation, two major Treg subsets are classified, namely, thymus-derived Treg (tTreg) that develop in the thymus from CD4 single positive thymocytes which in general display high-affinity self-reactive T-cell receptors (TCRs), and peripherally induced Treg (pTreg) which emerge in the periphery from conventional CD4+ T lymphocytes (Tconv) in response to environmental antigens and tolerogenic stimuli. Studies in mice have shown that pTreg and tTreg are both required for full protection against colitis and lymphoproliferative disease [29,30], indicating that these two Treg subsets play distinct roles in protecting against immunopathology. However, the relative contribution of tTreg and pTreg in human immune tolerance remains a major unresolved issue, partially due to the lack of specific markers to definitively distinguish them. In fact, the transcription factor Helios was the first marker proposed to distinguish both mouse and human tTreg from pTreg [31]. However, this has been disputed by studies showing that Helios can also be expressed by activated Tconv [32] and by pTreg upon in vitro and in vivo stimulation [33], precluding its use as a tTreg-specific marker. Another cell surface marker that has been proposed to harbor the specificity necessary to distinguish between murine tTreg and pTreg is the coreceptor Neuropilin-1 [25]. Unfortunately, human Treg do not uniquely express Neuropilin-1 [34].
TNF/TNFR signaling pathways
TNF is firstly discovered as an inflammatory cytokine that is induced by the endotoxin [35]. Various immune cells produce TNF including macrophages, monocytes, dendritic cells, B cells, activated natural killer cells, and activated T cells. TNF is initially expressed on the cell surface as a trimeric type II transmembrane protein mTNF, which is then cleaved by the metalloproteinase TACE (also known as ADAM17) and released as soluble extracellular sTNF [36]. Both forms of TNF are present as bioactive homotrimers. There exist two structurally related but functionally distinct receptors, TNFR1 (p55) and TNFR2 (p75). TNFR1 is ubiquitously expressed on most mammalian cell types, and it binds to mTNF as well as sTNF, whereas TNFR2 expression is restricted to immune cells, neurons, and endothelial cells. TNFR2 binds with higher affinity to mTNF than sTNF compared to TNFR1. TNFR1 and TNFR2 share the similar extracellular TNF-binding motifs but differ in their intracellular domains. Both receptors lack intrinsic enzyme activity; thus, upon the ligand binding, they need to recruit the cytosolic proteins to initiate the intracellular signal transduction. Specifically, TNFR1 contains a homologous intracellular region called "death domain", which preferentially interacts with the adaptor protein named TNFR1-associated death-domain (TRADD) protein [37]. TRADD further recruits another two adaptor proteins, receptor interacting protein kinase 1 (RIPK1) and TNFR-associated factor (TRAF) 2, thus forming an enzymatic complex signalosome, which is also known as signaling complex 1. One of the main targets of the complex 1 is the enzyme complex called IkB kinase (IKK). Phosphorylation of IKK in turn leads to the canonical activation of the transcription factor NFkB as well as members of the family of MAPKs such as c-jun kinase (JNK) and p38 MAPK. The TRADD containing signaling complex 1 may further be converted to a death-inducing signaling complex, so-called complex 2, by adaptor protein Fas-associated protein with death domain (FADD). The complex 2 is able to further initiate downstream caspase cascades, thus inducing cell apoptosis and cell death [37].
The pathways induced by TNFR2 are slightly different from TNFR1. Due to the lack of death domain, TNFR2 is unable to recruit TRADD protein, but it can directly interact with TRAF2 [38]. In contrast to TNFR1 that drives apoptosis and cell death, TNFR2 induces the noncanonical activation of NFκB via the activation of the NFκB-inducing kinase (NIK), which further leads to the phosphorylation of IKKα and the processing of p100, a crucial step in the nuclear translocation of p52/RelB [38,39]. Interestingly, TRAF2 binding to TNFR2 is considerably weaker than its binding to TRADD protein. Upon binding to TRAF2, TNFR2 could also recruit cIAP1/2 proteins [39] that are involved in the TNFR1-mediated NFκB activation, indicating that there exists a crosstalk between TNFR1 and TNFR2 pathways. Another interesting adaptor protein called endothelial/epithelial protein tyrosine kinase (Etk) interacts with the C-terminal domain of TNFR2 in a ligandindependent manner [40]. TNFR2-mediated Etk phosphorylation is able to partially activate the growth factor receptor VEGFR2, which in turn results in the activation of PI3K/Akt pathway and cell survival.
A number of proteins are essential for the negative regulation of the TNF-TNFR pathways. A20, also known as TNF alpha-induced protein 3, is one of the most studied negative regulatory proteins. A20 is a ubiquitin-editing enzyme. It limits NFκB signaling after activation by TNF [41]. Consistent with this, A20-deficient mice are hypersensitive to TNF exposure and die perinatally because of severe inflammation and multiorgan failure [42]. Intriguingly, A20 has recently been shown to regulate the de novo generation of tTreg in a cell-intrinsic manner, while the suppressor function of A20-deficient Treg is unchanged in vitro [43].
Effect of TNFR2 on Treg
Although TNFR1 expression is not different between Treg and non-Treg cells, human Treg constitutively express high levels of TNFR2 compared to CD25-Tconv. Moreover, TNFR2+ Treg reveal the most potent suppressive capacity [14,44]. The effect of TNF on Treg suppressor function remains controversial. Several groups including ours demonstrated that sTNF preserved or even increased FOXP3 expression as well as Treg suppressive capacity in both mice and humans [15,[45][46][47]. The TNF-TNFR2 is crucial for sustaining FOXP3 expression and maintaining the stability of murine Treg in an inflammatory environment [44]. A similar phenomenon is also observed for human Treg in vitro [48]. There is also evidence for the negative effects of TNF on Treg function. Studies show that TNF impairs Treg function by reducing FOXP3 expression or enhancing its dephosphorylation [47,49]. In clinical practices, RA patients responding to anti-TNF antibody adalimumab showed an increased percentage of FOXP3 + cells as well as the restored regulatory function [50]. It should be noted that the nature of the TNFR2 antibodies used in these studies was likely different (agonistic versus antagonistic) [46]. Recent studies highlight that TNFR2 agonisms and antagonisms might regulate the phenotype and the suppressor function of Treg in a complete different way [46]. TNF priming induces the proliferation and activation of Treg in vitro [15,51] as well as in vivo via TNFR2 in an acute mouse GvHD model [52]. Our group have found that stimulation of human Treg with a TNFR2-agonist antibody preserved a stable Treg phenotype and function after ex vivo expansion [48]. Using TNFR2 agonist only was enough to prevent the loss of FOXP3 expression, whereas the sustained hypomethylation of TSDR (Treg-specific demethylated region) of FOXP3 gene locus required both rapamycin and TNFR2 agonist, suggesting that stabilization of FOXP3 expression requires both mTOR and NFκB signal pathways. In vitro restimulation of TNFR2 agonist plus rapamycin-expanded Treg led neither to the loss of FOXP3 protein nor the enhancement of IL-17A production, especially under proinflammatory conditions, indicating a well-preserved Treg stability. TNFR2 knockout CD4+ T cells have increased expression of RORγt and IL-17 production, which is dependent on the impairment of TNFR2-mediated activation of NFκB [53]. We speculate that a similar process of regulation may exist in human Treg where TNFR2/NFκB signaling might act as a double-edged sword to enhance FOXP3 but also to inhibit RORγt expression, thus contributing to Treg stability. Another possible explanation is that TNFR2 engagement results in an autocrine TNF-TNFR2 loop, which further regulates the expression of histone methyltransferase EZH2 [51], a subunit of the polycomb repressor complex 2 (PRC2). EZH2 is known to bind to FOXP3 thus helping FOXP3 to regulate the gene transcriptional repression [54].
TNFR2 agonists and autoimmune diseases
Defects in the function of Treg as well as low numbers are the main properties of various autoimmune diseases. Therefore, restoring properly functional Treg, thus favoring the induction of immune tolerance, has become a final goal of treatment for patients with autoimmune diseases. As discussed above, ample studies show that TNF and/or TNFR2 agonism has the capacity to enhance Treg proliferation and activation. Furthermore, TNF-TNFR2 is essential to maintain Treg function and stability in the inflammatory environment [44,48]. Impaired TNF-TNFR signaling pathways occur in several human diseases including T1D, SLE, IBD, and MS. For instance, a single-nucleotide polymorphism (SNP) in the first intron is linked to a decreased level of TNFR2 in carriers of the SNP and a high risk of disease susceptibility [55]. T1D patients have higher TNFR2+ Treg compared to healthy controls. The rationale for using TNFR2 agonists as a therapeutic option for autoimmune diseases was first shown in T1D. Using blood from patients with T1D, a dose-response relationship between TNFR2 agonism and the destruction of pathogenic autoreactive CD8 T cells was observed [56], suggesting that inducing the TNF-TNFR2 pathway is an effective approach for selectively killing autoreactive T cells.
Currently used biologics targeting TNF include the anti-TNF antibodies infliximab, adalimumab, certolizumab, and the decoy receptor etanercept that binds to sTNF. Although they have a good safety profile, with increasing use of these drugs, paradoxical adverse events involving the skin, joints, and lungs have been described [57]. Skin manifestations are the most common adverse event and occur in about 25% of patients receiving anti-TNFs. The underlying mechanism is recently attributed to the TNFR2/A20 signal axis which is specifically responsible for TNF-mediated IL-17A inhibition [58]. Termination of NFκB activation is critical to prevent aberrant inflammatory responses. In memory CD4 T cells, A20 is identified as one of the strongest TNF-responsive genes with a strong inverse correlation to IL-17A expression.
TNFR2 antagonists and cancer immunotherapy
The tumor microenvironment preferentially recruits TNFR2+ Treg cells, which possess a highly immunosuppressive capacity, thus facilitating tumor immune escape. The improved immune responses to tumors observed in TNFR2 knockout mice might be caused by the lack of TNFR2-expressing Treg, by the failure to develop systemic autoimmunity [59], or by the decreased numbers and impaired function of MDSCs [60]. In humans, a high level of TNFR2+ Treg is found in the peripheral blood of lung cancer patients [10] and in the tumor-associated ascites of ovarian cancer patients [61]. Moreover, increased TNFR2 gene expression on Treg cells has been shown to be associated with exhaustion of CD8 cytotoxic T lymphocytes in metastatic melanoma patients.
In addition to being an inducer of Treg expansion, TNFR2 also acts as an oncogene, which has been identified on at least 25 tumor types. Enhanced expression of TNFR2 on the tumor itself has also been reported in, but is not limited to, human renal cell carcinoma, multiple myeloma, colon cancer, ovarian cancer, and cutaneous T-cell lymphomas (CTCL) [62]. In general, the overexpression of TNFR2 exploits this cytokine receptor for increased tumor cell proliferation and tumor growth. Genetic mutations/genomic gains of TNFRSF1B, the gene encoding the TNFR2 protein, occur in patients with Sézary syndrome (SS), a rare form of CTCL often refractory to treatment. SS is characterized by high expression of TNFR2 on the tumor cells and Treg. Such gain-of-function mutation in TNFR2 leads to enhanced noncanonical NFκB activation [63], a pathway primarily involved in cell expansion and growth. It seems desirable to apply an approach that could successfully inhibit potently suppressive Treg and also directly prevent tumor growth by using antagonistic molecules against TNFR2. Such TNFR2-specific blocking molecules would ideally inhibit Treg and permit Tconv proliferation and function, thus enabling the restoration of antitumor immune responses and the induction of tumor regression.
Strategies for blocking of TNF/TNFR2 signaling
A number of agonistic or antagonistic biological agents targeting TNF and/or TNFR2 have been developed. Two potent dominant TNFR2 antagonist antibodies have been developed by the group of Faustman et al. [64]. They report that these TNFR2 antagonists lock the TNFR2 receptor in the form of antiparallel dimers, which further prevents TNF binding as well as the intracellular scaffolding. Consequently, these dominant TNFR2 antagonists, even in the presence of TNF, kill Treg isolated from ovarian cancer ascites more potently than they kill Treg from healthy donors. Interestingly, TNFR2 antagonistic mAbs are also able to directly kill TNFR2-expressing ovarian cancer cell lines in vitro [64]. A similar effect is observed in another in vitro study where the cancer cells and lymphocytes were isolated from end-stage SS patients [65]. In mouse models of colon and breast cancers, combining a blocking TNFR2 antibody with a kind of immune stimulant markedly enhances the antitumor efficacy of immunotherapy through reducing the number of tumor-infiltrating TNFR2+ Treg and increasing the number of IFNγ-producing CD8 cells [66].
Some pharmacological agents are found to regulate TNF and/or its receptors expression. Thalidomide and its analogues prevent the surface expression of TNFR2 on activated T cells, which is associated with the inhibition of TNFR2 protein trafficking to the cell membrane [67]. Treating acute myeloid leukemia patients with azacitidine and lenalidomide, a thalidomide derivative can reduce TNFR2 expression on T cells as well as TNFR2+ Treg in vivo, leading to enhanced effector immune function [68]. Cyclophosphamide is a DNA alkylating agent. It is commonly used as a cytotoxic chemotherapy in cancer treatment. In a mouse model, it is shown that cyclophosphamide treatment depletes TNFR2+ Treg via inducing the death of replicating Treg that co-express TNFR2 and KI-67 [69]. A re-expansion of Treg from lymphodepletion suppresses the effective antitumor immunity developed after cyclophosphamide treatment. Intriguingly, blockade of TNF signaling using etanercept inhibits TNFR2+ Treg cell expansion during recovery from cyclophosphamide-induced lymphodepletion and markedly inhibits the growth of established CT26 tumors in mice [70]. Altogether, it suggests that a TNFR2-targeted approach to inactive host Treg, especially in only tumor microenvironment, may offer optimal options for antitumor immune reactions.
Conclusions
Many surface receptors of Treg are also expressed on other immune cells, with TNFR2 being a prominent exception, showing the highest density in the tumor microenvironment. TNFR2 is a functional receptor on Treg. Cell surface expression of TNFR2 not only identifies the most potent Treg subsets but is also a property of tumor-infiltrating Treg. TNFR2 expression on some cancer-infiltrating Treg is about 100 times higher than on circulating Treg in control subjects. In other types of cancer, the abundance of TNFR2+ Treg in peripheral blood is higher than in healthy controls. Targeting TNFR2 using small molecule agonists or antagonists is a promising but also a challenging task. Considering the suppressive property of Treg and its impaired functions in various immunopathologies, there is no doubt that novel (tumor-specific) antagonists against TNFR2 are promising for cancer immunotherapy. From the clinical utility point of view, the combination of TNFR2 inhibition with immune checkpoint inhibitors seems to be an attractive approach for reshaping modern cancer immunotherapy.
Adopting Curvilinear Component Analysis to Improve Software Cost Estimation Accuracy. Model, Application Strategy, and an Experimental Verification
Cost estimation is a critical issue for software organizations. Good estimates can help us make more informed decisions (controlling and planning software risks), if they are reliable (correct) and valid (stable). In this study, we apply a variable reduction technique (based on auto-associative feed-forward neural networks, called Curvilinear Component Analysis) to log-linear regression functions calibrated with ordinary least squares. Based on a COCOMO 81 data set, we show that Curvilinear Component Analysis can improve the estimation model accuracy by turning the initial input variables into an equivalent and more compact representation. We show that the models obtained by applying Curvilinear Component Analysis are more parsimonious, correct, and reliable.
INTRODUCTION
Cost estimation is a critical issue for software organizations.The need is to get the best estimate when planning a new project.Improving the prediction capability of software organizations is a way of improving their competitive advantage.Prediction is a major task when trying to better manage resources, mitigate the project risk, and deliver products on time, on budget and with the required features and functions.This is our motivation for proposing estimation improvement techniques for software organizations that need to increase their competitive advantage.
Estimates can help us make decisions that are more informed if, and only if, we can rely on the results to be accurate. We call a result accurate if it is reliable (correct) and valid (stable). Better estimates can be obtained by improving the estimation model. An estimation model is composed of some input variables (explanatory or independent variables), one output variable (the estimate or the dependent variable), and a function that calculates the output from the inputs. There are many ways of improving the estimates. For instance, we can choose: (1) a better function (e.g., one that describes more appropriately the relationship between inputs and output), and/or (2) more explanatory input variables. In the former case, we can choose the type of function, e.g., linear or logarithmic, that fits best. In the latter, the real problem is that, once we have selected the complete input variable set, we need to remove the redundancy that negatively affects the performance of the estimation model (e.g., irrelevant variables). In fact, a more parsimonious model (with fewer parameters) is preferable to one with more parameters because the former is able to provide better estimates with the same number of observations [13]. This task can be performed in many ways, e.g., shrinking the input set into an equivalent pattern or removing irrelevant variables (e.g., stepwise methods [9]). In this work, we use Curvilinear Component Analysis (CCA) as an input shrinkage technique, which produces (shrunken) data sets to which we apply ordinary least squares (OLS) [14]. We also define an application strategy to figure out whether CCA is worth using or not. The core of this strategy is based on auto-associative artificial neural networks [3, pp. 314-319] that, to the best of our knowledge, have never been applied in the context of software estimation [11], even though their applicability is well known in the image-processing field. In particular, we apply CCA to a COCOMO 81 data set [12] and OLS functions. Even though we apply CCA to the COCOMO 81 data set, the proposed methodology can be applied to any estimation model that uses past observations for prediction, such as machine learning, neural networks, and ordinary least squares functions, and to any quantity of interest (e.g., effort, fault proneness).
This paper is based on the research hypothesis that CCA can improve software cost estimation accuracy. We argue that a methodology improves accuracy if it improves the correctness and the variability does not worsen (i.e., variability is the same or better). It may happen that, if the correctness (bias) improves, the variability (spread) gets worse; in that case, we cannot argue that the accuracy is improved. For this reason, we investigate both the variability and the correctness of the estimation model. In order to investigate the research hypothesis we: (1) utilized two summary statistics as measures of bias and spread, calculated on the Relative Error (RE), where RE = (Actual Effort - Estimated Effort)/(Actual Effort), letting RE_i be an accuracy measure on the i-th project and T be the number of projects being estimated (test set), and using Mean(RE_i) and STD(RE_i), for i = 1 to T, to measure the estimation model correctness (bias) and variability (spread), respectively; (2) elaborated an application strategy; and (3) tested it by using a number of randomly selected models. In this research, we show that the used technology (CCA) provides a significant accuracy improvement, that is, it is more correct, with similar variability, than the estimates produced without applying any improvement methodology. However, it is possible that applying CCA to different data sets and models would lead to no improvement. This may happen when the data has no alignment into the space (this concept of "alignment" is explained in Section 3). So, we should apply different techniques for improving the accuracy, as there is no best technique for all situations. Improvement techniques are substantially divided into two not mutually exclusive sets: stepwise input variable removal [9,14], where the input variables are recursively taken out, and input variable reduction [7], where the input variables are transformed into a shrunken representation. CCA belongs to the latter and it is able to overcome some drawbacks of principal component analysis (PCA); this point is explained in Section 3. Note that one may decide to apply first CCA and subsequently a stepwise technique, only CCA, only stepwise, or no technique. The decision about whether and which technique to apply should be made based on the ratio between cost and effectiveness [14,3]. Unfortunately, the only approach for achieving the best result in terms of input variable reduction and accuracy is the exhaustive one. It consists of taking into account all of the possible configurations that can be obtained by combining the input variables. For instance, if we have N variables we can obtain 2^N different configurations. The problem is that this procedure cannot generally be applied because 2^N is usually a number too big to be handled. For this reason, we study affordable techniques such as CCA to improve the estimation accuracy without applying the exhaustive procedure.
The rest of this paper explains the results of applying CCA compared to the results without improvement. We start with some introductory remarks on COCOMO and Curvilinear Component Analysis. We continue with a discussion of our experimental design and results. We then apply statistical tests to the results to show that using CCA improves the accuracy of the log-linear estimation models in terms of correctness while avoiding worsening the variability.
COCOMO
The estimation model that we considered in this study is COCOMO-I (COnstructive COst MOdel) [1,2]. We use this model since it is an open model with published data [12]. Currently, COCOMO-I has evolved to COCOMO-II [2,4], but for the latter there is no published data. Our aim is to show the reliability and stability of using CCA so that others may repeat our experiment based on that available data. Therefore, we are not proposing the use of the older COCOMO-I model as an estimation model; we only use it to show that CCA is able to improve the accuracy of log-linear OLS functions. COCOMO-I is based on equation (1):

months = a × KSLOC^b × ∏(i = 1 to 15) EM_i    (1)

where the effort (months) is measured by calendar months of 152 hours, including development and management hours, a and b are two parameters related to the domain (e.g., a is a value between 2.80 and 3.20 and b between 1.05 and 1.2), and KSLOC is estimated or calculated from a function point analysis. EM_i is a set of 15 multipliers [1,2,12], which aim at weighting Eqn. (1) to provide more suitable results. For instance, there are seven multipliers (ACAP, PCAP, AEXP, MODP, TOOL, VEXP, and LEXP), which affect the effort more strongly as they increase, e.g., ACAP = "Analysts' CAPability" is a value ranging from 1.46 (= very low) to 0.71 (= very high).
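To make Eqn. (1) concrete, the following Python sketch computes the effort for a hypothetical project; the coefficient values and multipliers below are illustrative placeholders, not calibrated COCOMO 81 values.

```python
from math import prod

def cocomo_effort(ksloc, a=3.0, b=1.12, effort_multipliers=()):
    """Eqn. (1): effort in person-months = a * KSLOC^b * product of the EM_i multipliers."""
    return a * (ksloc ** b) * prod(effort_multipliers)

# Hypothetical 50-KSLOC project with two non-nominal multipliers (e.g., ACAP = 0.86, CPLX = 1.15).
print(round(cocomo_effort(50, effort_multipliers=(0.86, 1.15)), 1))
```

Because prod() of an empty tuple is 1, omitting the multipliers yields the nominal effort.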
There are seven multipliers (DATA, TURN, VIRT, STOR, TIME, RELY, and CPLX), which affect the effort less strongly as they increase, e.g., CPLX = "process ComPLeXity" is a value ranging from 0.70 (= very low) to 1.65 (= extra high). There is another multiplier (SCED = SChEDule constraint), which affects the effort more strongly either as it increases or decreases, e.g., giving analysts either too much or too little time can increase the effort. It takes values ranging from 1.23 (= very low) to 1.10 (= very high), but the central value (nominal) is 1.00. COCOMO-I can also be calibrated to local data to find a better fitting model. Since the COCOMO model is based on the assumption that the effort increases geometrically, Eqn. (1), we need to transform the model into another one where we can apply OLS (linearization). For this reason, we use a logarithmic transformation, taking the logarithm of Eqn. (1). Then, the resulting model is the following:

Ln(months) = Ln(a) + b·Ln(KSLOC) + Σ(i = 1 to 15) Ln(EM_i)    (2)

That is,

Z = β_0 + β_1·H_1 + β_2·H_2 + ... + β_16·H_16    (3)

where Z = Ln(months), H_1 = Ln(KSLOC), and H_(i+1) = Ln(EM_i) with i = 1 to 15. Then, β_0 is the intercept, β_1..16 are the model coefficients, H_1..16 are the independent variables, and Z is the dependent variable (effort). In practice, before applying OLS to Eqn. (3), one has to calculate the natural logarithm (Ln) of each value in the data set (note that, in order to calculate β_0, a vector composed of only 1s has to be inserted into the data set). Any prediction model is evaluated by taking into account the error between the estimated and actual values. An absolute error (Actual - Estimated) makes no sense in software cost estimation, because the error should be relative to the size of the project (e.g., a larger project may have a greater error). For this reason, Boehm [1] defined the COCOMO performance in terms of Relative Error (RE), as formula (4) shows, where RE_i is the relative error of project i in the test set:

RE_i = (Actual Effort_i - Estimated Effort_i) / (Actual Effort_i)    (4)
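As a minimal sketch of how Eqn. (3) can be calibrated with OLS, the snippet below builds the log-linear design matrix and solves the least-squares problem with NumPy; the data set is synthetic and only stands in for the COCOMO 81 observations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a COCOMO-style data set: KSLOC, 15 effort multipliers, observed effort.
n_projects = 63
ksloc = rng.uniform(2, 400, n_projects)
em = rng.uniform(0.7, 1.5, (n_projects, 15))
months = 3.0 * ksloc ** 1.12 * em.prod(axis=1) * rng.lognormal(0.0, 0.2, n_projects)

# Design matrix of Eqn. (3): a column of 1s for beta_0, then Ln(KSLOC) and Ln(EM_i).
H = np.column_stack([np.ones(n_projects), np.log(ksloc), np.log(em)])
Z = np.log(months)

# Ordinary least squares estimate of beta_0..beta_16.
beta, *_ = np.linalg.lstsq(H, Z, rcond=None)
print(beta[0], beta[1])  # roughly Ln(a) and b for the synthetic data
```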
Figure 1 reports on the accuracy calculation procedure that we considered in this paper. In particular, given a data set (DS) of size S_DS, a training set of size S_TrS with S_TrS < S_DS, and a test set of size S_TsS = (S_DS - S_TrS), the accuracy is calculated by Eqn. (5):

MnRE = (1/S_TsS) · Σ(i = 1 to S_TsS) RE_i    (5)

Since the best accuracy is zero, MnRE is the bias of the estimation model, and the standard deviation of RE_i is a measure of spread of the estimation model. In this work, we mainly focus on the bias (correctness) of the estimation model and its stability (validity). It is very important to note that sometimes COCOMO is evaluated by considering the Magnitude of Relative Error (MRE) [5,12], where MRE_i = abs(RE_i), instead of RE_i. Then Eqn. (5) becomes the following:

MMRE = (1/S_TsS) · Σ(i = 1 to S_TsS) MRE_i    (6)

Another way of evaluating COCOMO is to use PRED(N).
PRED(N) is the percentage of estimates in the test set whose MRE does not exceed N/100:

PRED(N) = (100/S_TsS) · |{i : MRE_i ≤ N/100}|    (7)

A PRED(25) = 80% means that 80% of the estimates are within 25% of the actuals [5,12], i.e., 80% of the estimates in the test set have an MRE value not greater than 0.25. It is possible to prove that formulas (6) and (7) are not accuracy indicators of the estimation model [8]. This means that it is incorrect to measure COCOMO (or similar parametric models) in terms of equations (6) and (7). In particular, Kitchenham et al. show that MMRE and PRED(N) measure the spread and the kurtosis of the random variable Z = Estimated/Actual. This is the reason why, in this paper, we evaluate COCOMO through Eqn. (5). Note that MMRE may be a useful measure when evaluating the goodness-of-fit of a model.
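A small Python helper, shown below, makes the relationship between Eqns. (4)-(7) explicit; the actual and estimated efforts in the example are invented for illustration and are not taken from the COCOMO 81 data set.

```python
import numpy as np

def accuracy_summary(actual, estimated, pred_level=0.25):
    """Bias and spread of Eqn. (5), plus the MMRE and PRED measures of Eqns. (6) and (7)."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    re = (actual - estimated) / actual                 # Relative Error, Eqn. (4)
    mre = np.abs(re)                                   # Magnitude of Relative Error
    return {
        "MnRE (bias)": re.mean(),                      # Eqn. (5)
        "STD(RE) (spread)": re.std(ddof=1),
        "MMRE": mre.mean(),                            # Eqn. (6)
        "PRED({:.0f})".format(pred_level * 100): 100.0 * (mre <= pred_level).mean(),  # Eqn. (7)
    }

# Illustrative test-set efforts in person-months (not from the COCOMO 81 data set).
print(accuracy_summary(actual=[120, 36, 250, 18, 60], estimated=[100, 40, 300, 15, 66]))
```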
CURVILINEAR COMPONENT ANALYSIS
CCA is a procedure for feature reduction based on auto-associative multi-layer feed-forward neural networks [3, pp. 314-319]. Applying CCA does not require being an expert in neural network (NN) computation. A CCA implementation can be found in any mathematics application that can deal with NNs and/or matrices, and implementing CCA requires just a few lines of code. Even without access to a mathematics suite, the CCA algorithm can easily be implemented in the same programming language used for calculating OLS. Due to space limitations, we focus on CCA and report only some principal notes on NNs [3,6].
Multi-layer Feed-forward Neural Networks
A neuron is a parameterized and bounded function (Figure 2-a), which can be linear or nonlinear.
A neuron (also called a unit) calculates an output y = f(Σ_i w_i·X_i), where w_i are the weights (or parameters), X_i are the inputs, and f is called the activation function. If f is a nonlinear function (e.g., a sigmoidal function such as the logistic function or the hyperbolic tangent), the neuron is called nonlinear; if f is a linear function (f is the identity function), the neuron is called linear. A feed-forward NN is generally a nonlinear function composed of some neurons and layers (Figure 2-b). In feed-forward networks, the data can only flow from the inputs to the outputs; in recurrent networks, the data flow can be circular. In Figure 2, g may be different from f. For instance, in regression problems, g is nonlinear and f is linear; in discrimination problems, both g and f are nonlinear functions. The input labeled "b" is called the bias. It is an input that provides a constant value of 1 and plays the same role as the intercept in a polynomial function. In Figure 2(b), the f units are called hidden because they are intermediate. Hidden layers express the complexity of a model. In particular, the number of hidden units corresponds to the degree of a polynomial: an NN having two hidden units is more complex than an NN with just one hidden unit, just as a second-order polynomial is more complex than a first-order polynomial. Based on observations (both inputs and outputs), the problem is then to calculate the model weights (w_i) such that the input values are mapped to the output values. The weight calculation is also called model training. To obtain this mapping, a cost function has to be minimized [3, pp. 194-201]; the most common cost function is the Euclidean distance. When using polynomials, it is possible to apply OLS to calculate the model parameters, but OLS is not applicable to NNs. Usually, the best training technique is backpropagation [10], an iterative method based on calculating gradients: at each step, the gradient of the cost function is calculated and used to update the parameters found in the previous step. The algorithm stops when satisfactory conditions have been met [3]. It is very important to note that the hidden neurons play a principal role here: their output can be considered a representation of the input in mapping the output [10]. This property will be used for implementing auto-associative NNs.
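As a minimal illustration, the sketch below evaluates a single nonlinear neuron; the weights, bias weight, and inputs are hypothetical.

```python
import numpy as np

def neuron(x, w, w_bias, f=np.tanh):
    """y = f(sum_i w_i * x_i + w_bias * 1); the constant 1 is the bias input."""
    return f(np.dot(w, x) + w_bias)

y = neuron(x=np.array([0.5, -1.2]), w=np.array([0.8, 0.3]), w_bias=0.1)
```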
Auto-Associative Multi-layer Neural Networks
An auto-associative neural network (AANN) is a particular kind of multi-layer feed-forward NN; Figure 3 shows an example of AANN topology. The aim of this kind of network is to perform nonlinear dimensionality reduction. The strategy is to map N input variables into N output variables, where the observed outputs used to train the network (the targets) are just the observed inputs themselves (for this reason the network is called auto-associative). The auto-associative network in Figure 3 tries to map each observed input into itself [3, p. 314]. This strategy is worthwhile for dimensionality reduction when the number M of neurons in the second hidden layer (Figure 3) is less than N. To get a correct dimensionality reduction, the output units must be linear (Lin), as must the M units in the second hidden layer (Lin), while the first and third hidden layers must be nonlinear (Sig = sigmoidal function). The training of this kind of network is based on minimizing a Euclidean distance similar to the one mentioned above [3, p. 314]. Note that the AANN in Figure 3 can be considered as composed of two different networks. The first network (F1, dashed rectangle) projects the initial N-dimensional data onto an M-dimensional space (M < N); this space is composed of the output of F1 when fed with the original observations. The curvilinear components of this space are encapsulated in F1. This means that, once F1 has been calibrated, it can be used for transforming any input into a representation equivalent to the original one but with fewer dimensions (from N to M). The second network (F2) maps the output of F1, having M dimensions, back into the initial N-dimensional space. The result is that the output of F1 is a nonlinear representation (projection) of the original N-dimensional space onto a shrunken space composed of M components. This network actually performs a curvilinear component analysis, also called nonlinear principal component analysis (see [7] for the definition of principal component analysis). This important result is made possible by the presence of nonlinear activation functions in the first and third hidden layers (Sig). Note that this kind of technology is able to perform both linear and curvilinear component analysis.
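A minimal sketch of the AANN topology of Figure 3 is given below, using PyTorch as one possible implementation vehicle (the paper does not prescribe a library); the layer sizes and training settings are illustrative.

```python
import torch
import torch.nn as nn

N, M, H = 16, 8, 24   # inputs/outputs, linear bottleneck (M < N), Sig-layer width

# F1: nonlinear (Sig) mapping followed by the linear (Lin) M-unit bottleneck.
f1 = nn.Sequential(nn.Linear(N, H), nn.Sigmoid(), nn.Linear(H, M))
# F2: nonlinear (Sig) demapping followed by the linear (Lin) N-unit output.
f2 = nn.Sequential(nn.Linear(M, H), nn.Sigmoid(), nn.Linear(H, N))

x = torch.randn(100, N)   # stand-in for the (scaled) observations
opt = torch.optim.Adam(list(f1.parameters()) + list(f2.parameters()), lr=1e-2)
for _ in range(500):      # train the network to map each input onto itself
    opt.zero_grad()
    loss = ((f2(f1(x)) - x) ** 2).mean()   # Euclidean (MSE) reconstruction cost
    loss.backward()
    opt.step()

reduced = f1(x).detach()  # the M curvilinear components encapsulated in F1
```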
Using CCA together with OLS (The Strategy)
The ten steps below explain the strategy that we propose for dimensionality reduction with CCA together with OLS. Note that this strategy would be the same even if we considered different parametric models (e.g., machine learning, neural networks). The aim of the strategy is to figure out whether the available data can be shrunk by a curvilinear transformation because of the redundancy of some (unknown) input variables; a more parsimonious model (i.e., one with fewer parameters because it is built upon fewer input variables) should then provide better results in terms of accuracy. The strategy is the following (a minimal code sketch of the loop follows the list):

1. Split up the available data (i.e., past observations) into two subsets (TrS and TsS) as explained in Section 2 and Figure 1.
2. Let N be the number of input variables (N = 16 for the COCOMO model). Based upon the split made in Step 1, use TrS to train N−1 models applying CCA as many times, where each time the data set is reduced by one component (i.e., in the first CCA application, TrS turns into N−1 dimensions; in the second, into N−2 dimensions; and so on, down to 1).
3. Calculate the Mean(RE) and STD(RE) for the obtained N−1 models, feeding each model with TsS.
4. Among the N−1 models obtained in Step 2, select the model having the best score calculated in Step 3, i.e., the Mean(RE) closest to zero.
5. Use TrS to train a log-linear function applying OLS without CCA.
6. Calculate the Mean(RE) and STD(RE), feeding the model with TsS.
7. Repeat Steps 1 through 6 a statistically sufficient number of times (e.g., 30), changing the composition of TrS and TsS, and obtain two distributions for each considered summary statistic: MnRE_CCA and STDRE_CCA for the models with CCA, and MnRE_NO-CCA and STDRE_NO-CCA for the models without CCA.
8. Based upon suitable statistical tests (i.e., parametric or non-parametric), evaluate the hypotheses whether (1) the distribution MnRE_CCA is significantly better than MnRE_NO-CCA and (2) the distribution STDRE_CCA is insignificantly different from STDRE_NO-CCA. If the statistical tests significantly confirm hypotheses (1) and (2), then execute Steps 9 and 10; otherwise, stop this procedure because CCA cannot significantly improve the accuracy. In the latter case, other feature selection techniques should be considered (e.g., stepwise [9]).
9. Select the model corresponding to the best value in MnRE_CCA; if two models have the same score, choose the one having the smallest spread.
10. Use this model to make predictions (on new projects).
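The sketch below mirrors Steps 1 through 7 on simulated data. To keep it self-contained, linear PCA stands in for the nonlinear CCA reduction; in practice, the curvilinear components would come from the AANN sketched in Section 3.2.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Simulated stand-ins for the log-transformed COCOMO data: H holds the 16
# columns (ln KSLOC and the 15 ln EM_i), Z holds ln(months).
H = rng.normal(size=(60, 16))
Z = H @ rng.normal(size=16) + rng.normal(scale=0.1, size=60)

def mean_re(actual, estimated):
    return np.mean((actual - estimated) / actual)

mnre_cca = []
for _ in range(30):                                    # Step 7: repeat with new splits
    idx = rng.permutation(60)
    tr, ts = idx[:40], idx[40:]                        # Step 1: TrS / TsS
    best = None
    for m in range(15, 0, -1):                         # Step 2: N-1 reduced models
        red = PCA(n_components=m).fit(H[tr])           # stand-in for the AANN (F1)
        ols = LinearRegression().fit(red.transform(H[tr]), Z[tr])
        z_hat = ols.predict(red.transform(H[ts]))
        score = mean_re(np.exp(Z[ts]), np.exp(z_hat))  # Step 3: Mean(RE) on TsS
        if best is None or abs(score) < abs(best):     # Step 4: closest to zero
            best = score
    mnre_cca.append(best)                              # one element of MnRE_CCA
```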
EXPERIMENT DESIGN
Our experimental setting is based on the strategy reported in Section 3.3. The aim of the experiment is to show that applying the procedure in Section 3.3 can improve the estimation accuracy of a log-linear function trained by OLS (Section 2). To this end, we organized the available data as reported in Figure 4. We used 60 projects of the COCOMO data set [12] to randomly build 30 different data sets. The first row of the 30x60 matrix in Figure 4 contains the experiment projects' identifiers, which we assigned randomly from 1 to 60; each of the remaining rows is a different, randomly selected circular permutation of the first row. We then split this matrix into two subsets of columns (A and B) and considered set A as a set of 30 different training sets and set B as a set of 30 different test sets. The split proportion was 2/3 - 1/3, so each item of set A included 40 project instances and each item of set B included 20 instances. Our experimental setting simulated the situation where there is a data set of past observations (set A) and a set of projects being estimated (set B), unknown at estimation time. The insight is that, if CCA were actually able to improve the accuracy of a log-linear OLS function, we should observe more accurate results (i.e., in terms of bias and spread) than the ones obtained without applying any improvement technique, and this should happen a significant number of times (at least 30 times with randomization).
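The following sketch shows one way to build the randomized design of Figure 4; the seed and the use of np.roll for the circular permutations are our choices, not prescribed by the original setup.

```python
import numpy as np

rng = np.random.default_rng(42)

first_row = rng.permutation(np.arange(1, 61))            # random project ids 1..60
shifts = rng.choice(np.arange(1, 60), size=29, replace=False)
matrix = np.vstack([first_row] +                         # the 30 x 60 design
                   [np.roll(first_row, s) for s in shifts])

set_A = matrix[:, :40]   # 30 training sets of 40 projects each (2/3)
set_B = matrix[:, 40:]   # 30 test sets of 20 projects each (1/3)
```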
We started by calculating the log-linear OLS functions with CCA. To this end, we considered set A as a set of past observations and used set B for calculating the bias and spread of each of the 30 obtained functions after applying CCA, i.e., we calculated MnRE_CCA(k) and STDRE_CCA(k) for k = 1 through 30. We use MnRE_CCA and STDRE_CCA to denote the two distributions (Appendix 1). Applying the proposed strategy to set A meant dividing this set into two further subsets, A_A and A_B (Figure 4), with the proportions 2/3 and 1/3, as explained in Section 3.3. Set A_A was used for training and set A_B for selecting the best function (Section 3.3, Step 4). Note that each element of sets A_A, A_B, and B was made of different projects with respect to every other element of the same set. With respect to the log-linear OLS functions without CCA, we first considered the 30 elements of set A_A for training as many log-linear OLS functions and then used set B for calculating MnRE_NO-CCA(k) and STDRE_NO-CCA(k), with k = 1 through 30, thus obtaining MnRE_NO-CCA and STDRE_NO-CCA (Appendix 1). We then compared the obtained distributions to figure out whether MnRE_CCA was better than MnRE_NO-CCA and whether, at the same time, STDRE_CCA was insignificantly different from STDRE_NO-CCA. In a real case, any hold-out method (i.e., one that splits the observations into two subsets, one for training and one for testing) may lose information because of the hold-out strategy; in fact, projects in set A_B cannot be used for training the log-linear OLS function with CCA. For this reason, we wondered whether the accuracy of the functions with CCA was better than the accuracy of functions trained with the complete set A. To this end, we retrained 30 log-linear OLS functions by considering each element of set A as a training set and used the corresponding elements of set B for calculating the two bias and spread distributions, thus obtaining MnRE^A_NO-CCA and STDRE^A_NO-CCA (Appendix 1), where the superscript A refers to functions trained using the complete set A. Similarly to the previous case, we tested the hypotheses {MnRE_CCA is significantly better than MnRE^A_NO-CCA} and {STDRE_CCA is insignificantly different from STDRE^A_NO-CCA}.
RESULTS AND DATA ANALYSIS
First, for each considered distribution, we performed statistical tests for normality (Chi-square goodness-of-fit statistic, Shapiro-Wilk, Z score for skewness, and Z score for kurtosis). The spread distributions obtained with and without CCA were found to be non-statistically different. Overall, we concluded that applying CCA to log-linear OLS functions improved the accuracy for the COCOMO data set without making the variability worse.
DISCUSSION AND CONCLUSION
Let us consider the implications of the results reported in Section 5. Based on the COCOMO data set, we have shown that applying CCA to log-linear OLS functions produces estimates that are more accurate than the ones provided by the same kind of functions without applying CCA.
A valuable result is that the proposed technology increases the correctness of the estimates without worsening their variability. To evaluate the reliability of our experiment, we also compared the variability (spread) of the obtained distributions. With respect to Figures 5 and 6, the spread of the CCA distributions is smaller than that of the distributions obtained without CCA (the spread is expressed by the length of the box from the lower tail to the upper tail). This holds for both the bias and the spread distributions, meaning that CCA provides more stable estimates than the same kind of functions trained without CCA.
With respect to Figure 7, we can see that the functions trained with a greater number of data points and without applying CCA (i.e., MnRE^A_NO-CCA) provide a slightly sharper distribution than the one obtained by applying CCA. However, this is the effect of using more data points. In fact, this spread improvement is not confirmed in Figure 8, where the standard deviation obtained by applying CCA is better than the one obtained without CCA.
Although running a CCA procedure requires non-negligible effort, this cost can be compensated by obtaining more accurate estimates. Even when CCA cannot provide better estimates, it can improve estimation reliability: we have shown that CCA provides distributions having fewer outliers (Figures 5, 6, 7, and 8). Practitioners and organizations dealing with software estimates may therefore apply CCA to reduce the number of outliers even when the accuracy itself would not improve. Since reducing the number of outliers expresses reliability, CCA can be a useful tool not only for improving estimation accuracy, but also for improving its reliability.
Another advantage of applying CCA is that it can be used with any kind of data and prediction model (e.g., software cost, fault proneness, defect slippage). However, the effectiveness of applying CCA to different data sets and contexts has to be evaluated empirically through replicated experiments, which we hope researchers will try out.
From a practical point of view, another advantage of applying CCA is that we do not need to know the relevance of each attribute being removed with respect to the considered context. This is an advantage because, for instance, stepwise feature selection does require that knowledge [9]. Moreover, CCA does not suffer from multicollinearity (i.e., having two or more input variables whose effects on the output cannot be separated), which can affect stepwise methods. CCA overcomes this problem by considering the simultaneous effect of every input variable through a nonlinear auto-associative neural network: CCA does not separate the effect of each variable on the output, but finds a nonlinear, equivalent, and more compact representation that keeps the effect of each variable along with the others. In fact, CCA reduces multicollinearity by finding the redundant variables (i.e., variables that can be expressed by a linear or nonlinear transformation of other variables). A further advantage is that CCA can be implemented as an automatic procedure for estimation model improvement: once implemented, it can be reused thereafter without changes. We believe that the results of the proposed work can be used by practitioners, academics, and organizations as a baseline for further empirical investigations aimed at figuring out whether CCA can be effectively applied to other data sets as well.
CCA has some drawbacks. For instance, the procedure that we proposed in Section 3.3 is based on the assumption that we have enough data to split the data set into two subsets (TrS and TsS); otherwise, CCA would not be applicable. CCA is based on NNs, which require knowledge of some optimization techniques to reduce the training time; not applying any optimization technique may increase the effort and reduce the gain of applying CCA. The successful application of CCA to the COCOMO data set shown in this work should be considered a first step toward an emerging approach, which may eventually integrate the canonical statistics that the scientific community has effectively relied upon so far.
In the future, we plan to compare the proposed approach with other feature selection techniques based on stepwise methods [9], as well as to explore the possibility of combining CCA and stepwise selection to get the benefits of both techniques.
Fig. 4. Experimental setting.
Fig. 5. Bias analysis (MnRE_CCA vs. MnRE_NO-CCA).
An advanced method for propargylcholine phospholipid detection by direct-infusion MS
Abstract Phospholipids with a choline head group are an abundant component of cellular membranes and are involved in many important biological functions. For studies on the cell biology and metabolism of these lipids, traceable analogues where propargylcholine replaces the choline head group have proven useful. We present a novel method to analyze propargylcholine phospholipids by MS. The routine employs 1-radyl-2-lyso-sn-glycero-3-phosphopropargylcholines as labeled lysophosphatidylcholine precursors, which upon cellular conversion direct the traceable tag with superb specificity and efficiency to the primary target lipid class. Using azidopalmitate as a click-chemistry reporter, we introduce a highly specific, sensitive, and robust MS detection procedure for the propargylcholine phospholipids. In a first study, we apply the new technique to investigate choline phospholipid metabolism in brain endothelial cells. These experiments reveal differences in the metabolism of phosphatidylcholine and its pendant, ether phosphatidylcholine.
The novel method described here opens a new, quantitative, and detailed view on propargylcholine phospholipid metabolism and will greatly facilitate future studies on choline phospholipid metabolism.
Supplementary key words: click • lipidomics • lysophosphatidylcholine • ether lipid • plasmalogen • propargyl-PC

Phospholipids containing a choline moiety in their head group represent a major fraction of the cellular lipidome. The family of choline phospholipids includes phosphatidylcholine (PC), ether phosphatidylcholine (PC O), and sphingomyelin (SM). In most eukaryotic cells, PC comprises almost half of all phospholipids, whereas the generally less abundant SM or PC O show highly elevated levels in particular cells, e.g., in the brain or the heart.
For investigations on the cell biology of choline phospholipids, an analogue of choline, propargylcholine, bearing a terminal alkyne moiety, was introduced (1).
Upon metabolic incorporation, the propargylcholine replaced the majority of choline head groups in the cellular lipidome. The terminal alkyne of the propargylcholine phospholipids can be click reacted (2) with dedicated reporter azides (3) to enable lipid tracing by microscopy (1).
A particular strength of this tracer is the fact that the obtained localization data can be correlated with metabolic analyses. Propargylcholine phospholipid metabolism can be followed by TLC using fluorogenic reporter azides (4,5) or by MS, benefiting from a specific precursor ion in positive ion mode (1). However, both technologies have major limitations. For TLC, the limit of detection is in the low picomole range, and usually, lipid species are not resolved (4). While MS considerably boosts sensitivity, the conventional approach to detect the main propargylcholine lipid metabolites only delivers their sum FA composition (1).
We have recently introduced a highly sensitive MS method for tracing alkyne-labeled lipids employing a dedicated azide reporter that, upon click reaction, facilitates the ionization and identification of the labeled product (6). Using this reporter, termed C171, we demonstrated subfemtomole sensitivity for side chain-labeled alkyne-lipid tracing.
Here, we demonstrate the applicability of our C171-based method for analyzing also head group-labeled alkyne lipids, the propargyl phospholipids. As the different positioning of the alkyne label at the head group imposes some intrinsic restrictions, we furthermore present a novel method overcoming these limitations. We therefore introduce azidopalmitate (N3Pal) as a clickable MS reporter that allows for direct identification of the labeled propargyl phospholipids at the MS1 level by conferring a predictable mass shift to the analyte. Importantly, in negative ion mode at the MS2 level, a diagnostic fragment is formed that confirms the lipid identity while the individual side chains of the lipid are revealed in parallel. We have used this novel method in a series of experiments where we investigated the propargyl phospholipid metabolism in a brain endothelial cell line. This study opens a detailed and quantitative view on phosphocholine lipid homeostasis in bEND3 cells and demonstrates differences between regular and ether PC metabolism.

This article contains supplemental data. For correspondence: Lars Kuerschner, lars.kuerschner@uni-bonn.de.
Cell culture and lipid labeling
The brain endothelial cell line bEND3 was obtained from ATCC (CRL-2299) and maintained in DMEM medium (Gibco; 31966021) containing 10% fetal calf serum (Gibco; 11560636) and 1% penicillin/streptomycin (Gibco; 15070063). Propargylcholine lipids were added to the medium at concentrations of 20 μM from 5 to 10 mM stock solutions in 80% ethanol. Cells were then cultured for 24 h.
Lipid extraction and click reaction
Cells on 24-well dishes (supplemental Fig. S2) were washed once with ice-cold PBS (Sigma; 806552) and quickly once with 155 mM ammonium acetate, taking care to remove the liquid after the last wash as completely as possible. The lipids were extracted by addition of 500 μL methanol:CHCl3 (7). Culture dishes were sonicated in a bath sonicator for 30 s before lipid collection. After centrifugation, the supernatants were retrieved and mixed with 300 μL CHCl3 and 700 μL of 1% acetic acid to induce phase separation. The organic phase was collected, evaporated in the speed vac (45 °C, 20 min), and redissolved in 10 μL CHCl3, and the tubes were briefly vortexed. To tubes clicked with C171, 40 μL of C171 click mix were added (prepared by mixing 10 μL of 100 mM C171 in 50% methanol with 200 μL 5 mM Cu(I)AcCN4BF4 in acetonitrile [AcCN] and 800 μL ethanol), while to tubes clicked with N3Pal, 70 μL of N3Pal click mix were added (prepared by mixing 10 μL of 50 mM N3Pal in ethanol with 250 μL 5 mM Cu(I)AcCN4BF4 in AcCN and 750 μL ethanol), followed by sonication for 5 min and incubation at 42 °C for 16 h. About 300 μL of CHCl3 and 700 μL water were added, and samples were briefly shaken and centrifuged for 5 min at 20,000 g. The upper phase was removed, and the lower phase dried in a speed vac as above. About 500 μL of spray buffer (2-propanol/methanol/water 8/5/1 + 10 mM ammonium acetate) was added, and the tubes were sonicated for 5 min and stored at −20 °C.
Determination of lipid recovery
Total lipids from 45,000 unlabeled bEND3 cells (12 identical samples) were isolated as above but using an extraction mix also containing 250 pmol of synthetic pPC 18:0/18:0, pPC 18:1/18:1, pPC 18:2/18:2, pPC 18:3/18:3, pPC 20:4/20:4, and pPC 22:6/22:6. Six samples were processed as usual (sample a), and to a further six samples (sample b), another 250 pmol of all synthetic pPCs was added prior to the click reaction. All samples were click reacted with N3Pal before processing continued as usual. Samples were analyzed by MS, and the signal intensities of the homoacyl-pPCs were determined. Recovery as a percentage was calculated from the signals according to 100 * a/(b − a).
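A minimal sketch of this recovery calculation, with hypothetical signal intensities:

```python
def recovery_percent(a, b):
    """a: standard spiked before extraction; b: same sample with a second,
    equal spike added only before the click reaction."""
    return 100.0 * a / (b - a)

print(recovery_percent(a=7.9e5, b=17.9e5))   # hypothetical signals -> 79.0
```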
Determination of method linearity, detection, and quantification limits
Total lipids from 45,000 unlabeled bEND3 cells were isolated as above and mixed with increasing concentrations of synthetic homoacyl-pPCs and 240 pmol of pPC 31:1. Samples were processed as usual employing the N3Pal reporter, and quantification used the pPC 31:1 internal standard. MS2 signals of the click-reacted lipid (PR2), of its diagnostic fragmentation peak upon neutral loss (NL) of 335.26, and of the respective FA were recorded. Five replicate experiments were performed.
MS analysis
The tubes were sonicated for 5 min, and the dissolved lipids were analyzed. Mass spectra were recorded on a Thermo Q-Exactive Plus spectrometer equipped with a standard heated ESI source using direct injection from a Hamilton syringe driven by a syringe pump under the control of the Tune instrument control software. MS1 spectra (resolution of 280,000) were recorded in 100 m/z windows from 250 to 1,200 m/z (positive mode) and 950-1,300 m/z (negative mode) followed by recording MS/MS spectra (resolution of 280,000) by data-independent acquisition in 1 m/z windows from 200 to 1,200 m/z (positive mode) and 950-1,300 m/z (negative mode).
MS data analysis
Raw files were converted to mzml files using MSConvert and analyzed using LipidXplorer (8). For identification and quantification of labeled alkyne lipids, molecular fragment query language files were written that identify the species by the presence of a peak corresponding to the expected masses of the labeled lipid class combined with the characteristic NL. Lipids were quantified using the respective internal standard. The applied molecular fragment query language files are provided in the supplemental data.
Statistical analysis
Statistical differences between sample groups were calculated using GraphPad Prism, version 8.0, software. Two-way ANOVA was followed by Dunnett analysis to correct for multiple comparisons. Family-wise significance and confidence level (alpha 0.05; 95% CI) settings were applied. Multiplicity-adjusted P values were calculated.
The technology
To establish an improved method for choline phospholipid analysis by MS, we reasoned that a chemical modification of the propargylcholine moiety during sample preparation could facilitate the analysis of the lipid. Benefiting from the possibilities of the click reaction (2, 3), we first explored the potential of our recently introduced C171 reporter (6).
The C171 reporter comprises a charged quaternary ammonium group, a linker, and an azido group for reaction with terminal alkynes. For the initial setup of the method, we used a synthetic phosphatidylpropargylcholine, pPC 31:1, featuring myristic acid (FA 14:0) and heptadec-9-enoic acid (FA 17:1) side chains (Fig. 1A). Upon click reaction, the C171 reporter (Fig. 1B) conferred a nominal mass shift of +171 Da to the lipid analyte (Fig. 1C). The positive charge ensured efficient ionization, and at moderate collision energies, the labeled lipid showed the stereotypic NL of 73.09 Da (Fig. 1D) observed before (6). Intriguingly, the introduced positive charge on the bicyclic triazole neighboring that on the quaternary ammonium favors this NL over the commonly observed loss of a positive head group fragment m/z 208.07 (1), corresponding to m/z 184.07 for regular PC. The NL of 73.09 Da is diagnostic and enables identification of the lipid analyte while providing the sum FA composition. At elevated collision energies, further fragments, specific for the labeled head group moiety, can be detected (Fig. 1D).

Fig. 1. Analysis of pPC by ESI-tandem MS using click-chemistry reporters. Synthetic pPC 31:1 (A) click-reacted with the C171 reporter (B) generates a mass-shifted product (C) whose MS2 fragmentation spectra at increasing collision energy in positive mode, together with the most likely structures (D), are shown. Alternatively, click reaction with the N3Pal reporter (E) generates a different mass-shifted product (F) whose MS2 fragmentation spectra at increasing normalized collision energy (NCE) in negative mode, together with the most likely structures (G), are depicted. Magenta numbers indicate the diagnostic fragmentation peaks upon NL and the corresponding molecular structure. Green numbers indicate the molecular structures corresponding to NL fragments. Orange numbers indicate the diagnostic peaks and corresponding FA structures obtained only in negative mode, as employed by the N3Pal reporter method.
To deepen the analysis, an identification of the individual FA side chains would be desirable. To detect fragments of the individual side chains generated in an MS2 setup, we opted for negative-mode MS. Consequently, the use of a different reporter for the click reaction became necessary. Such a reporter should include an azido moiety, ensure adequate ionization of the labeled product to enhance its signal, confer a predictable mass shift to the lipid analyte that allows for its direct identification at the MS1 level, and yield a diagnostic fragmentation pattern at the MS2 level. N3Pal fulfills these requirements (Fig. 1E). Upon click reaction, it confers a nominal mass shift of +296.08 Da to the lipid analyte (Fig. 1F). The negative charge enables efficient ionization in the negative mode, and at moderate collision energies, the labeled lipid showed a stereotypic NL of 335.26 Da (Fig. 1G). Importantly, at moderate and elevated collision energies, side chain-specific fragments (m/z 227.20 and 267.23), revealing the identity of the attached FAs (FA 14:0 and FA 17:1, respectively), can be detected alongside another head group-specific fragment (m/z 379.34). This way, various lipid classes containing the propargylcholine head group can be analyzed (supplemental Fig. S3). Using our instrumentation and protocol, the method provided a linear range of at least three orders of magnitude, a detection limit of 1 pmol, and a limit of quantification of 4 pmol (supplemental Fig. S4). While delivering an average of 79% analyte recovery for six different pPC species, the recovery of PUFA-containing lipids was found to be reduced if the FA signal rather than the peak corresponding to the NL was considered (supplemental Fig. S4).
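For orientation, the sketch below collects the diagnostic masses introduced above into a small helper; the precursor m/z in the example is hypothetical, and the simple addition assumes singly charged ions so that the mass shift maps 1:1 onto m/z.

```python
C171_SHIFT, C171_NL = 171.0, 73.09      # Da; positive-mode reporter
N3PAL_SHIFT, N3PAL_NL = 296.08, 335.26  # Da; negative-mode reporter

def expected_peaks(analyte_mz, shift, neutral_loss):
    """Return the MS1 parent and the diagnostic MS2 fragment (after the
    stereotypic neutral loss) of the click-reacted lipid."""
    clicked = analyte_mz + shift
    return clicked, clicked - neutral_loss

parent, fragment = expected_peaks(744.5, N3PAL_SHIFT, N3PAL_NL)  # hypothetical m/z
```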
A proof of concept
To investigate the choline phospholipid metabolism in cells, we chose a labeling strategy employing either the synthetic LpPC 16:0 featuring a palmitic acid side chain, or its ether pendant, LpPC O-16:0 with the corresponding fatty alcohol at the sn-1 position (supplemental Fig. S1). The brain endothelial cell line bEND3 was incubated with 20 μM of either tracer for 24 h.
We first determined the effect of labeling on major lipid classes of the sphingolipid, glycerophospholipid, and neutral lipid families (Fig. 2A). Upon incubation with either LpPC 16:0 or LpPC O-16:0, the amounts of unlabeled PC significantly decreased (Fig. 2A and Table 1), indicating a cellular compensation for the surplus of exogenously added propargylcholine phospholipids. Likewise, the content of PE, LPE, and PS was reduced, while the levels of PC O significantly increased only during incubation with LpPC O-16:0. Addition of LpPC O-16:0 also increased TG, whereas LpPC 16:0 lowered the cholesterol esters (Fig. 2A and supplemental Table S1). The levels of the other tested neutral and glycerophospholipids as well as sphingolipids were not significantly affected.
Next, we aimed to elucidate the metabolic fate of the labeling lipids. Upon uptake, LpPC 16:0 or LpPC O-16:0 underwent cellular acylation to yield pPC or pPC O, respectively (Table 1). An analysis of the pPC species generated from the LpPC 16:0 tracer was performed by applying the C171 or the N 3 Pal reporter method (Fig. 2B). Comparing the species distribution of the unlabeled endogenous PCs with that of the labeled pPC pool, a clear correlation emerged. The most prevalent species showed a range of 32-36 carbons in their side chains, and 34 carbons were most abundant. The degree of FA saturation also matched well between both pools. Detection of pPC species by the C171 versus N 3 Pal reporter methods often yielded comparable amounts ( Fig. 2B and Table 2, species columns).
The main advantage of the N3Pal-based detection over the C171 reporter method, however, is that it delivers subspecies information by identifying the two FA side chains, in addition to the sum FA composition (Table 2, subspecies columns). Out of the reported 23 pPC species, the N3Pal-based detection alone revealed that 20 species contained subspecies, whereas 3 species did not. Up to five subspecies could be detected for two species (pPC 36:4 and pPC 38:5). When analyzing the abundance of each FA among all 70 detected pPC subspecies, the generally most frequent palmitate, oleate, palmitoleate, and stearate ranked highest and together comprised 83.4% of all FAs (Table 3). Arachidonate ranked fifth (3.7% of all FAs) and was found in eight subspecies. Together, all PUFAs encompassed 12.4% of all FAs and showed the widest distribution in the pool. Given the fact that the employed LpPC 16:0 tracer featured a palmitate side chain and that, out of the 70 detected pPC subspecies, 53 (corresponding to 554 pmol/45,000 cells, or 32% of all pPC molecules) did not contain a palmitate, a substantial lipid remodeling within the pool became evident.
When analyzing a sample using the N3Pal reporter method, quantification of the peak corresponding to the fragment after an NL of 335.26 Da provides a sensitive means of detection (Fig. 1G). As this peak is specific and abundant, it represents the preferred way to quantify pPC species using an internal pPC standard. Alternatively, the signal intensities of the peaks corresponding to the fragmented FA side chains may be used. Comparing the two quantification approaches, a generally good agreement was found (Table 2). Hence, the total pPC content in 45,000 cells was determined as 1,641 or 1,750 pmol for the method based on NL 335.26 or the FA peaks, respectively. Both numbers were in good agreement with 2,048 pmol, the value obtained from the C171 reporter method (Table 2, supplemental Table S2). Again, the side-chain carbon range and saturation degree matched well between both pools. The PC O pool contained PUFAs at a frequency of 28%. At least for the five most abundant pPC O species, a negligible occurrence of fatty alcohol vinylation was found, rendering the identified metabolites labeled plasmanyl species. Detection of the labeled pPC O species by the N3Pal reporter method generally showed a higher sensitivity than analysis by the C171-based method. Remarkably, either method reported significantly more pPC O than PC O, demonstrating a larger pool size of the labeled versus the unlabeled ether PCs upon tracer supplementation (supplemental Table S2, bottom). As this deviated from the data obtained for the pPCs, it pointed to differences in the metabolism of ether versus nonether PCs (Table 1).
Comparing data on the LpPC 16:0 and LpPC O-16:0 tracers, we found that both precursors directed the propargylcholine label to their primary target lipid class with similarly high efficiencies.
DISCUSSION
Propargylcholine labeling of choline-containing lipids has been proven a valuable tool to investigate the cell biology and metabolism of PC (1). The versatility of the alkyne tag in combination with advanced click reporters, detection technologies, and instrumentation has opened new possibilities in lipid research (9). However, the particular developments in alkyne lipid tracing by MS have thus far focused on side chaintagged alkyne lipids (6). For an alkyne head group such as propargylcholine, the potential of the new methodologies had not been explored.
Propargylcholine-containing lipids have been analyzed by MS using scans for [M + H]+ ions in positive ion mode that are precursors of m/z 208.1 (1). This approach has some intrinsic limitations. When unfractionated lipid extracts are continuously infused into the ESI source on a quadrupole, an overlap of various parent ions occurs. During MS2 scanning to detect the m/z 208.1 fragment, such overlap can cause problems, as this fragment is head group specific but carries no intrinsic information on the side chains. Our strategy employing click reaction and the C171 reporter overcomes this shortcoming. In MS2 analysis, this reporter gives a characteristic NL so that the backbone of the labeled lipid still appears as a charged fragment. This improves the specificity of the analysis because the analyzed lipid is defined by two specific ions, enabling resolution of isobaric species in MS2, which would not be possible with a charged reporter ion such as the 208.1 Da propargylcholine head group (6). In addition, the quantification benefits from the high sensitivity of MS2 scanning in 1 Da windows. With unfractionated lipid extracts, minor species occasionally fail to give peaks in MS1, whereas in MS2, both the precursor and the fragments are detected, allowing for unequivocal identification and quantification. However, both the conventional strategy based on the 208.1 Da reporter ion and the C171 reporter approach employing the NL of 73.1 reliably deliver only the sum FA composition of the lipid analyte.

[Fig. 2A legend: The specificity of labeling (preference for the primary target lipid class) has been calculated and is expressed as a percentage of the total detectable label.]
Our novel method employing click reaction and the N 3 Pal reporter also overcomes this limitation. While maintaining the ionization ease in the negative ion mode applied here, it benefits from all advantages achieved with the C171 reporter and more. The nominal mass shift of +296 Da by the N 3 Pal reporter relocates the ion of the analyte to a portion of the MS1 spectrum that is hardly occupied and thus effectively reduces parent ion overlap. During MS2 analysis, the N 3 Pal method also provides a characteristic NL preserving the lipid backbone information in the detected fragment. The signal for the NL 335 seen here is about 3-fold stronger than that obtained for the NL 73 detected for the C171 reporter method using positive mode. Both methods profit from the high sensitivity of MS2 scanning in 1 Da windows. In a negative mode MS2 analysis, the N 3 Pal method also shows advantages over a possible direct detection of unclicked pPCs as acetate counterion adducts with methyl group elimination (10). While ionization efficiencies are comparable, the N 3 Pal method reliably delivers far more intense parent peaks in MS2.
Importantly, the lipid analyzed by the N3Pal method is defined by two specific ions with high diagnostic power and, in addition, by FA-specific fragments revealing the identity of the side chains. However, no information on sn-1/sn-2 placement or positioning of double bonds within the side chain is obtained. Yet, as cellular metabolism is unlikely to change the sn-1 linkage of the fatty alcohol in ether lipids, the liberated FA can be assumed to originate from the sn-2 position in the case of ether lipids.

[Table 2 legend: Total lipids isolated from bEND3 cells labeled with 20 μM lyso-propargyl-PC 16:0 for 24 h were analyzed using either the C171 (bold values) or the N3Pal (italic values) reporter method. Each lipid species was detected, identified as sum FA composition, and quantified using either the NL 73.09 or the NL 335.26 peak, with pPC 31:1 as internal standard for either method (species columns; corresponding to data in Fig. 2B). Only the N3Pal reporter method delivered the subspecies composition (subspecies columns), revealing the identity of the two FAs. The sum of the peaks corresponding to both FA fragments was used to quantify the subspecies and the subspecies' proportion of all subspecies. Lipid amounts are shown as pmol per 45,000 cells and represent means ± SD; N = 7. Lipid species less abundant than 25 pmol under all conditions depicted in Fig. 2B were omitted.]

[Table 3 legend: Total lipids isolated from bEND3 cells labeled with 20 μM lyso-propargyl-PC 16:0 for 24 h were analyzed using the azidopalmitate reporter method. Each lipid species was identified using the NL 335.26 peak and the side chains by the respective FA fragment peaks. The two FAs were quantified as half of their sum, and the FA fragment peaks generated from pPC 31:1 served as internal standard. A total of 70 pPC subspecies carrying FA side chains at 140 possible positions were analyzed. FA amounts are shown as pmol per 45,000 cells and represent means; N = 7. The data correspond to the subspecies columns in Table 2.]
For labeling of the choline-containing lipid pool methods using D9-choline and stable isotope tagging have proven invaluable (11)(12)(13). An alternative strategy employs propargylcholine (1). Our approach relates to the latter but also employs click reaction and dedicated reporters. In addition, we chose to use LpPCs as labeled precursors as these tracers represent a good compromise between precursor solubility and labeling specificity. Using an intermediate concentration of LpPC 16:0 or LpPC O-16:0, a superior labeling of the respective target lipid class pPC or pPC O with high specificity and efficiency was achieved.
As the cells take up and metabolize these precursors, they exert parallel adjustments to their lipidome. Unsurprisingly, the pool of endogenous PC is affected strongest and reduced accordingly (14). The precision of the underlying regulatory mechanisms is intriguing, and we find a superb compensation, illustrated by excellent numerical matching of pool size adaptations. For one parameter, the cells appear to adjust their membrane composition for steady proportions of the different lipid head groups. As observed before, propargylcholine is well accepted by cellular metabolism and substitutes effectively for the choline moiety in lipids (1). Accordingly, the levels of endogenous PC are reduced by both the labeled pPC and its ether pendant pPC O to accommodate the surplus of propargylcholine head groups. Here, the nature of the sn-1 side chain (FA or fatty alcohol) appears to play a secondary role. However, certain flexibility for gross adjustments in head group composition may exist as we also find reductions in ethanolamine-containing lipids (endogenous LPE or PE) and PS for either precursor treatment. All other tested lipid classes displayed no changes with the notable exception of TG and cholesteryl ester, the relevance of the latter remains unclear.
Apart from the head group composition, cells are also known to precisely fine tune the FA profile within their lipidome. When comparing the labeled metabolites of LpPC 16:0 or LpPC O-16:0 to the endogenous lipid pool, we find a well-matching pattern of side-chain length and saturation. Not only similar species were identified but also the ranking of their abundance was similar. Although both tracers contain a saturated tail of 16 carbons, a great variety of side chains were found in the metabolites. For labeled pPC, a third of all species was not containing palmitate, and hence surely derived from lipid remodeling. While this again demonstrates the great acceptance of the propargyl label by the involved enzymatic machinery, it also shows the extent of lipid remodeling. Palmitate is generally considered the most abundant saturated FA and is usually found at the sn-1 position in membrane lipids. Hence, acylation of the LpPC 16:0 tracer would right away yield a very common lipid species, and yet, the cells spend considerable efforts to further modify at least 32% of these initial metabolites during remodeling. That way the cells maintain a specific side-chain composition within each lipid class that is paralleled by a certain head group distribution within the whole lipidome. Noteworthy, the profile of labeled pPC species observed in our experiments does not indicate whether it originates from head group or side-chain remodeling. Likely, both activities exerted by phospholipases C/D or A1/A2, respectively, will occur within our extended experimental time frame.
Comparing the remodeling of LpPC 16:0-derived and LpPC O-16:0-derived lipids, some differences in ether versus nonether lipids became apparent. When labeling cells under equal conditions, both precursors labeled their primary target lipid class with a superb efficiency of ∼85%. However, the total amount of metabolites from LpPC 16:0 was twofold lower than that from LpPC O-16:0. Conversely, the pool size of endogenous PC in bEND3 cells is tenfold higher than that of its ether pendant PC O. This led to very different proportions of labeled versus endogenous lipids in both classes. While a quarter of all PC molecules became labeled, one endogenous PC O was matched by three labeled PC O molecules. If assuming a similar rate of precursor uptake and primary acylation, a longer dwell time of the propargyl label within the ether versus the nonether PC pool is indicated. This is in accordance with the calculated long half-lives of ether lipids in neuronal cells and whole brain (15, 16) but does not exclude the existence of short-lived subpopulations (17).
When analyzing the label transfer away from the primary metabolite pool to other lipid classes, a comparable overall frequency (15% for pPC vs. 17% for pPC O) of head group exchange was found. However, some differences were observed. Head group transfer yielding labeled pSM was threefold more frequent for labeled pPC than pPC O, despite the latter being twofold more abundant. This may indicate that the involved enzymes, such as the SM synthases (18,19), prefer pPC over pPC O as head group donor. This notion is in line with the idea of a higher stability of the propargyl label within the ether versus the nonether PC pool. Finally, we noted a profoundly higher occurrence of PUFAs in the pPC O (28%) versus the pPC (12%) pool of the analyzed bEND3 cells. This may relate to the described function of ether lipids to act as reservoirs for PUFAs (20). Because PUFAs tend to release carbon dioxide during fragmentation and hence escape detection, our analysis likely underrepresents their general abundance within both pools (21)(22)(23)(24). Indeed, when evaluating several synthetic pPC species, our instrumentation showed a profoundly reduced response to those lipids that contained two PUFAs, and this effect increased with the number of double bonds.
Taken together, the biological data presented here open a quantitative view on the choline phospholipid metabolism in bEND3 cells. They reveal differences in metabolism of PC and ether PC while demonstrating the power of the newly introduced tracing tools. The novel method developed here will greatly facilitate further studies in the field.
Data availability
Data are available from the authors on reasonable request.
Progress in Predicting Ames Test Outcomes from Chemical Structures: An In-Depth Re-Evaluation of Models from the 1st and 2nd Ames/QSAR International Challenge Projects
The Ames/quantitative structure–activity relationship (QSAR) International Challenge Projects, held during 2014–2017 and 2020–2022, evaluated the performance of various predictive models. Despite the significant insights gained, the rules allowing participants to select prediction targets introduced ambiguity in model performance evaluation. This reanalysis identified the highest-performing prediction model, assuming a 100% coverage rate (COV) for all prediction target compounds, and estimated the performance variation due to changes in COV. All models from both projects were evaluated using balanced accuracy (BA), the Matthews correlation coefficient (MCC), the F1 score (F1), and the first principal component (PC1). After normalizing the COV, a correlation analysis with these indicators was conducted, and the evaluation index for all prediction models in terms of the COV was estimated. Across 109 models, the model with the highest estimated BA (76.9) at 100% COV was MMI-VOTE1, as reported by Meiji Pharmaceutical University (MPU). The best models for MCC, F1, and PC1 were all MMI-STK1, also reported by MPU. All the models reported by MPU ranked in the top four. MMI-STK1 was estimated to have F1 scores of 59.2, 61.5, and 63.1 at COV levels of 90%, 60%, and 30%, respectively. These findings highlight the current state and potential of the Ames prediction technology.
Introduction
The Ames test, a biological assay that utilizes bacterial strains such as Salmonella typhimurium, is a widely used method for assessing chemical mutagenicity by monitoring reverse mutations [1]. This test serves as a preliminary screening tool to evaluate the carcinogenic potential of chemicals [2]. Recently, the focus has shifted toward developing new methods for the initial assessment of impurities in pharmaceuticals. The international conference on harmonization (ICH) M7 guideline promotes the use of quantitative structure-activity relationship (QSAR) models as an alternative to traditional toxicological studies [3], making the accuracy of QSAR models in identifying mutagenic chemicals increasingly important.
The Division of Genetics and Mutagenesis at the National Institutes of Health Sciences in Japan (DGM/NIHS) has developed an Ames mutagenicity database, which includes chemicals not previously incorporated in QSAR model development. The Ames/QSAR International Challenge Project [4], conducted between 2014 and 2017, involved twelve QSAR vendors from seven countries and tested seventeen QSAR tools across three distinct phases (i.e., Phases I-III). Phases I, II, and III were performed between 2014 and 2015, 2015 and 2016, and 2016 and 2017 with a total of 3902, 3829, and 4409 compounds, respectively. A total of 12,140 compounds were used as an external validation set, with the Ames test data for these chemicals sourced from unpublished data registered under Japan's Industrial Safety and Health Act (ANEI-HOU) at the Ministry of Health, Labour and Welfare [5].

[Fig. 1 legend: BA (%), MCC, COV (%), Johnson Sb[COV (%)], and Johnson Sb[COV (%)] without COV 100% represent various evaluation metrics for the prediction models. Specifically, BA (%) refers to the balanced accuracy, MCC stands for the Matthews correlation coefficient, and COV (%) denotes the coverage rate for all predicted target compounds. Johnson Sb[COV (%)] represents the Johnson-normalized coverage rate, while Johnson Sb[COV (%)] without COV 100% signifies the Johnson-normalized coverage rate excluding the prediction models with a 100% coverage rate. Prediction models that had 100% coverage are highlighted in dark. Out of the 109 types of prediction models evaluated in this study, 33 models had a coverage rate of 100%. The normal quantile plots are shown as dots and red lines.]
As depicted in Figure 2, we observed strong correlations between BA, MCC, and the F1 score. The correlation coefficients were 0.859 between BA and MCC, 0.943 between MCC and F1, and 0.922 between F1 and BA. To derive a more integrated index from BA, MCC, and F1, we conducted a principal component analysis. The first principal component (PC1), which accounted for 93.9% of the variance, was selected as a comprehensive evaluation index (Figure 3). Therefore, PC1, along with BA, MCC, and F1, was used for subsequent analyses. However, it is important to note that the COV is determined by factors distinct from the model's intrinsic performance. Therefore, COV was analyzed separately in correlation studies.
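A minimal sketch of deriving such a PC1 index is shown below; the 109 x 3 metrics matrix is simulated, and standardizing the indicators before PCA is our assumption rather than a detail stated in the text.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Simulated 109 x 3 matrix of (BA, MCC, F1) values, one row per model.
metrics = np.random.default_rng(1).normal(size=(109, 3))

scaled = StandardScaler().fit_transform(metrics)   # assumed standardization
pc1 = PCA(n_components=1).fit_transform(scaled).ravel()
# In the study, this first component explained 93.9% of the variance.
```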
Evaluating the Impact of Coverage on Predictive Performance
A correlation analysis was conducted to assess the impact of the COV on BA, MCC, F1, and PC1. Due to the skewed distribution of COV, Johnson normalization was applied to enable a normal quantitative correlation analysis [11,12].
Performance of All Models at 100% Coverage
In this study, we neutralized the influence of the COV and reassessed model performance. We used residuals from the linear relationship between the normalized COV and each evaluation index to estimate the performance of all the models at 100% COV. The model with the highest BA of 76.9% at 100% COV was MMI-VOTE1, as reported by Meiji Pharmaceutical University (MPU). Additionally, the best predictive models for MCC, F1, and PC1 were all MMI-STK1, also reported by MPU. The respective values were 0.443 for MCC, 52.8% for F1, and 2.55 for PC1 (Table 1). Notably, all four top-ranking models, based on the comprehensive PC1 index, were developed by MPU, a first-time participant in the second challenge. Furthermore, the fifth and sixth most outstanding models were BM_PHARMA v1.5.2.0, submitted by MultiCASE Inc. (Mayfield Heights, OH, USA), and Derek_Nexus v.4.2.0, reported by Lhasa Limited (Leeds, UK) in the first project, respectively (Table 1).
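Our reading of this residual-based adjustment can be sketched as follows; the data are simulated, and the Johnson Sb normalization step is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Simulated indices: 109 models with coverage between 30% and 100%; BA is
# given a linear dependence on COV plus model-specific noise.
cov = rng.uniform(30, 100, size=(109, 1))       # coverage rate (%)
ba = 70.0 - 0.05 * cov.ravel() + rng.normal(scale=2.0, size=109)

fit = LinearRegression().fit(cov, ba)           # linear COV-BA relationship
residuals = ba - fit.predict(cov)               # COV-independent model effect
ba_at_100 = fit.predict([[100.0]])[0] + residuals
best = int(np.argmax(ba_at_100))                # candidate top model at 100% COV
```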
Target Prediction Model for Reassessment
The first Ames/QSAR International Challenge Project [4], which was divided into three separate phase challenges, conducted each phase independently.Notably, the Ames test data for compounds used in the external validation sets of a previous phase were disclosed to participants before the start of the subsequent phase.This disclosure allowed participants to adjust their models using the newly available data.As a result, models that shared the same name across different phases might have been adjusted or modified.Furthermore, it was observed that the COVs were often reconfigured for models with the same name across various phases.Therefore, in this reanalysis, all the models used in each phase of the first project were treated as independent entities, regardless of whether they shared the same name.
On the other hand, during the second challenge [6], a rule was introduced that permitted participants to select which of their multiple submitted models would be evaluated.This rule aimed to reduce bias toward participants who submitted several models, thereby ensuring a fairer assessment.However, this approach also carried the risk of potentially high-performing models being excluded from the evaluation process.To mitigate this, in the current reanalysis, all the models submitted were extracted and included in the reassessment.As a result, the total number of models evaluated rose to 109 (Table S1).
Evaluation Index
Like many toxicity tests, the distribution of positive and negative compounds in the Ames test results is notably skewed.In the first Ames/QSAR International Challenge Project, the external validation set consisted of 12,140 compounds across three phases.Of these, 1757 (14.5%) were positive and 10,383 (85.5%) were negative.In the second challenge, the external validation set comprised 1589 compounds, with 236 (14.9%) testing positive and 1353 (85.1%) testing negative.It is widely acknowledged that using metrics such as sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) for such imbalanced data can often lead to misleading evaluations [13,14].
Accuracy (Acc), while a frequently used evaluation metric, is notably vulnerable to skew in datasets that are imbalanced [15,16].Acc is computed using the formula (TP + TN)/(TP + TN + FP + FN), where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.This metric is straightforward and easy to comprehend.However, its reliability diminishes in scenarios like the project, where only 14.5% of the cases were positive.If all the test compounds were predicted to be negative, the resulting Acc would be 85.5%, which is misleadingly high.Consequently, a model that fails to accurately predict any positive cases may appear to outperform many others across both projects due to this metric's susceptibility to data imbalance.
In predictive modeling, sensitivity (also known as recall) and specificity often display a trade-off relationship. This is also observed for PPV, also known as precision, which often has a complex relationship with sensitivity, depending on the threshold used in the model. Similarly, NPV and specificity also display a relationship that varies with different thresholds. To assess the robustness of a predictive model's generalization performance, it is crucial to achieve a balanced mix of these metrics. As a result, receiver operating characteristic (ROC) curves and precision-recall curves are frequently utilized to examine these relationships [17,18]. The area under these curves is an excellent measure for model evaluation. However, these metrics are only applicable to statistical models that can calculate predicted probability values. They were not used in the Ames/QSAR international challenge projects due to their inability to evaluate knowledge-based models. Instead, balanced accuracy (BA), which is the average of sensitivity and specificity, was used as a comprehensive metric to evaluate these two parameters, replacing the ROC curve [19][20][21][22]. BA is calculated using a specific formula:

BA = (Sensitivity + Specificity)/2 = {TP/(TP + FN) + TN/(FP + TN)}/2

Indeed, the formula for BA combines sensitivity (the true positive rate) and specificity (the true negative rate), providing a single measure that encapsulates the model's performance across both positive and negative cases in the dataset. BA is particularly insightful because it reflects the model's accuracy in identifying classes, regardless of the size of each class in the sample. This makes BA an ideal metric for assessing the overall performance of a model, especially in situations where data classes are imbalanced. Furthermore, the interpretability of this indicator is excellent, offering a clear understanding of model accuracy in a balanced manner.
PPV, also known as precision, represents the proportion of items predicted as positive that are actually positive. On the other hand, sensitivity (or recall) signifies the proportion of actual positive items that are correctly identified as such. The F1 Score, which is the harmonic mean of PPV and sensitivity, is commonly used for a comprehensive evaluation of these two metrics [23,24]. It is important to note that sensitivity and PPV are also referred to as recall and precision, respectively. The F1 Score is calculated using the following formula:

F1 = (2 × Recall × Precision)/(Recall + Precision) = 2 × TP/(2 × TP + FP + FN)

In the second Ames/QSAR International Challenge Project [6], the F1 Score was introduced as an additional evaluation metric, supplementing those used in the first project. The F1 Score is particularly beneficial in achieving a balance between PPV and sensitivity. A higher F1 Score indicates a superior model, as it represents a strong balance between predictive precision and recall. This is particularly important in scenarios such as Ames test predictions, where accurately detecting positives and minimizing false positives are equally important.
The MCC is another metric that is related to the chi-square statistic in a 2 × 2 contingency table. It incorporates a significant amount of information by considering the balance ratio of the four categories in the confusion matrix: TP, TN, FP, and FN [25,26]. The MCC is expressed as follows:

MCC = (TP × TN − FP × FN)/√{(TP + FP) × (TP + FN) × (TN + FP) × (TN + FN)}

Indeed, BA, the MCC, and the F1 Score each have their own unique statistical properties. However, they all serve as valuable integrative indicators for evaluating models, particularly when dealing with imbalanced data. In this study, a novel approach was adopted to streamline the performance evaluation of the predictive models. This approach involved combining these diverse indices using principal component analysis. This method offers a more consolidated and definitive evaluation metric, thereby simplifying the otherwise complex task of model assessment.
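To make the behavior of these indices concrete, the short sketch below computes Acc, BA, F1, and the MCC from a single hypothetical confusion matrix with roughly 15% positives, mirroring the imbalance of the validation sets; the counts themselves are invented.

import math

# Hypothetical confusion matrix for an imbalanced set (about 15% positives).
tp, fn = 90, 60    # 150 true positives in the data
tn, fp = 800, 50   # 850 true negatives in the data

acc = (tp + tn) / (tp + tn + fp + fn)
sens = tp / (tp + fn)                       # sensitivity (recall)
spec = tn / (tn + fp)                       # specificity
ppv = tp / (tp + fp)                        # PPV (precision)
ba = (sens + spec) / 2
f1 = 2 * ppv * sens / (ppv + sens)
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)

# Acc looks flattering on imbalanced data, while BA, F1, and MCC are more informative.
print(f"Acc={acc:.3f} BA={ba:.3f} F1={f1:.3f} MCC={mcc:.3f}")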
Principal Component Analysis
The principal component analysis (PCA) conducted in this study revealed a strong consolidation among BA, the MCC, and the F1 Score.These metrics accounted for 94% of the variance in the same principal component direction (Figure 3).This pattern suggests a significant collinearity among these metrics.Indeed, the correlation coefficients between these evaluation indicators showed strong correlations, ranging between 0.859 and 0.943 (Figure 2).This finding highlights the utility of the PC1, which integrates these metrics, as a common and definitive indicator of integrated predictive performance.
A crucial aspect of this study was the quantitative correlation analysis conducted to assess the impact of the COV on the evaluation indicators BA, MCC, F1, and PC1.Correlation analysis using Pearson's correlation coefficient is generally most reliable when the variables under consideration follow a normal distribution [27].However, when the variables deviate from normality, there is an increased risk of the analysis being influenced by outliers or an overestimation of the degree of correlation.Moreover, if the assumption of normal distribution is not met, the accuracy in determining significance levels may be compromised.Therefore, it is crucial to verify the normality of each dataset before performing correlation analysis.Upon checking the normal distribution of these parameters using normal quantile plots, a distribution heavily skewed toward the COV was observed.As a result, the Johnson normal distribution method [11,12] was employed to correct the COV distribution to a normal form, achieving effective normalization.
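A minimal sketch of this normality check, using scipy and matplotlib in place of the normal quantile plots produced in JMP, might look as follows; the COV values are hypothetical.

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical, heavily right-skewed coverage rates.
cov = np.array([0.55, 0.70, 0.82, 0.90, 0.94, 0.97, 0.99, 0.995])

# Normal quantile (Q-Q) plot: systematic curvature signals departure from normality.
stats.probplot(cov, dist="norm", plot=plt)
plt.title("Normal quantile plot of COV")
plt.show()

# Shapiro-Wilk test as a complementary numerical check of normality.
w, p = stats.shapiro(cov)
print(f"Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")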
Impact of COV on Metrics
The correlation analysis conducted between the normalized COV and the comprehensive evaluation indicators BA, MCC, F1 Score, and PC1 unveiled statistically significant negative correlations across all metrics. This finding implies that models with better predictive performance are likely to have lower COV settings. Although each participant determined the COV settings using their own techniques, it is still possible to estimate the standard influence of COV across all models from the slope of the least squares line.
Following this, we used the slope of this linear relationship and the residuals from each evaluation index to estimate the values of these indices, assuming a COV of 100% for each model.This method effectively shifts the evaluation values of each model along the slope of the straight line to a point where the COV equals 100% (Figure 4).As a result, this calculation allows for a correction of all models to the evaluation indices at 100% COV, thereby enabling a fair comparison and evaluation across all models (Figure 4, and Table 1).
Best Models
The analysis, adjusted to a 100% COV for all models (Table 1 and Table S1), unveiled that the most efficacious predictive model, utilizing the integrated comprehensive index PC1, was MMI-STK1.This particular model, submitted by MPU during the second project, demonstrated superior performance.It is noteworthy that MMI-STK1's training data exclusively encompassed the Phase 1, 2, and 3 datasets from the NIHS.For an in-depth understanding of the methodology contributing to its excellence, the algorithm employed for MMI-STK1 is elucidated in the supplementary file of the report [6]: "Ninety-nine stacking models were constructed using multiple descriptors (Dragon, MOE, Mordred) and various machine learning algorithms (light GBM, XG-Boost, deep learning, graph convolutional network).The ultimate prediction was determined through a majority vote from all prediction results.The descriptors utilized were computed through Dragon, MOE, and Mordred".
This analysis identified MMI-VOTE1 as the second most effective model. Like MMI-STK1, MMI-VOTE1 was submitted by MPU during the second project and exclusively trained on NIHS-provided Phase 1, 2, and 3 data. The algorithmic approach for MMI-VOTE1 is extensively detailed in the supplementary file of the report [6]. This supplementary information provides a comprehensive insight into the methodologies and principles that underscore the success of MMI-VOTE1: "Nine stacking models were developed utilizing a combination of descriptors (Dragon, MOE, Mordred) and diverse machine learning algorithms (light GBM, XG-boost, deep learning, graph convolutional network). The final prediction was determined through a majority vote based on all prediction results. Descriptors were computed using Dragon, MOE, and Mordred".
The third most effective predictive model identified in this study is MMI-STK2, submitted by MPU during the second project.In contrast to previous models, the training data for MMI-STK2 included not only the Phase 1, 2, and 3 data provided by the NIHS but also Hansen's data.The detailed algorithmic approach for MMI-STK2 is expounded upon in the supplementary file of the report [6]."MMI-STK2 is a stacking model constructed using Light GBM, deep learning, and graph convolutional network algorithms, with descriptors calculated using Dragon and MOE".
The fourth best-performing model identified in the analysis was MMI-VOTE2, another model submitted by MPU during the second project.Similar to MMI-STK2, the training data for MMI-VOTE2 not only included the Phase 1, 2, and 3 data from the NIHS, but also incorporated Hansen's data.More detailed information about the algorithm used for MMI-VOTE2, including its approach to integrating these diverse datasets, is available in the supplementary file [6]: "MMI-VOTE2 is a majority voting model constructed using Light GBM, Deep Learning, Random Forest, and graph convolutional network algorithms.The descriptors used were calculated using Dragon, MOE, and DNA docking simulations".
In this project, MPU registered the four types of models previously mentioned.Impressively, all these models ranked as the top performers among the 109 models evaluated in this study.These models did not undergo adjustments to their coverage rate by altering their applicability domain.Given that setting a model's applicability domain is a highly technical process and varies significantly from model to model, it is reasonable to infer that the estimated values of the metrics used for evaluating prediction models at COV levels other than 100% might contain considerable errors.Despite this, the results mentioned above demonstrate the current technical capabilities of Ames prediction, underlining both the advances and potential limitations in this field.
Effect of Coverage on the Best Model
For MMI-STK1, which was identified as the top-performing model in terms of MCC, F1, and the integrated index PC1, we estimated how BA, MCC, F1, and PC1 would change with alterations in the COV (refer to Figure 5).For example, while the F1 Score was at 52.8% with a COV of 100%, it was projected to increase to 59.2%, 61.5%, and 63.1% when the COV was adjusted to 90%, 60%, and 30%, respectively.This trend indicates that the predictive performance of the model can significantly improve even with a 90% COV, which involves excluding only 10% of the compounds from the prediction.This observation implies that the external validation set included a small proportion of compounds that this model found particularly challenging to predict accurately.
These results lead to an important conclusion: when applying the Ames prediction model in real-world scenarios, it is crucial to consider the applicability domain while setting the COV [7][8][9][10].Although the estimated values presented here might contain substantial errors, the performance of the model could be further improved if the COV settings are based on appropriately defined applicability domains.
Evaluation of Adaptive Domain Setting Technology and Future Prospects
This study represents the first instance in the Ames/QSAR Challenge Projects where it has been explicitly shown that the performance of predictive models can be improved by adjusting COV settings.Notably, it also pinpointed the model with the highest predictive performance by taking into account COV.This significant discovery, achieved in a highly competitive environment, highlights the current limitations of QSAR technology in predicting Ames test outcomes.However, the enhanced prediction performance attributed to COV settings depends on the accurate definition of the models' applicability domain.
The performance evaluation at 100% COV presented in this study is essentially a projection based on standard COV settings.With careful consideration of the applicability domain settings, there is potential to exceed these standard performance levels.
While the technology for setting applicability domains-evaluated based on compound similarities and predicted probabilities derived from the models-is advanced, the research on optimal methodologies for setting applicability domains is still in its early stages [7][8][9][10].This reanalysis had limited capacity to assess this specific aspect of model performance.However, as technologies for systematically determining suitable applicability domains for each model advance, a combination of diverse models, like those presented in this project, could lead to improved prediction accuracy.Although this project was primarily a competition evaluating the standalone performance of various models, future enhancements in prediction rates are expected, especially with the application of ensemble and consensus methods based on advanced techniques for estimating applicability domains [28].
Analysis Strategy
The overarching strategy for this reanalysis is outlined in the following steps:
a. Model and value extraction: retrieve all prediction models and their corresponding evaluation values from the Ames/QSAR 1st and 2nd Challenges.
b-1. PCA: perform PCA using balanced accuracy (BA), Matthews correlation coefficient (MCC), and F1 Score to calculate the first principal component (PC1) as a comprehensive evaluation index.
b-2. Normalization of COV distribution: normalize the distribution shape of the compound sample COV.
c. Correlation analysis: conduct a thorough correlation analysis between BA, MCC, F1, and PC1 against COV, and derive the least squares regression line.
d. Estimation at 100% COV: estimate the predictive performance of all models at a COV of 100% using the least squares regression line and considering the residuals of each evaluation index.
e. Identifying the best model at 100% COV: evaluate and determine the best predictive model at a COV of 100%.
f. Performance estimation with varying COV: estimate the predictive performance of the best model across various COV settings.
Data for the Analysis
The analysis encompassed all prediction models and their associated evaluation metrics as documented in both the main texts and supplementary materials of the 1st and 2nd Ames/QSAR International Challenge Projects papers [4,6].In the context of the first challenge, any prediction models sharing the same name but employed in different phases were treated as distinct models for evaluation purposes.This approach acknowledges the potential modifications and adjustments made to the models across various phases.In the second project, the analysis considered all prediction models listed in the supplementary file of the project paper.This comprehensive approach ensured that all the models submitted for the challenge were taken into account in the analysis.
Evaluation Index
In the Ames/QSAR International Challenge Projects, compounds underwent categorization into three classes based on the Ames test results [4,6]: Class A (strong positive): compounds inducing over 1000 revertant colonies per milligram in at least one Ames test strain, with or without metabolic activation.
Class B (positive): these compounds caused a minimum 2-fold increase in revertant colonies compared with the negative control, but less than those induced by Class A compounds, in at least one Ames strain with or without metabolic activation.
Class C (negative): defined as compounds indicating less than a 2-fold increase in revertant colonies (non-mutagenic).
Challenge participants were tasked with submitting results identifying positive compounds (Classes A and B) and negative compounds (Class C) through either existing or newly developed predictive models.The organizers then computed various metrics, including sensitivity for Class A, sensitivity, specificity, accuracy, BA, MCC, and F1 Score, based on these predictions.As mentioned above, since sensitivity and specificity are in a trade-off relationship, observing only one evaluation index might lead to incorrect interpretations [17][18][19][20][21][22].In addition, accuracy is reportedly an inappropriate imbalanced data evaluation indicator [15,16].Therefore, in this study, BA, MCC, and F1 were chosen as comprehensive evaluation metrics for generalization performance and used for analysis.It is important to note that the F1 Score was not used as an evaluation metric in the first challenge.To ensure consistency, the F1 Scores for all models in the first project were retrospectively calculated from the PPV (precision) and sensitivity (recall), following the method used in the second project (Table S1).The normality of the comprehensive evaluation indicators (BA, MCC, F1) and the coverage ratio (COV) was assessed using normal quantile plots and their respective 95% confidence intervals.Subsequently, the evaluation indicators (BA, MCC, F1) underwent PCA to calculate the PC1.In addition to these measures, this study also focused on evaluating the impact of COV fluctuations on model performance.
Evaluating the Impact of Coverage on Predictive Performance
The COV was initially normalized using the Johnson normalization method [11,12].Following this, a correlation analysis was performed between the normalized COV and the comprehensive evaluation indicators: BA, MCC, F1 Score, and the PC1.This analysis did not include models that were already reported to have a 100% COV.Based on the results of this correlation analysis, especially the residuals from the least squares regression line, this study estimated the evaluation indices of all predictive models at a COV of 100%.In addition, this study explored estimating the predictive performance of the model identified as the best under different COV conditions.Specifically, the evaluation index for each model at 100% COV was calculated from the equation of the least squares line between the normally distributed COV and BA, MCC, F1, or PC1, and the residuals of each model from those lines.
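A simplified sketch of this residual-based correction is shown below. It fits the least squares line of an evaluation index against the normalized COV, takes each model's residual from that line, and shifts every model along the slope to a common reference value of the normalized coverage. The data and the reference value standing in for 100% COV are hypothetical placeholders, not the published figures.

import numpy as np

# Hypothetical Johnson-normalized COV values and BA scores for models with COV < 100%.
cov_norm = np.array([-1.3, -0.7, -0.2, 0.3, 0.8, 1.4])
ba = np.array([77.6, 76.1, 74.9, 74.0, 72.7, 71.2])

# Least squares line BA = b0 + b1 * cov_norm (a negative slope is expected).
b1, b0 = np.polyfit(cov_norm, ba, 1)

# Residual of each model from the fitted line.
residuals = ba - (b0 + b1 * cov_norm)

# Shift all models along the slope to a common reference point on the normalized
# scale (a placeholder standing in for full coverage).
ref = 2.0
ba_at_full_cov = b0 + b1 * ref + residuals

print(np.round(ba_at_full_cov, 1))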
Statistical Test
The normality of each evaluation index was confirmed by examining the 95% confidence intervals illustrated in normal quantile plots.This step ensured that the data adhered to the assumptions necessary for subsequent statistical analyses.Pearson's correlation coefficient was utilized to assess the relationship between various evaluation indices.A significance level of 0.05 was established to determine the statistical significance of the correlations, following standard practices in statistical testing.PCA was carried out using a correlation coefficient matrix.All statistical analyses were conducted using JMP Pro version 16.2, software developed by SAS Institute Inc., Cary, NC, USA.
Conclusions
This study is the first in the Ames/QSAR Challenge Projects to demonstrate that the performance of predictive models can be improved by adjusting the COV. It identifies a model with exceptional predictive performance when COV is taken into account, thereby showcasing the current state of the art in QSAR prediction of Ames test outcomes. This study suggests that, with accurate settings of the applicability domain, model performance could be improved further. While the existing technology for setting applicability domains is sophisticated, research in this field is still in its early stages. This study points to the potential for enhanced prediction accuracy through the integration of diverse models and anticipates future advancements with the application of ensemble and consensus methods, particularly in the estimation of applicability domains.
Figure 1 .
Figure 1.Shape of the performance evaluation index and coverage rate distribution for predictive models.The terms BA (%), MCC, COV (%), Johnson Sb[COV (%)], and Johnson Sb[COV (%)] without COV 100% represent various evaluation metrics for the prediction models.Specifically, BA (%) refers to the balanced accuracy, MCC stands for Matthews correlation coefficient, and COV (%) denotes the coverage rate for all predicted target compounds.Johnson Sb[COV (%)] represents the Johnson normalized coverage rate, while Johnson Sb[COV (%)] without COV 100% signifies the Johnson normalized coverage rate, excluding the prediction models with a 100% coverage rate.Prediction models that had 100% coverage are highlighted in dark.Out of the 109 types of prediction models evaluated in this study, 33 models had a coverage rate of 100%.These normal quantile plots are shown as dots and red lines.
Figure 2 .
Figure 2. Correlation between the performance evaluation indicators for the predictive models.The analysis incorporated a total of 109 predictive models.Models with 100% coverage are represented in gray, while the other models are depicted in black.The terms BA (%) and MCC refer to balanced accuracy and Matthews correlation coefficient, respectively.
Figure 3 .
Figure 3. Principal component analysis of the performance evaluation indicators for the predictive models.The analysis incorporated a total of 109 predictive models.The figure displays a superimposed biplot of the score plot and loading vector.The terms BA (%), MCC, PC1, and PC2 refer to balanced accuracy, Matthews correlation coefficient, first principal component, and second principal component, respectively.
Figure 4 .
Figure 4. Correlation between the coverage rate and the performance evaluation index for predictive models.The terms BA (%), MCC, PC1, and Johnson Sb[COV (%)] represent the balanced accuracy, Matthews correlation coefficient, the first principal component, and Johnson normalized coverage, respectively.In the figure, prediction models with 100% coverage (33 species) are depicted in gray, while the other models (76 species) are shown in black.The analysis only includes prediction models with less than 100% coverage.The red line in the figure represents the least squares regression line, providing a visual representation of the correlation.
Figure 5 .
Figure 5. Estimation of performance evaluation index by coverage rate for the top prediction model.This figure illustrates the estimated variation in performance due to changes in the coverage rate of MMI-STK1 (Meiji Pharmaceutical University), which was identified as the highest-performing model based on the first principal component (PC1) as well as Matthews correlation coefficient (MCC) and F1 Score.The terms BA (%) and COV (%) denote the balanced accuracy and coverage rate for all predicted target compounds, respectively.
Table 1. This table lists the top 20 prediction models performing best at a 100% coverage rate, as estimated by the first principal component (PC1). The terms BA (%), MCC, and PC1 refer to balanced accuracy, Matthews correlation coefficient, and first principal component, respectively. The higher the performance in each evaluation metric, the darker the red color shown.
Challenges and Opportunities for Climate Change Education (CCE) in East Africa: A Critical Review
: It is undoubtedly clear that climate change is happening, and its adverse impacts could reverse the progress made toward meeting sustainable development goals. The global crisis poses one of the most severe challenges to reducing poverty and existing inequalities, especially in developing countries that are projected to be highly vulnerable to climate variability. However, the education sector provides an untapped opportunity for successful climate change adaptation and mitigation through knowledge and skill acquisitions, and consequently, positive behavioral change. Specifically, education can capacitate individuals and communities to make informed decisions and take practical actions for climate-resilient sustainable development. This study is focused on East Africa, a region whose economy heavily relies on climate-dependent activities. At present, East African governments are already embedding climate change in their school curriculum. However, they lack coherent approaches to leverage climate change education as a tool in their adaptation and mitigation strategies. Therefore, this review explores some of the critical barriers to climate change education and possible opportunities for leveraging learning to promote sustainable development in East Africa.
Introduction
Climate change is arguably one of the most pressing global issues that have long-term implications for all countries' sustainable development. From increasing shifting weather patterns that threaten food security to rising sea levels and extreme rainfall that cause catastrophic flooding, climate change impacts are wide-ranging and unprecedented in scale. Recent research warns that such extreme climate-related events could be worse than predicted in the near future [1][2][3]. At the root of the climate variability is global warming [1], mainly attributed to carbon-intensive industrialization and associated population growth, a key element in the current capitalist world that hampers green development initiatives. De Souza, et al. [4] assert that developing nations are particularly vulnerable to the impacts of climate change, even though they do not share the same burden of responsibility for global warming as the global north. Indeed, data from the Intergovernmental Panel on Climate Change (IPCC) support this argument, with the recent decadal analyses strongly pointing to increased warming trends across the African continent over the last 50 to 100 years [2].
In response to the climate change challenges mentioned above, nations have signed various global treaties to tackle the adverse impacts of climate change. The first agreement was the United Nations Framework Convention on Climate Change, which was established during the Earth Summit in 1992 to prevent dangerous human interference in the climate system [5]. The Kyoto Protocol, which launched negotiations to strengthen the global response to climate change, followed, with its first commitment period running from 2008 to 2012. While the sum of emissions from countries that observed the Kyoto targets fell significantly,
Contextual Background
The East African region comprises the countries of Djibouti, Kenya, Ethiopia, Tanzania, Rwanda, Sudan, South Sudan, Uganda, Eritrea, Burundi, and Somalia. According to the United Nations' latest data, East Africa's current population is around 400 million, and this number is estimated to double by 2050 [9]. Verburg, et al., Addaney and Weisser, et al. [10][11][12] suggest that such a high population amplifies climatic variations by putting pressure on existing natural resources, thus leading to environmental degradation, increased conflicts, food insecurity, and high poverty levels as resources become scarce. Marchant and Lane [13] also point out a strong historical connection between increased human interventions and the region's ecosystem, which significantly shapes its economic and social development. Indeed, direct climate change impacts are already observed on climate-dependent activities such as agriculture, which accounts for 43% of the gross domestic product (GDP) of East Africa and supports the livelihoods of 80% of the region's general population [14]. The following section will discuss the key vulnerability sectors in East Africa in detail.
Key Sector Vulnerabilities to Climate Change
This section will discuss the key sectors in East Africa that are most vulnerable to climate change. They include food security, water resources, biodiversity, human health and extreme weather events.
Food Security
One of the most widespread and devastating impacts of climate change in East Africa is food insecurity, with projections of frequent emergencies and famine as shown in Figure 1. As the UN Food and Agriculture Organization (FAO) reports, there is a strong link between climate change and East African food insecurity due to the shifts in growing seasons, interposed with increased droughts and floods that destroy food crops [15]. There has been a decline in the long rainfall season between March and May, and the progressive moisture deficit has resulted in decreased crop yield of long-life grains, such as maize, across the region [16]. Consequently, the low production of maize, which accounts for 13.1% of daily calories per capita in Burundi, 19.5% in Ethiopia, 9.3% in Uganda, 25.7% in Tanzania, and 33.3% in Kenya, significantly affects the available food supply [14]. Climate change also affects East Africa's fisheries, with many tropical fish such as tilapia unable to survive increasing temperatures beyond their thermal maxima, thus affecting access to affordable food for the majority of populations along the lakesides [17].
Water Resources
Climate change has impacted the frequency, intensity, and predictability of precipitation in East Africa. IPCC [18] projects that the region will experience a 20 percent decrease in rainfall in the dry seasons by 2050. Thus, such changes in precipitation affect water availability and quality in East Africa's lakes and rivers that support the health and livelihoods of millions of people [19] while underpinning hydropower production and agricultural security [14]. Not only are these changes not uniform, but they also occur in widespread, unpredictable events. Accordingly, there have been abnormally high amounts of precipitation events across the equatorial part of East Africa, especially in the already wet seasons, that add to erosion and complicate water management issues (ibid.). Less precipitation in the already dry season also enhances frequent and severe droughts and, ultimately, water scarcity [19]. Indeed, water availability for human consumption is of great concern, with Grasham, Korzenevica and Charles [20] indicating that two-thirds of rural dwellers and a quarter of urban dwellers in East Africa lack access to clean water. The rising sea levels also enhance salt-water intrusion into river deltas and aquifers, thus threatening the availability of fresh water.
Human Health
Climate variability also significantly affects human health through heat stress, air pollution, water-borne diseases (typhoid, diarrhea, cholera) and vector-borne diseases (malaria, dengue fever) [21]. While other factors such as health preparedness and topography influence the spread of disease, scientists have established that high temperatures and intense rainfall provide critical breeding environments for the mosquitoes causing malaria [22]. Consequently, malaria cases have been increasingly reported, especially in the highland parts of Kenya, Tanzania, Rwanda, Ethiopia, and Uganda [23]. The authors argue that the highlands, previously the cooler areas of the region, had not been susceptible to malaria until climate change intensified. The number of people exposed to the disease is expected to double by 2070 (ibid.). Onyango, et al. [22] further assert that the cost of household expenditure for malaria treatment is highest in East Africa. Moreover, Rift Valley fever epidemics are also a great threat to human health, and Bryson, et al. [23] report that they mostly occur during the extremely wet seasons in East Africa. Hence, such examples evidence that climate variability places a high burden of disease on East Africans.
Extreme Weather Events
Extreme weather events such as droughts, floods and wildfires are now frequent events across East Africa, with their impacts varying across the region. For example, Wainwright, et al. [24] report that the recent short rain season of 2019 in East Africa was the wettest; it caused massive landslides and floods that affected approximately 2.8 million people. According to Figure 2, flooding further affected millions of East Africans in 2020, leaving many populations in need of humanitarian aid.
Climate variations also cause intense droughts that result in widespread famines and conflicts between humans and wildlife as they fight for scarce water resources [19]. In Sudan, for example, more than half of the country is a desert or semi-desert, and the decreased precipitations resulting from climate change have caused more desertification and other forms of land degradation. Regional lake level fluctuations as observed in Lake Victoria in Uganda, Kenya, Tanzania, and Burundi also enhance flooding and disrupt economic activities such as fisheries and tourism along the lakes [25]. Similar impacts occur due to sea-level rise, causing loss of coral reefs and mangroves, and ultimately coastal erosion along the Indian Ocean in Kenya and Tanzania [26].
Biodiversity
Sintayehu [27] points out that Africa remains one of the most under-studied continents regarding ecosystem dynamics and climate change. However, the impacts of climate variability on the region's rich biodiversity are already being felt. Historically, climate change has resulted in dramatic shifts in the geographical distributions of species and ecosystems in East Africa, particularly after the post-glacial period [21]. Extreme temperature rise combined with other stresses disrupts species' habitats and their co-existence. Maitima, et al. [28] suggest that East Africa is particularly vulnerable to invasive and exotic species colonization due to its sensitive fauna, resulting in numerous localized extinctions. Additionally, plant species that cannot keep up with the climate shifts, such as the shrub savannahs in East Africa that are highly sensitive to short-term water availability, are declining [27]. Climate change is also likely to alter species migration routes, such as the wildebeest migration from Kenya to Tanzania, leading to population decline.
Climate Change Response in East Africa
To respond to the said phenomena, the East African Community developed a Climate Change Policy (EACCCP) in 2009 to implement measures to improve the region's adaptive capacity and resilience to the negative effects of climate change [29]. Most of the adaptation priorities in the policy tend to focus on livelihoods, energy, forests, agriculture and food security, disaster response, transport, and coastal zones. However, Price [30] argues that this vertical focus on specific sectors rather than horizontal cross-cutting linkages limits policy coherence. Apart from the policy, the region also established the Eastern Africa Climate Smart Agriculture Platform (EACSAP) in 2014, which seeks to promote agricultural productivity, adaptation, and resilience to climate change through technological innovation (ibid.). Additionally, East African countries have developed national climate change strategies, which are at various implementation stages. Table 1 illustrates country-specific climate policies in detail and the action points for their implementation [31]. East African governments are also collaborating with universities to train climate change researchers and professionals, even though there is a lack of data on completed and ongoing projects and their impacts. In addition, regionally, the Inter-governmental Authority on Development (IGAD), which works across eight East African countries, established a climate prediction and application center that tackles climate-related security risks [30].
Non-governmental organizations across East Africa have also not been left out in the fight against the climate crisis. Various institutions are banding together cooperatively to conserve depleting resources and protect vulnerable populations' livelihoods. For instance, in Tanzania, conservation groups promote the protection of mangroves and coastal areas through reforestation with climate-smart species, integrated land-use planning, and resource use technology [26]. In Sudan, the organization Practical Action introduced cleaner energy options, such as improved cooking stoves, to rural women to reduce dependence on firewood fuel, thus improving people's health and reducing the burden on the environment [32]. The Sudanese government also recently signed a green climate fund agreement with the United Nations Development Fund to support its citizens with climate-resilient water and food security.
Despite these efforts, Addaney [11] observes that climate change in Africa is still regarded as a technical problem that requires specialized solutions, while Orindi and Murray [21] assert that East African countries continue to treat climate change separately from their broader development agenda. Perhaps the governments feel a more urgent need to tackle pressing poverty challenges than climate change. In this regard, the predicted future climate change impacts, combined with a host of adaptation challenges arising from economic constraints and a backdrop of governance and land tenure issues, would severely affect the region's sustainable development [2,11,31]. Weisser, et al. [12] maintain that adaptation to climate change should not merely focus on new activities but instead strengthen existing livelihood coping strategies through knowledge empowerment and skills sharing. For this reason, Verburg, et al. [10] suggest coordinating efforts between the government, the private sector, civil society, and community members to promote climate change education and innovation in East Africa for maximized implementation of the existing strategies.
The Role of Education in Climate Change Management
While the policies mentioned above and climate agreements have increased attention to curbing the negative impacts of climate shocks [11,33], the current efforts generally overlook education's role in equipping people with skills to deal with uncertain environmental futures. As the literature reveals [33][34][35][36][37], educating people about climate change is a vital measure to persuade populations at all community levels, including school children, farmers, and the general population, to play an active role in mitigation and adaptation action. Nikendei, Cranz and Bugaj [38] further reiterate that teaching students about climate change as a scientific issue of social importance prepares them for democratic participation as they decide on and adopt positive environmental behaviors. Wachholz, Artz and Chene [39] emphasize the importance of empowering college students with the right information about global climate change, while Ochieng and Koske [40] suggest that primary and secondary students also need climate knowledge, as they will have to make climate policy decisions. Thus, they need to have an informed perspective.
Consequently, the role of multi-disciplinary research in the pressing climate change issue is beyond doubt [36]. It is time to actively consider knowledge creation and skill development via education to tackle climate change rather than addressing it from a policy viewpoint that entirely leaves governments with the sole responsibility of developing strategies to adapt to its impacts. Reid [41] posits that research for climate literacy is vital for producing citizens who understand the climate system and can utilize that knowledge in their engagements as active community members. Wangari Maathai, a global climate activist, also highlighted the fact that "You cannot protect the environment unless you empower people, you inform them, and you help them understand that these resources are their own, that they must protect them" [42] (p. 1). Hence, both formal and informal climate change education plays a critical role in empowering populations to actively understand climate science and to develop the skills to adapt and respond to climatic shocks [33].
Additionally, climate justice movements, which garner increased attention in all forms of media, also offer much-needed informal education on the pressing issue [43]. From the Climate Strikes by Greta Thunberg to Idle No More and many activist projects that range in scale, these movements make visible the links between the environment and social justice [44,45].
Climate Change Education in East Africa
It is evident that the continued extraction of natural resources for economic development is geographically and socially uneven, and so are the effects of climate change [4]. East Africa is one of the regions severely experiencing the adverse impacts of a changing climate due to its dependence on rain-fed agriculture and tourism as the backbone of its economies, sectors that are highly vulnerable to extreme climatic variations. From the recurring droughts in Djibouti, Sudan, and Ethiopia to the floods in Somalia, these climate-related events have exacerbated food insecurity and human conflicts resulting from a scarcity of natural resources, and have ultimately forced migrations within the region [10,16,31]. Despite these intensifying phenomena, the impacts of climate change are still inadequately addressed across East Africa.
Assessing the role of education in addressing climate change issues in the region is of great importance. In Kenya, for example, most citizens are concerned about food insecurity and drought, a common phenomenon in the agriculture-dependent economy [46]. However, there is still some misinformation about the causes of climate change among the general population [47]. A study carried out among primary school teachers in western Kenya to assess their perception of climate change showed their concern about climate change threats and existing challenges; still, they lacked knowledge of clear strategies for climate mitigation and adaptation in their communities [40]. Silvestri, et al. [47] also report a lack of adaptation knowledge as a behavioral barrier preventing some agricultural communities from responding to climate change effectively.
While there are broad and enduring questions regarding climate change education that this paper may not answer entirely, it will examine some key challenges and opportunities of climate change education in East Africa. Notably, the research will highlight the role of climate education in the region, who the critical players are, and the existing challenges. It will also highlight the opportunities to reinforce the formal and informal knowledge base to create more comprehensive climate change awareness, adaptation policies, and implementation strategies that target all key stakeholders.
Analysis and Findings
This section presents key findings from the literature on the challenges for climate change education in East Africa to make recommendations for future action. We identify opportunities to overcome the barriers to action, and we describe approaches for addressing classroom and informal climate change education to promote behavior change by the general public. Lastly, we address the limitations of the literature review and summarize the critical steps of educating communities to be more resilient in the face of climate change.
Due to the broad perspective of the research topic, we adopted a narrative review involving a basic key term search to examine the challenges and opportunities for climate change education in East Africa. The material was derived from a comprehensive literature search of Web of Science and Google Scholar. The databases were searched using different permutations and combinations of key terms such as 'climate change,' 'climate variability,' 'environmental change,' 'climate change impacts,' 'education,' 'awareness,' 'East Africa,' 'Uganda,' 'Ethiopia,' etc. Since the study captured the diverse issues of climate change in the different country contexts in East Africa, the only typical inclusion and exclusion criteria were based on the dates of publication, originality, and language. The search covered full-text articles published from 2005 to 2021 to ensure that the issues discussed were current. We reviewed articles' abstracts to eliminate those that did not meet the goal and objectives of the study. After applying the above criteria, an estimated 50 studies remained for in-depth content analysis. We also checked the reference lists of relevant articles to ensure no relevant published articles on climate change education since 2005 were missed. Such flexibility in the literature used was critical to identifying the gaps and opportunities for climate change education in the region. To evaluate the different climate change policies in East African countries and their relation to education, a Google search of government and United Nations websites was conducted. We generated a table of the countries that have established climate change policies and strategies. On that basis, certain themes regarding the challenges and opportunities for climate change education in East Africa were developed and elaborated accordingly.
Challenges of Climate Change Education
The literature indicates that nations in East Africa are increasingly integrating climate change in their education curriculum due to the global crisis's mounting awareness [48,49], and with such integrations come many challenges to teach the topic. These include the need to ascertain the role of the educator, grappling with misconceptions, complexities of interdisciplinarity and understanding the content of climate change education.
Ascertaining the Role of the Educator
Firstly, it is not clear whether educators' role is limited to conveying climate science facts or extends to climate justice, which entails empowering their students with problem-solving skills to implement climate change projects within their communities [40]. Climate science teachers lack clear guidance on how and which aspects of climate change to pursue, and similar trends are evident globally. Berger, Gerum and Moon's [35] findings on bachelor of education programs in a university indicate that the teachers were not trained in effective ways to communicate about climate change. Many study participants also reported being reluctant to teach the topic because of its political nature, a common controversy even in East Africa. Educators' aggressive climate actions within their localities may also cause conflicts with community members who are keen to protect their identities, as the topic of climate change greatly resonates with people's held values and ways of life [37]. For instance, there are conflicting interests and controversies concerning hazardous waste management; key decision-makers may feel that green development hampers productivity. Their neoliberal ideas can over-rely on natural resources, leading them to recall information that reinforces their judgments [33]. Wise [50] also reports teachers' concerns about parents' responses regarding climate change, making them hesitant to teach the topic. They believe that teaching climate change in their communities could decrease their credibility and effectiveness [37]. Such perceptions are replicated in East Africa, where parents uphold exam-oriented learning for their children rather than teaching climate change topics, which are normally not tested in national examinations. Accordingly, the complexity and uncertainty of climate change require careful thought and attention in teaching, hindering practical and strategic education about this global crisis.
Grappling with Misconceptions
The abounding misconceptions about the causes and effects of climate change among young people [39], local communities [34], and educators [35,40] also present key barriers to conveying accurate climate science information through school programs and informal avenues. The study by Afifi et al. [51] on climate change perception among refugees in East Africa showed a lack of awareness, with many refugees blaming bad governance and conflict for the climatic problems. In Sudan, farmers acknowledged that the weather was increasingly changing; however, they reported economic challenges such as poverty, rather than land-use changes, to be a major cause of climate change [52]. For this reason, the farmers believed they had no capabilities to adapt to the climate phenomena. Similarly, Huho [49] asserted that many young people in Kenya believe that climate change is a problem that only affects farmers; thus, they find no value in learning about it. Despite the few studies, there is still inadequate research on climate knowledge and awareness among vulnerable populations, narrowing the appropriate educational strategies to tackle the existing gaps [47]. Further, the very design of climate education within the school curriculum and the compartmentalization of knowledge also pose a significant challenge for practical learning and climate change adaptation. Huho [49] reports that teaching climate change is commonly left to science teachers within the school setup, as is the replication of the syllabus across many schools in East Africa, limiting collective action within school programs and the general population.
Complexities of Interdisciplinarity
Bangay and Blum [33] assert that most climate education materials focus on knowledge transfer without considering the content of climate education and how it interacts with other cross-cutting issues within communities. In Eastern African countries where population displacement is frequent, such as South Sudan and Somalia, climate change is closely linked to land conflict and migration [12]. Thus, the lack of integration of such local challenges into climate science makes it impractical for many students and vulnerable communities in East Africa. The interdisciplinary nature of climate change also translates to multiple knowledge gaps for most teachers due to the rapidly emerging scientific, political, ethical, and economic data [35]. This knowledge gap, linked with a lack of adequate content knowledge (perhaps because most teachers did not learn about climate change in their own schooling [36]), means they may avoid teaching the topic. Hence, the lack of multidisciplinary teaching on climate change topics within the school syllabus eliminates fundamental aspects and successful mitigation actions to respond to climate change impacts. According to Nikendei, Cranz, and Bugaj [38], there is a need to teach climate change across all disciplines to address the many complexities of climate variability while providing opportunities to recognize social and scientific aspects. Pruneau, Khattabi, and Demers [34] also propose experiential teaching that fosters critical thinking and empowers students to reimagine different futures and develop the capacity to act on the climate crisis globally.
Understanding the Content of Climate Change Education
Certain climatic and environmental notions are also numerous and complex; thus, non-specialist educators and the general population may find it challenging to understand all the elements of the global problem. Pruneau, Khattabi, and Demers [34] point to the intricacy of climate variability that results from the immense web of causal links between interdependent factors that affect each other in several ways. Similar complexities are also evident in East Africa, where high population growth is linked to climate change, which translates into food insecurity and displacement, further concentrating high populations in certain areas, and vice versa [12]. For these reasons, people view climate change as a set of complex problems, which could keep citizens from teaching and learning the topic. Besides, some present-day impacts of climate change are difficult to perceive, either because they are invisible to the naked eye or occur in remote areas where people have little knowledge of the living conditions. Examples include the rising ocean temperatures along the Indian Ocean coast of Kenya and Tanzania and the extinction of animal species, such as turtles in Somalia [27]. Pruneau, Khattabi, and Demers [34] report that not being able to perceive such problems curtails awareness of the climate crisis and, thus, the experiential learning that can enhance climate justice.
Opportunities for Climate Change Education
Irrespective of existing challenges, the literature captures a number of opportunities in East Africa that can be maximized for climate change education. Among these are overwhelming support from educators, governments' commitment, and the presence of Indigenous knowledge systems.
Overwhelming Support from Educators
Teachers' overwhelming support of climate change education, despite reports that very few have received training and that they lack adequate resources to teach the topic [33,36,37,49], presents opportunities to empower them with the right knowledge about the matter. Ochieng and Koske [40] also posit that most teachers believe that climate change should be taught in schools, including individual actions and policies to deal with the climate crisis. Hence, it is essential to train educators on the critical principles of climate literacy and the scientific consensus on the causes of climate change so that they can effectively enlighten their students about the global problem [36]. Huho [49] further asserts that with the right support and developed climate change education content, populations can gain the right knowledge and dispel the many misconceptions about climate change. Moreover, having accurate information empowers vulnerable communities to take the most effective actions to mitigate the impacts of climate change. Nikendei, Cranz, and Bugaj [38] suggest that teaching should not focus solely on explaining climate science but also create awareness of the impacts of climate change on humanity and the environment. To this end, schools can collaborate with youth movements and local organizations in East Africa that share positive messages to improve climate-related action and behavior change among local communities. For instance, Mukwano Industries, one of the largest companies in Uganda, invests in tree-planting initiatives with smallholder farmers as part of its corporate responsibility [32]. However, some of these farmers may not have accurate information concerning climate change and agriculture; thus, educators' partnerships with such organizations can fill the knowledge gap.
Government's Commitment
East African governments' commitment to creating a favorable regulatory and enabling environment to build resilient communities also provides relevant opportunities for educational integration in their established national climate change policies. For instance, Rwanda has an environmental education for sustainable development (EESD) strategy, which aims to create environmental awareness, complemented with ecological topics embedded in all its K-12 curricula to foster eco-friendly attitudes [48]. Such illustrations mean that climate education strategies exist; there is only a need to revitalize climate change information to be personally meaningful to learners while engaging them in planning and mitigation activities [37]. The policies should incorporate adequate funding to train educators and provide resource materials for climate change learning. Ethiopia and Kenya also have well-established agricultural research centers [31] that can be integrated into the higher education learning system to provide much-needed research on climate adaptation in the region [53]. As Henderson et al. [36] propose, educational researchers working within diverse disciplines need to engage with the most pressing issues of climate variability. The research activities should actively involve young people and Indigenous communities to create comprehensive platforms for understanding and addressing climate change. Accordingly, the existence of research institutions within the East African region provides a foundation for educators to incorporate participatory learning that offers holistic and transformative answers for climate change management.
The Presence of Indigenous Knowledge Systems
Moreover, combining Indigenous knowledge with formal research presents unique opportunities to empower vulnerable populations to mitigate and adapt to climate shocks. According to Songok [54], the lack of integration of Indigenous knowledge with scientific climate change response plans excludes local actors' participation in climate education. Speranza et al. [55] also indicate that climate change education curricula in East Africa lack contextual relevance and devalue Indigenous knowledge due to the western teaching hegemony in the education system. Therefore, there is a need to promote locally based knowledge on climate change that communities can easily understand and apply to deal with climate variability [56]. Indeed, despite the arising debates regarding the accuracy of Indigenous knowledge systems as climate change continues in the future, small-scale farmers in East Africa are already utilizing their informal knowledge to adapt [31,46]. For instance, some agro-pastoralist communities in Kenya use various Indigenous knowledge indicators, such as weather variables and environmental factors, such as the shadow of the hills, to monitor climate variability [55]. The authors maintain that even locals with access to formal climate education still consult Indigenous forecasts to plan their farming practices (ibid.). Such actions indicate that agro-pastoralists in East Africa use Indigenous knowledge as the background against which they position and interpret other sources of data for their climate adaptation practices. Moreover, Indigenous practices can improve climate monitoring, especially in rural areas in East Africa with few meteorological stations through which to share and access information. Therefore, it is critical to reinforce formal and informal expertise to create more comprehensive climate change awareness, adaptation policies, and implementation strategies that target all key stakeholders to enhance sustainability.
Recommendations for the Way Forward
Educators clearly need additional training, resources, and time to adequately teach climate change in the classroom [35,36,49]. Educational ministries in East Africa can address this gap through organized training programs that focus on equipping teachers with crucial climate literacy principles. Governments should also provide teachers with resources such as lesson plans to incorporate this material in their classes. Foss and Ko [57] recommend professional development programs designed to help teachers integrate climate change education into their curriculum to ensure successful learning. For their part, educators should also acknowledge the seriousness and risk of climate change impacts within their communities, and hence pass on locally relevant knowledge to their students. For example, teaching young students about rising sea levels within the desert areas of Sudan might not make sense; however, assimilating drought effects could enhance experiential learning. Monroe et al. [37] also emphasize the essentiality of making climate change education meaningful to learners through technology that incorporates personal experiences. Strategies such as immersive virtual lessons enable students to see the long-term and geographically distant impacts of climate change and, consequently, the need to take mitigation actions [57]. An interdisciplinary approach that draws on expertise from various disciplines can also increase students' interest in and understanding of the subject.
Additionally, encompassing climate change in all learning spaces, ranging from formal to informal and from the early years of schooling to the tertiary level, is the most coherent approach to leveraging education about the pressing issue. The East African community should incorporate quality learning and environmental and climate change education to promote relevant and multidisciplinary education that fosters critical thinking and problem-solving. Foss and Ko [57] maintain that all stakeholders' involvement encourages active and participatory learning to deliver the knowledge and skills relevant to the different local contexts. For instance, agricultural communities who are well-informed on Indigenous knowledge could be supported to enhance their adaptation practices rather than being introduced entirely to difficult scientific concepts of climate change which they might not understand [54]. Similarly, strategies should also be adopted outside school settings, slowly building students' critical thinking about climate impacts and supporting them to take climate actions through projects within their communities. For instance, parks and museums are suitable venues for child and adult education using craft activities that focus on positive change. Public meetings, such as the 'Barraza' (elders' gathering) that are common across Africa, could also be utilized as appropriate platforms to foster discussion of climate change and resilience issues among the general public. Other professionals who shape the environment, such as urban planners, civil engineers and environmental researchers, should endeavor to be more active in climate education.
Conclusions
The literature review undertaken aimed to assess the challenges and opportunities for climate change education in East Africa. It achieved this by first highlighting the widespread impacts of the crisis and the global responses to this increasingly recognized phenomenon. The research also accentuated the importance of climate change education in managing its effects, especially regarding empowering vulnerable populations with the appropriate skills and knowledge to effectively adapt to extreme climates. Its originality lies in exploring whether and how climate change education is happening in East Africa while identifying existing opportunities to embed it within current country-specific climate response plans. In addition to merely integrating climate change into the educational curriculum, as most East African governments currently do, the study identified the need to train teachers with accurate climate change information, to make such knowledge meaningful to learners, and to incorporate Indigenous knowledge in the learning process.
The review process excluded some potential sources on climate change education, as we only captured literature published in English and indexed in the Google Scholar and Web of Science databases, using specific search terms. We acknowledge that relevant research or outputs that did not use the exact search terms we employed would not have been captured in this review. While examining country-specific approaches to climate change education, we found the most inspiration in the general literature with a broader view of the barriers and opportunities of climate change education, which was aggregated with national climate change policies that mentioned the integration of the subject into the school curriculum. Hence, understanding the generality of these initial findings will be necessary for future endeavors regarding the topic. Further studies that zoom into each of these key identified points are therefore needed to gain a deeper understanding and establish more effective recommendations that target specific challenges. Despite these limitations, our study is based on diverse enough literature to offer meaningful insights and implications on the topic.
In conclusion, given the lack of well-defined leadership of climate change education in East Africa, various partnerships between government ministries, research groups in universities and institutions, local schools, museums, and non-profits could create better opportunities for place-based active learning on climate change. These approaches may also be necessary and useful outside East Africa, even within regions with clear climate education policies, as the misconceptions about the issue, the exclusion of local knowledge, teachers' lack of adequate training, and unclear definitions of their roles in climate change persist.
Although there may be practices on other continents that could be emulated in East Africa, we draw on Leal Filho and Hemstock [58] to argue that the best approaches elsewhere may not be suitable in a different regional context, as decisions about the most adequate practices should take into consideration local realities, available expertise, and resources.
Factors Affecting Effectiveness of E-Procurement in Business Organizations, a Survey of Safaricom Dealers in Nakuru CBD-Kenya
The study focused on establishing factors affecting the effectiveness of e-procurement in business organizations. It was carried out among Safaricom dealers in Nakuru Central Business District (CBD). The specific objectives that guided the study were: to find out how e-security affects the effectiveness of e-procurement among Safaricom dealers in Nakuru CBD; to determine the extent to which the quality of software systems affects the effectiveness of e-procurement among Safaricom dealers in Nakuru CBD; to find out how staff training affects the effectiveness of e-procurement among Safaricom dealers in Nakuru CBD; and to determine how subcontracting affects the effectiveness of e-procurement among Safaricom dealers in Nakuru CBD. The findings of the study will be of great importance to Safaricom dealers as well as other business organizations, as they will get to know the factors affecting the effectiveness of e-procurement in their organizations and thereby come up with measures to enhance the effectiveness of the e-procurement process as a whole. A survey research design was adopted for the study, where 31 procurement personnel working in Safaricom dealer shops in Nakuru CBD formed the target population. A census technique was used where all 31 respondents were included in the study. A questionnaire containing closed-ended questions was used as the data collection instrument. Data analysis was done using descriptive statistics such as frequencies and percentages, while presentation of the results was in the form of tables, charts and graphs, which facilitated clear interpretation of results and drawing of conclusions. The findings revealed that e-security affected the effectiveness of e-procurement among Safaricom dealers in Nakuru CBD to a large extent. The quality of software that was being used by Safaricom dealers was found to affect the effectiveness of e-procurement to a large extent. Training sessions were offered to employees of Safaricom dealers often, and training was found to affect the effectiveness of e-procurement to a large extent. Also established was that subcontracting affected the effectiveness of e-procurement to a very large extent. Based on these findings, the researcher recommended that regular software updates should be done and firewalls installed as e-security measures to forbid outside threats such as hackers and viruses from gaining access to the system, thereby safeguarding data being transacted. The researcher suggested that further research should be conducted to determine the challenges faced in the implementation of information technology in the procurement process.
Background of the Study
E-procurement is the business-to-business, business-to-consumer or business-to-government purchase and sale of supplies, work and services through the internet as well as other information and networking systems such as electronic data interchange (EDI). The study aimed to find out the factors affecting the effectiveness of using computers or electronic data interchange in business organizations as far as procurement procedures are concerned (Lysons and Farrington, 2006).
The use of inter-organizational systems such as electronic data interchange and internet-based extranets enable new types of collaborative alliances between separate trading partners (Philips, 2003). Most organizations in Kenya today are adopting e-procurement as a way of operating their activities and getting feedback by use of emails, extranets and other internet technologies used to support every business (Mentzer, 2006). E-procurement enables users within organizations to order directly from an electronic catalogue without interference from a purchasing department. Orders are acknowledged automatically by the supplier and there is no need to contact the purchasing department with questions such as when the order will be delivered and also what terms and conditions apply since the user can verify the order status online when desired (O'Brien, 2004).
Many suppliers nowadays offer detailed tracking and tracing facilities which enable their customers to monitor orders, follow-ups and delivery in real time. Besides this, procurement systems enable electronic invoicing and invoice matching. As a result, the traditional purchasing cycle is reduced and simplified considerably.
Safaricom Limited is one of the blue-chip companies in Kenya. It is the leading mobile network operator and was formed in 1997 as a fully owned subsidiary of Telkom Kenya. In May 2000, the Vodafone Group of the United Kingdom acquired a 40% stake in and management of the company. It has directly employed over 1500 staff, mainly stationed in Nairobi and other major towns in Kenya such as Mombasa, Kisumu, Nakuru, Eldoret, Kakamega, and Nyeri, among others. Currently, it has nationwide dealers to ensure that customers across the country have access to its products and services.
Nakuru town has a significant number of Safaricom dealers both within the Central Business District (CBD) and its suburbs. Due to security reasons, support infrastructure and access to the market, most large operators are located in the CBD. E-procurement is on the increase in Kenya following the technological advancements that the country has witnessed, and Nakuru town currently has access to speedy internet services courtesy of various providers such as Jamii Telkom (JTL), Kenya Data Networks (KDN), Access Kenya, and Telkom Kenya, among others. This has not only increased internet connectivity but also affected its cost and usage amongst residents, and e-commerce is the new frontier in the field of business.
Statement of the Problem
In the years since the introduction of computers, the objectives of procurement have not been fully attained. Even though Safaricom Limited is among the first organizations to have introduced computers into their business activities, the procurement process still faces challenges; indeed, it is still done manually in many organizations. This is mainly because many organizations lack the funds to install software systems that are right for their business activities, as well as the knowledge and skills to operate these systems. In the long run, such organizations consider it more expensive, in that they would still be required to train their staff on how to use the software and still rely on the software provider for support. This would necessitate the creation of an IT department, which firms find easier to avoid through a manual procurement process. In some cases, firms have to train their suppliers and other business partners on how to use their system in order to enhance the smooth running of the procurement process. It is in this regard that the researcher set out to find the factors affecting the effectiveness of e-procurement in business organizations, specifically among Safaricom dealers in Nakuru CBD, since it is mandatory for them to conduct business electronically.
Objectives of the Study
The main objective of the study was to explore factors affecting the effectiveness of e-procurement in business organizations. Hence, the specific objectives of the study were:
i. To find out how e-security affects the effectiveness of e-procurement among Safaricom dealers in Nakuru CBD.
ii. To determine the extent to which the quality of software systems affects the effectiveness of e-procurement among Safaricom dealers in Nakuru CBD.
iii. To find out how staff training affects the effectiveness of e-procurement among Safaricom dealers in Nakuru CBD.
iv. To determine the extent to which subcontracting affects the effectiveness of e-procurement among Safaricom dealers in Nakuru CBD.
Theoretical Literature
E-procurement is the framework for inter-organizational collaboration. It denotes the seamless application of information and communication technology from its point of origin to its end along the entire value chain of a business process conducted electronically and designed to enable the accomplishment of a business goal.
E-procurement is, of course, only a special application of the general benefits that computerization may bring to any function in an organization. These include the ability to store and retrieve a great quantity of data, process such data rapidly with a high degree of accuracy, eliminate much routine error, and use exception techniques which save time by flagging those variations which require management action. They also include the reduction of routine clerical activity through the automatic preparation of documents such as purchase requisitions, orders, acknowledgement forms and progress letters, and the formalizing of streamlined procedures that might not otherwise be contemplated (Ibid).
In Safaricom, procurement is a complete function rich in its purchasing system, which interfaces seamlessly with the requisition system. Dealers make their requisition orders through the same platform, known as the dealer portal. Every organization has its unique login code for doing business with Safaricom. For this to happen, Safaricom relies on electronic data interchange in its operations, where data is shared between the bank, Safaricom and the dealers (Atieno, 2013). All dealers bank into one consolidated holding account but indicate their specific banking code, which uniquely identifies them. Once the banking is done, the finance department synchronizes the bank data and confirms the banking; the amount banked is credited to the dealer portal. The dealer logs in and places the order through the system to Safaricom logistics, indicating the preferred point of collection; upon approval, the dealer gets a reference number in soft copy, and the same is printed as a hard copy at the indicated point of collection. The authorized dealer staff then collect the specified stock and get a copy of the order and a delivery note for the same. However, this only happens with some products. For bulk products, the portal does not allow the dealer to indicate a point of collection but by default picks the designated courier service through which the product is dispatched (Ibid).
The use of e-procurement software may make it possible to automate these processes. Participating companies expect to be able to control parts inventories more effectively, reduce purchasing overheads and improve purchasing cycles. At the same time, ongoing purchases may qualify customers for volume discounts or special offers (Atieno, 2013).
E-security in E-procurement
E-security refers to protecting the system from viruses and attacks by intruders, and to the protection of data being transacted. Information is an asset which, like other important business assets, has value to an organization and consequently needs to be suitably protected. E-security in procurement has various benefits. It increases privacy and confidentiality as it ensures the protection of sensitive information from unauthorized disclosure. E-security safeguards the accuracy and completeness of the information/data given, thus ensuring an organization maintains its integrity. It also ensures that information and associated services are available to users when required and that no party can deny involvement in a transaction (Lysons and Farrington, 2006).
The increasing dependence on systems and technologies for business-critical processes is making organizations vulnerable. This is exacerbated by the trend of using systems and technologies that are outside an organization's IT department's immediate control. More and more systems are being hosted and managed by third parties, that is, Application Service Provision (ASP), and delivered to end users using the internet infrastructure (Ibid).
Quality of Software Systems in E-procurement
Quality can be a confusing concept, partly because people view quality in relation to different criteria based on their individual roles in the production-marketing value chain. Sunil and Peter (2007) argue that delivery of internal quality is achieved by embedding throughout the organization a fundamental approach that serves as a basis for assuring quality from design to customer; with this in mind, everybody in the organization sees that whatever he does has to satisfy the customer.
Hartman (2000) argues that quality is defined as perceived by the customer, and such concerns should dominate the provider's decision making. Consumers may focus on the specification quality of a product or service, or how it compares to competitors in the marketplace. Producers might measure conformance quality, or the degree to which the product/service was produced correctly. Support personnel may measure quality in the degree to which a product is reliable, maintainable or sustainable. Many people have defined quality from different perspectives.
Training Organizational Employees
People are the most important and expensive part of an organization. An organization's time management for task completion and overall system quality are significantly influenced by the effectiveness of its employees, who are often in short supply. Armstrong (2006) noted that organizations should not shift their total focus from employees to customers, however tempting this may be. As much as customer retention and loyalty are important to an organization, equal attention should be given to employees, bearing in mind that it is employees who deliver services to customers and thus it is they who retain customers and build loyalty in them.
Training is the process of enhancing the skills, capability and knowledge of employees so that they give the best results in doing a particular work or task. The training process moulds the thinking of employees and leads to quality performance by employees. It is a continuous and never-ending process. Employees become more efficient and productive if trained well (Ibid).
Subcontracting in E-procurement
Subcontracting is the practice of assigning part of the obligations and tasks under a contract to another party known as a subcontractor. Subcontracting is very useful in situations where the range of required capabilities for a project is too diverse to be possessed by a single general contractor. In such cases, subcontracting parts of the project that do not form part of the general contractor's core competences may assist in keeping costs under control and mitigate overall project risk (Bailey et al., 1998).
According to Saunders (1997), subcontracting has developed as a reaction to the over-diversification that took place in the 1970s and early 1980s. This has led many organizations to refocus on their core competences. Many organizations have little or no expertise in carrying out many services, or even knowledge of market-rate costs. Organizations should subcontract these activities to specialists and concentrate their energies on core activities. Subcontracting is the strategic use of outside resources to perform activities initially handled by internal staff. Subcontracting is essentially the transfer of production of goods and services that had been performed internally to an external party.
Specialized subcontractors are better positioned to secure and maintain a grip at the leading edge of technological change and innovation. The subcontractor is legally responsible to the main contractor. According to Lysons and Farrington (2006), companies can also make customer services more effective by taking advantage of the technological innovations that some providers offer, and here there may sometimes be an advantage in offshoring. That is because some foreign subcontracting providers have offerings their domestic counterparts can't match in terms of technologies that help guide customer service by recognizing patterns in consumer behavior.
One way to mitigate the damage from subcontracting customer service is to invest the money the company saves in improving the quality of the company's products or services, or in cutting prices, rather than simply pocketing the savings as extra profit. Findings suggest that this is not happening in most cases. Among the companies studied that had subcontracted customer service, customer satisfaction scores showed that customers didn't feel they were getting any more for their money than they did before the company started subcontracting (Ibid).
In the case of Safaricom dealers, products pass through several channels from the time of release by the main contractor before reaching the consumers. In the case of airtime, after being manufactured by the main contractor, it is distributed by a subcontractor called Andy Distributors to the dealers. The dealers sell the airtime in smaller packages to retailers, who then sell to consumers. Even though the manufacturers and the distributors have specialized in their professions, dealers contact Safaricom directly in case of a problem. Anyango (2005) asserts that companies are re-engineering their supply chain management software; for example, the demands of e-procurement are pushing organizations to use their intranets and e-commerce to help them re-engineer their relationships with their suppliers, distributors and retailers to meet their e-commerce customers' imperative needs for what they want, where and when it is wanted, at the best possible cost. E-procurement has revolutionized purchasing practices and its effects have improved various businesses. The companies offering e-procurement systems have generated considerable cost savings, productivity and efficiency.
Empirical Literature
Mathane (2007), on Factors Influencing Adoption of E-procurement in the Supply Chain, states that the time taken for acquisition through e-procurement is very important. The system should reduce the time taken to acquire goods or to exchange information through the supply chain, for this makes a much more effective system of managing these chains.
Chepkonga (2010), on Factors Affecting Order Placement in Procurement Process, observed that lead time depends on a number of factors, from the time it takes to create the machinery to the speed of the delivery system. Lead time can be reduced if information technology is implemented in order placement, and also by the introduction of online shopping. Anyango (2005), on Factors Affecting Effectiveness of IT on Procurement Function, stated that the use of IT in managing the procurement function has developed rapidly over the last 10 years. Research demonstrates that IT is utilized in a variety of procurement applications, including communication with vendors, checking vendor price quotes and making purchases from vendor catalogs. Vendor negotiations have also been streamlined through the use of IT. It is being used in order processing applications. The most frequent areas of application include order placement and order status. Use of IT in order processing has resulted in increased accuracy levels and increased reliability. Serem (2005), on Effects of Computerization of Kenya Ports Authority, stated that the successful go-live of phase one of the IT strategy was a major milestone in the organization's strategic road map and its resolve to become an e-port rated amongst the top twenty ports in the world by the year 2000. With the introduction of computers, staff have witnessed and experienced drastic change in their working system. This includes less paperwork, and decision making has improved due to the availability of online and timely information; service delivery to internal and external customers, which used to take up to two days, now takes two minutes (Ibid).
Staff movement has also been reduced; that is, a letter can be edited and e-mailed without being sent by post or by messenger. Also, the internal messaging service has reduced paper flow substantially and eliminated independent connections to external internet service providers, which used to cost the authority Kshs. 140,000 per month.
Erasto (2005), on The Role of Vendor Managed Inventories in the Customer/Supplier Chain, states that the vendor managed inventory process is a combination of e-commerce, software and people. The e-commerce layer is the mechanism through which companies communicate the data. The vector markup language (VML) data can be communicated via electronic data interchange, where compatible customer/supplier software and hardware are interlinked, or via any other reliable communications method. The key feature of the e-commerce layer is that data is timely and accurate. Chebii (2006) stated that the internet has opened the door to new ways of shopping. Shopping on the internet offers convenience and time-saving benefits to shoppers as compared to the traditional way of shopping. This mode of shopping eliminates the agony of traffic jams, pickpockets and bad weather, and no transport cost is involved. Wangare (2005), on Top Performance through E-procurement, revealed that top performers conduct more than 20% of their procurement online, while they use the internet for several e-procurement applications such as communicating with vendors, checking vendor price quotes and purchasing from vendor catalogues. The internet has also enabled companies to set up early warning damage systems, provide information on warranty agreements and assist in vendor negotiations.
E-procurement functions must guard against and mitigate risks, understand the market, build good relationships with suppliers who meet needs in a timely manner and constantly monitor performance to improve service provision. This therefore raises the need for an organization to have clearly defined policies that can be understood (Ibid). Munene (2006), on Factors Affecting the Use of Electronic Banking Services, suggests that the use of computers in banks is growing at a high rate; with an inappropriate IT platform, banks' growth is held back. This translates into high transaction costs and forces the banks to open more and bigger branches to accommodate the swelling number of customers. In order to accommodate these customers, banks have introduced a number of e-banking services aimed at meeting the needs of their customers. E-banking is one of the services which has been introduced and enables worldwide access to one's account at the customer's convenience. E-banking ensures 24-hour access to one's account every day, easy payment solutions and affordable transaction fees.
Wanyama (2012), on the Contribution of E-procurement in Enhancing the Procurement Process, states that the application of IT helps to ensure continuous production and distribution of goods and services in an organization. It guarantees timely delivery, and this creates a real-time environment between buyer and supplier. Due to the inadequate computer operation skills of employees, it is very important to train them for the purpose of good management and organizational performance.
Conceptual Framework
The conceptual framework shows the relationship between variables and can be explained as follows: the independent variables are the factors affecting effectiveness of e-procurement in business organizations and include e-security, quality of software, staff training and subcontracting. When the intervening variables are taken care of, so are the factors and hence there is effective e-procurement (dependent variable). On the contrary, when the intervening variables are not dealt with properly, the end result is ineffective e-procurement.
Research Design
A survey research design was adopted for the study. A survey attempts to collect data from members of a population in order to determine the current status of the population with respect to one or more variables (Mugenda and Mugenda, 1999). A survey research design was suitable and appropriate for this study because it allowed the collection of large amounts of data from the field. By using a survey design, information related to the target population could be generalized.
Target Population
According to Mugenda and Mugenda (2003), a target population is a population to which the researcher wants to generalize the results of a study. The study targeted procurement personnel of Safaricom dealer shops in Nakuru CBD. According to the Nakuru County Government Licensing Office (2014), there are 11 Safaricom dealer shops in Nakuru CBD, and the researcher selected all of them for the study. A total of 31 procurement personnel formed the target population for the study. This population was chosen because of their influence over the procurement activities of the dealer shops. The distribution of the target population for the study is shown in the subsequent table.
Sampling Design and Procedures
A census technique was used where all 31 procurement personnel of Safaricom dealer shops in Nakuru CBD were included in the study. This was because they had the required information with respect to the objectives of the study. Census technique was used because the population was considerably small and manageable.
Data Collection Instruments and Procedures
A questionnaire was used as the data collection instrument for the study and was developed under the guidance of the study's objectives and research questions. It was found to be an appropriate instrument as it made it easier for the researcher to collect data from the respondents. Furthermore, it was viewed as the most appropriate as it permits greater depth of response from the respondents; thus, more detailed data about the views of the respondents on the subject could be obtained. The questionnaire contained closed-ended questions and was administered on a drop-and-pick-up-later basis, where the respondents were given 3 days to go through the questions at their own pace. This was done in order to ensure uniformity of answers and also to increase the response rate.
The validity of the research instrument was achieved through the expert judgment of the research supervisor. The reliability of the instrument was ascertained by conducting a pilot study at Gilgil town where 4 questionnaires were randomly distributed to procurement personnel of 2 Safaricom dealer shops (Interservices Ltd. and Fon Xpress). Since the instrument was found to yield stable and desirable results which could be relied upon, it was adopted for the study.
Data Analysis and Presentation
After the questionnaires were collected and before analysis was done, all questionnaires were adequately checked for verification. They were edited to correct any errors and coded for consistency and completeness. Data analysis was done using descriptive statistics, where frequencies and percentages were used. Presentation of the results was done in the form of tables, charts and graphs, which facilitated clear interpretation of results and assisted in drawing conclusions.
Correlation Analysis
A correlational analysis was conducted to determine the relationship between e-security, quality of software, staff training and sub-contracting and the effectiveness of e-procurement. The results were as presented in table 2. From table 2, the Pearson correlation of .048 indicates a weak positive correlation, which implies that e-security positively affects the effectiveness of e-procurement. The significance value of 0.372 (> 0.05), however, indicates that the relationship is not statistically significant. The Pearson correlation of .082 indicates a weak positive correlation, which implies that quality of software has a weak positive effect on the effectiveness of e-procurement. The significance value of 0.285 (> 0.05), however, indicates that the relationship is not statistically significant.
The Pearson correlation of .041 indicates a weak positive correlation, which implies that staff training positively affects the effectiveness of e-procurement. The significance value of 0.387 (> 0.05), however, indicates that the relationship is not statistically significant. Similarly, the Pearson correlation value of 0.048 indicates a positive relationship, which implies that subcontracting positively affects the effectiveness of e-procurement. The significance value of 0.372 (> 0.05) indicates that the relationship is not statistically significant.
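For illustration, correlations of this kind can be computed directly from coded questionnaire data. The sketch below is hypothetical: the data file and column names are placeholders we have introduced, not the researcher's actual dataset or code.

```python
# Illustrative sketch: Pearson correlations between each factor and
# e-procurement effectiveness. The CSV file and column names are
# hypothetical placeholders for the study's coded questionnaire data.
import pandas as pd
from scipy import stats

df = pd.read_csv("questionnaire_responses.csv")  # hypothetical data file

factors = ["e_security", "software_quality", "staff_training", "subcontracting"]
for factor in factors:
    r, p = stats.pearsonr(df[factor], df["eprocurement_effectiveness"])
    # r is the Pearson correlation coefficient; p > 0.05 indicates the
    # relationship is not statistically significant at the 5% level
    print(f"{factor}: r = {r:.3f}, p = {p:.3f}")
```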
Regression Analysis
To determine the overall effect of e-security, quality of software, staff training and sub-contracting on the effectiveness of e-procurement, a multiple regression analysis was conducted. The results were as presented in table 3. From table 3, the R square reveals that e-security, quality of software, staff training and sub-contracting collectively affect the effectiveness of e-procurement up to 9%. This is the collective effect of the four variables on the effectiveness of e-procurement. The regression model took the form Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + ε, where: Y - Effectiveness of e-procurement; X1 - E-security; X2 - Quality of software; X3 - Staff training; X4 - Subcontracting.
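A minimal sketch of how such a regression could be estimated is given below, assuming the ordinary least squares model stated above; the file and variable names are again hypothetical placeholders rather than the study's actual code.

```python
# Illustrative sketch: multiple regression of e-procurement effectiveness
# on the four factors (Y = b0 + b1*X1 + b2*X2 + b3*X3 + b4*X4 + e).
# Column names and the data file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("questionnaire_responses.csv")  # hypothetical data file

model = smf.ols(
    "eprocurement_effectiveness ~ e_security + software_quality"
    " + staff_training + subcontracting",
    data=df,
).fit()

print(model.rsquared)   # R-squared: the collective effect of the four factors
print(model.summary())  # coefficients, p-values and confidence intervals
```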
Conclusions and Recommendations
From the findings of the study, it can be concluded that e-security affects the effectiveness of e-procurement among Safaricom dealers in Nakuru CBD to a large extent, making it a necessity for dealers wishing to transact effectively. Lack of e-security was found to hinder the effectiveness of e-procurement.
The quality of software used by Safaricom dealers was found to affect the effectiveness of e-procurement to a large extent. The use of high-quality software was found to reduce errors to a minimum, thereby enhancing the effectiveness of e-procurement.
Safaricom dealers offered training sessions for their workforce on a regular basis. Staff training was found to affect the effectiveness of e-procurement to a large extent whereby it helped in improving the understanding and coordination level of the employees. It also helped in reducing errors and increasing the number of transactions thus enhancing the effectiveness of e-procurement.
Subcontracting affected the effectiveness of e-procurement to a very large extent and Safaricom dealers were basically content with the services offered by subcontractors. Through subcontracting, there was reduction of time taken in service delivery thereby enhancing the effectiveness of e-procurement.
Since e-security was found to affect the effectiveness of e-procurement among Safaricom dealers, it is important that regular software updates are done and firewalls installed to forbid outside threats such as hackers and viruses from gaining access to the system and thereby safeguarding data being transacted. This should go a long way in creating a sense of confidence amongst users including suppliers who will be assured of security when transacting online and thereby enhance the effectiveness of e-procurement.
Safaricom dealers should invest in high quality software so as to reap the best benefits and enhance the effectiveness of e-procurement. They should ensure that their software systems are updated on a regular basis to avoid them becoming obsolete. In order to realize optimum performance and effectiveness of e-procurement among Safaricom dealers, more regular training sessions should be arranged for employees so as to keep them abreast with the changing trends or updates of the system. This will give them a better understanding of e-procurement and thereby enhance their efficiency levels. Subcontracting of services should be highly encouraged amongst Safaricom dealers in Nakuru CBD since professionals were highly involved and were therefore more dedicated towards achieving certain objectives. By subcontracting services, the dealers were assured of quality service delivery and therefore high effectiveness levels in e-procurement.
Further research should be conducted to determine the challenges faced in the implementation of information technology in the procurement process. Since the current study was confined to Safaricom dealers in Nakuru CBD, the researcher suggests that similar research should be carried out in a wider geographical region and with a larger sample size to determine whether similar results can be achieved, for more prudent generalization of the findings.
A Proposed Alternative Low Energy Quantum Field Theory of Gravity Based on a Bose-Einstein Condensate Effect
An alternative quantum field theory for gravity is proposed for low energies based on an attractive effect between contaminants in a Bose-Einstein Condensate rather than on particle exchange. In the "contaminant in condensate effect," contaminants cause a potential in an otherwise uniform condensate, forcing the condensate between two contaminants to a higher energy state. The energy of the system decreases as the contaminants come closer together, causing an attractive force between contaminants. It is proposed that mass-energy may have a similar effect on Einstein's space-time field, and gravity is quantized by the same method by which the contaminant in condensate effect is quantized. The resulting theory is finite and, if a physical condensate is assumed to underlie the system, predictive. However, the proposed theory has several flaws at high energies and is thus limited to low energies. Falsifiable predictions are given for the case that the Higgs condensate is assumed to be the condensate underlying gravity.
Within Bose-Einstein Condensates [1], the authors predict in a separate paper [2] the existence of a new effect which causes an attractive force between two contaminants, the "contaminant in condensate" (CIC) effect. It is proposed that contaminants act as a potential within the condensate. This causes the condensate in between two contaminants to jump to a higher energy state than if no contaminants existed. The condensate is assumed to behave as a massive scalar field governed by the Klein-Gordon equation

$$\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2}-\nabla^2+\frac{m^2c^2}{\hbar^2}\right)\phi=0,$$

with induced standing waves between contaminants (separated by a distance $a$) governing the condensate superstate,

$$\phi_n\propto\sin\!\left(\frac{n\pi x}{a}\right)e^{-i\omega_n t},\qquad\omega_n=c\sqrt{\left(\frac{n\pi}{a}\right)^2+\left(\frac{mc}{\hbar}\right)^2}.$$

The expectation value of energy associated with the superstate over all energy levels is then determined, with $\mu\equiv mca/\hbar$, in the case $\mu\ll 1$; for a massless field this reduces to the one-dimensional Casimir-type result $\langle E\rangle=-\pi\hbar c/24a$. These results were derived by the same method by which the Casimir effect is derived [3].
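As a rough numerical illustration of the mode structure reconstructed above, the sketch below compares massive and massless standing-wave mode energies for a small µ. The explicit dispersion relation used is our reading of the text's description rather than a formula quoted from [2], and the unit choices are arbitrary.

```python
# Illustrative sketch of the assumed standing-wave mode energies
# E_n = sqrt((n*pi/a)**2 + m**2) between two contaminants separated by
# a distance a, in units where hbar = c = 1. Numbers are illustrative only.
import numpy as np

a = 1.0          # contaminant separation
mu = 0.01        # mu = m*c*a/hbar << 1 (dimensionless mass parameter)
m = mu / a       # field mass in hbar = c = 1 units

n = np.arange(1, 6)
massless = n * np.pi / a                          # E_n for m = 0
massive = np.sqrt((n * np.pi / a) ** 2 + m ** 2)  # E_n for m > 0

# For mu << 1 the relative shift of mode n is about (mu/(n*pi))**2 / 2,
# so the massive condensate modes are nearly massless.
for k, (e0, e1) in enumerate(zip(massless, massive), start=1):
    print(f"n={k}: massless={e0:.6f}, massive={e1:.6f}, shift={(e1 - e0) / e0:.2e}")
```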
However, in a physical Bose-Einstein condensate, energy levels are so low that, as argued in [2], induced superstates are likely always in the lowest energy state available. In order to make a more accurate model of the force associated with the CIC effect, one must therefore find the energy associated with the creation of a superstate and its change to a different size. To simplify both the scattering calculations and the creation of an S-matrix to describe the CIC effect, the approach taken to determining the energy of an induced superstate is to associate a scalar particle propagator with the condensate superstate,

$$D(k)=\frac{i}{k^2-m^2+i\epsilon}.$$

Again, as all energy states are not integrated over as in the Casimir effect, it is safe to manipulate one state at a time in calculations.
The superstate "propagates" in the space of distances rather than physical space however. That is, a superstate is said to "propagate" from one distance to another, as describing a condensate superstate by a single point would incompletely describe its position. Taking into account this philosophical point, the standard machinery of QFT is used. A force between two particles is then produced by the creation of a superstate and its movement. For a massless field, this results in a force: as usual [4] [5].
Note that two particles can occupy the same point in distance-space, preventing superstate interaction, and that the superstate does not need to interact with any particles to create a force. As both occur in the CIC effect, the model thus fulfills two physical requirements of a QFT of the CIC effect.
Also it should be noted that with no interactions among particles allowed, the Feynman rules for our theory are trivial.
A finite quantum field theory has thus been defined, essentially by fiat. All interactions which could cause a divergence have been eliminated as only creation, propagation, and annihilation are allowed. The model can further be extended to a spin-2, massless tensor field to find a finite, though not predictive, quantum theory of gravity, as will be demonstrated below.
In condensates, though, only energy outside of the condensate will serve as a potential. Thus, if a physical condensate is used as the source of gravity in our quantum field theory, higher-order self-interaction terms can be ignored as unphysical. A predictive theory of gravity can thus be created. (Note that if the Higgs condensate is assumed to underlie gravity, as will be the case in section (3), this model would explain why the Higgs condensate has no apparent "weight" and its energy density is not observed, a problem noted in [6].) In order to preserve relativity, all particles interacting through a condensate must be separated by either a time-like or a light-like distance. Also, the energy for the creation of a superstate of many particles comes from each individual particle, lowering the temperature of the system as a whole.
The resulting theory starts with the graviton propagator in the harmonic gauge as usual [4], but redefines it in distance space so it can apply to a particle superstate rather than a particle. This gives

$$D_{\mu\nu,\rho\sigma}(k)=\frac{i}{2k^2}\left(\eta_{\mu\rho}\eta_{\nu\sigma}+\eta_{\mu\sigma}\eta_{\nu\rho}-\eta_{\mu\nu}\eta_{\rho\sigma}\right),$$

where $\eta$ is the Minkowski metric and $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$, where $h_{\mu\nu}$ are deviations from the Minkowski metric. This couples to the stress-energy tensor $T^{\mu\nu}$, defined according to the variation of the matter action $S_M$ by

$$T^{\mu\nu}=\frac{2}{\sqrt{-g}}\frac{\delta S_M}{\delta g_{\mu\nu}},$$

and gives the scattering amplitude

$$\mathcal{M}=\frac{G}{2k^2}\left(2\,T^{(1)}_{\mu\nu}T^{(2)\mu\nu}-T^{(1)}T^{(2)}\right).$$

Between non-relativistic matter, this becomes $\frac{G}{2k^2}T^{00}_{(1)}T^{00}_{(2)}$. As usual, the Fourier transform gives the interaction potential, where a change is made from the graviton-exchange model and one integrates over distances rather than positions. This reduces to the Newtonian potential $\frac{GM_1M_2}{r}$. This is an identical result to a graviton-exchange model [4], but interactions which cause divergences are not predicted.
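The reduction from the 1/k² amplitude to a 1/r potential is the standard Fourier-transform step, and the radial integral behind it can be checked symbolically. The sketch below is our illustration of that step, not code from the paper.

```python
# Illustrative check of the Fourier transform behind the Newtonian limit:
# Integral d^3k/(2*pi)^3 exp(i k.r)/k^2 = 1/(4*pi*r).
# After the angular integrals, this reduces to
# (1/(2*pi^2*r)) * Integral_0^inf sin(u)/u du, with the Dirichlet
# integral equal to pi/2.
import sympy as sp

u, r = sp.symbols("u r", positive=True)

radial = sp.integrate(sp.sin(u) / u, (u, 0, sp.oo))  # Dirichlet integral
print(radial)  # pi/2

coefficient = sp.simplify(radial / (2 * sp.pi**2 * r))
print(coefficient)  # 1/(4*pi*r), the familiar Newtonian 1/r falloff
```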
It should be noted that the above model bears resemblance to Sakharov's "Induced Gravity" model [7], as both speculate gravity to arise from underlying quantum fluctuations rather than as a fundamental force. However, the mechanism by which this is thought to occur is different in the model above. Vacuum energy is not presumed as a basis for gravity. Rather, superstates of a physical condensate mediate the gravitational interaction.
Physical Predictions
In order for a theory of gravity to exist which works by the massless mechanism in section (2), there must exist a field associated with a spin-2, massless tensor particle in which energy forms a potential (perhaps associated with Einstein's spacetime field). A condensate of these particles would then be sufficient to cause a gravity-like interaction. The theory thus requires, and therefore predicts, the existence of some form of massless condensate in which energy forms a potential.
The theory would also predict that there is no self-interaction correction to the strength of the gravitational interaction. However, this prediction could only be tested at energies currently out of reach.
Alternatively, a condensate associated with gravity consisting of massive particles, such as the Higgs condensate, would produce testable effects. Note that there is nothing preventing a condensate of massive particles from producing an effect such as that described above which could be associated with gravity. This is because the information about a superstate would still travel at the speed of light. There are two chief effects predicted in this case, which are described below.
It can be heuristically argued that the CIC effect in a condensate composed of a massive field should behave no differently than in a condensate composed of a massless field. This is because of the argument that information about the condensate is massless, and thus the CIC effect would still behave as though it were occurring in a massless medium. The results below disregard this argument and strictly follow the mathematics of the proposal.
A Universal Repulsive Force and its role in Early Universe Cosmology
As described in section (1), the energy caused by two contaminants in a massive, scalar condensate is obtained by summing over all excitation modes of a potential superstate. This results in a force that includes arbitrary constants b_n. In the early universe, uniform extreme high-energy conditions could potentially cause induced superstates to obtain higher energies. It is thus proposed that the repulsive component of equation (14) may play a role in inflationary cosmology. If the Higgs condensate induces a gravity-like effect through the CIC mechanism, then a corresponding inflationary potential is predicted to have occurred after symmetry breaking, with associated "slow roll" parameters [8] and a predicted number of e-foldings [9].
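The specific potential and slow-roll expressions are not reproduced here, so the following sketch is only illustrative: it assumes a generic logarithmic potential V(φ) = V0 ln(φ/μ) (an assumption, not the paper's result) and shows how the standard slow-roll parameters and number of e-foldings referenced above would be computed for any candidate V(φ).

# Illustrative slow-roll bookkeeping for an assumed logarithmic potential.
# V(phi), phi_end and phi_star are placeholders, not values from the paper.
import sympy as sp

phi, V0, mu, Mpl, phi_end, phi_star = sp.symbols(
    'phi V_0 mu M_pl phi_end phi_star', positive=True)

V = V0 * sp.log(phi / mu)                                      # assumed potential

epsilon = sp.simplify(Mpl**2 / 2 * (sp.diff(V, phi) / V)**2)   # first slow-roll parameter
eta = sp.simplify(Mpl**2 * sp.diff(V, phi, 2) / V)             # second slow-roll parameter
N_efolds = sp.integrate(V / (Mpl**2 * sp.diff(V, phi)),
                        (phi, phi_end, phi_star))              # number of e-foldings

print(epsilon)
print(eta)
print(sp.simplify(N_efolds))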
Universal Repulsive Force in the present day?
As there is no reason for this potential to disappear once the inflationary period (when the potential energy of the field predominates over its kinetic energy) is over, there should presently be a universal repulsive force between particles of order O(ln a). However, if the assumption that a condensate will always be in its lowest available energy state is used, which is perhaps more accurate, then this effect is not predicted. In fact, the energy of massive condensate superstate creation and movement is predicted by the path integral method to be −e^{−ma}/(4πa), which is clearly a physically untenable potential. It is still proposed, however, that this potential force, with an appropriately small coupling constant, could be a physical justification for the apparent cosmological constant [10]. However, for the proposed repulsive force to be produced by this mechanism, the perhaps unphysical assumption that a permeating condensate exists in arbitrarily high energy states must be made. Also, unfortunately, we have found no satisfactory method to incorporate this effect into Einstein's equation as of the time of writing. This is a major flaw in the theory of a massive condensate inducing gravity, and it may well be that there is no method of successfully incorporating it into Einstein's equation.
This is presently the subject of ongoing work. It does, though, seem promising that some form of potentially testable prediction can be made for certain variations of a gravity-as-CIC-effect theory.
Conclusion
We have attempted to show, in as brief and straightforward a manner as possible, that if there exists a field (such as a spin-2 tensor field or the Higgs field) associated with a particle that forms a condensate which permeates space and in which mass-energy forms a potential, then a finite, predictive quantum field theory of gravity can be developed by assuming gravity to be a CIC effect in the condensate.
A CIC effect in the Higgs condensate could produce a finite, predictive quantum field theory of gravity with falsifiable predictions, primarily a universal repulsive force of order O(ln a). However, there are serious problems with incorporating the results of the predictions with Einstein's equations.
Unfortunately, there are inherent problems with the CIC approach at high energies, which is why it is only proposed as a low-energy theory. First, since predicted effects at very high, early-universe energies cannot be incorporated into Einstein's equations at this time, it does not seem to be reducible to Einstein's theory. Second, it is not background independent.
Dense Gas Tracers and Star Formation Laws in Active Galaxies: APEX Survey of HCN J=4-3, HCO+ J=4-3, and CS J=7-6
We report HCN J=4-3, HCO+ J=4-3, and CS J=7-6 observations in 20 nearby star-forming galaxies with the Atacama Pathfinder EXperiment 12-m telescope. Combined with 4 HCN, 3 HCO+, and 4 CS detections in the literature, we probe the empirical link between the luminosity of molecular gas (L_gas) and that of infrared emission (L_IR), up to the highest gas densities (10^6 - 10^8 cm^-3) that have been probed so far. For nearby galaxies with large radii, we measure the IR luminosity within the submm beam-size (14"-18") to match the molecular emission. We find linear slopes for L_CS76-L_IR and L_HCN43-L_IR, and a slightly super-linear slope for L_HCO+43-L_IR. The correlation of L_CS76-L_IR even extends over eight orders of luminosity magnitude down to Galactic dense cores, with a fit of log(L_IR) = 1.00(±0.01) × log(L_CS76) + 4.03(±0.04). Such linear correlations appear to hold for all densities > 10^4 cm^-3, and indicate that the star formation rate is not related to the free-fall time scale for dense molecular gas.
INTRODUCTION
The star formation process constantly turns gas into stars. The Kennicutt-Schmidt (K-S) law (Kennicutt 1998) globally correlates the surface densities of star formation rate (Σ_SFR, traced by Hα) with total gas mass (Σ_gas, traced by CO and H I), with a slope α ∼ 1.4 (Σ_SFR ∝ Σ_gas^α). However, quite a range of deviations from the K-S law has been observed, and no unique slope was found in the L_CO-L_IR correlations either (e.g., Liu & Gao 2012; Gao & Solomon 2004a).
Recent studies on star formation indicate that stars, especially the massive stars, are predominantly formed in the dense cores of giant molecular clouds (e.g., Evans 2008). Dense gas directly represents the molecular content involved in forming stars (e.g., Lada et al. 2012), traced by the rotational transitions of high dipole moment molecules (e.g., HCN and HCO+), because of their high critical densities (n_crit)¹. Gao & Solomon (2004a,b) find a tight linear correlation between the luminosities of IR emission (L_IR, tracing the SFR) and HCN J=1→0 (L'_HCN J=1-0, tracing M_dense) in galaxies. This correlation even extends to Galactic dense cores undergoing high-mass star formation (e.g., Wu et al. 2005). Lada et al. (2012) argue that such a linear correlation is a fundamental relation between SFR and dense gas, and that molecular gas with densities above 10^4 cm^-3 should follow this linearity. Linear correlations have also been found in other dense gas tracers with similar or higher n_crit (e.g., HCO+ J=1→0, HNC).
¹ All critical densities (n_crit) in this letter are calculated with n_crit = Σ_{u>l} A_ul / Σ_{u≠l} C_ul(T_kin) at T_kin = 100 K, assuming optically thin emission.
A_ul and C_ul denote the Einstein coefficient for spontaneous emission and the collision rate, respectively. All state-to-state cross sections and rate coefficients are from the LAMDA Web site (http://home.strw.leidenuniv.nl/~moldata/) (Schöier et al. 2005).
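As a hedged illustration of this definition, the short sketch below evaluates n_crit from a list of Einstein A coefficients and collisional rate coefficients; the numerical values are placeholders with plausible orders of magnitude for a high-J line, not values taken from the LAMDA database.

# Minimal sketch of the critical-density definition used in the footnote,
# n_crit = sum_{u>l} A_ul / sum_{u!=l} C_ul(T_kin).
def critical_density(einstein_A, collision_rates):
    """einstein_A: list of A_ul [s^-1] for downward radiative transitions.
    collision_rates: list of C_ul [cm^3 s^-1] at the chosen T_kin."""
    return sum(einstein_A) / sum(collision_rates)

# Hypothetical numbers with the right orders of magnitude:
A_ul = [2.1e-3]                     # s^-1
C_ul = [4.0e-10, 3.0e-10]           # cm^3 s^-1
print(f"n_crit ~ {critical_density(A_ul, C_ul):.1e} cm^-3")   # ~ a few 10^6 cm^-3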
For gas tracers with n crit higher than HCN J=1→0, the slopes of the L ′ gas -L IR correlations are controversial. Krumholz & Thompson (2007) argue that the mean densities in different types of galaxies and n crit of the tracer change the slopes. Numerical simulations predict decreasing slopes against increasing n crit , because of sub-thermal excitation conditions (i.e., Narayanan et al. 2008;Juneau et al. 2009). Observations of HCN, HCO + , and CS (typically, J≤3→2) have been used to support these and show sublinear correlations (e.g., Baan et al. 2008;Bussmann et al. 2008;Graciá-Carpio et al. 2008), where HCN J=3→2 has a slope of α ∼ 0.8, following the prediction quite well (Bussmann et al. 2008). Up to date, only few detections of higher-J transitions of the above species have been reported (i.e., Knudsen et al. 2007;Jackson et al. 1995;Greve et al. 2009;Bayet et al. 2009;Wilson et al. 2008).
To better probe the densest molecular gas in galaxies, and to test the predictions for higher-n_crit tracers, we therefore performed a survey of CS J=7→6, HCN J=4→3, and HCO+ J=4→3 in 20 nearby actively star-forming galaxies with the Atacama Pathfinder EXperiment (APEX) 12-m telescope. With the advent of the Herschel space telescope, we are able to obtain beam-matched IR luminosities in nearby galaxies, and to compare them with data from single dish telescopes. In this letter, we summarize our findings and compare them with the results from Galactic studies.
[Table 1 lists, for each target: source name, galaxy type (from the NASA/IPAC Extragalactic Database, NED), distance (Mpc), IR luminosity (10^10 L_sun), velocity-integrated intensities (K km s^-1) and line luminosities (10^6 K km s^-1 pc^2) of CS J=7→6, HCN J=4→3, and HCO+ J=4→3, and the adopted R_SD and C_aper corrections. Literature data are from Bayet et al. (2009), Knudsen et al. (2007), Jackson et al. (1995), Greve et al. (2009), Papadopoulos (2007), and Wilson et al. (2008).]
SAMPLE, OBSERVATIONS AND DATA REDUCTION
We selected 20 galaxies from the Infrared Astronomical Satellite (IRAS) Revised Bright Galaxy Sample (Sanders et al. 2003), with S ν (100 µm) > 100 Jy, and declination < 20 • . In the analysis, we also include data from literature (Jackson et al. 1995;Knudsen et al. 2007;Papadopoulos 2007;Wilson et al. 2008;Greve et al. 2009;Bayet et al. 2009), which were mostly observed with the James Clerk Maxwell Telescope (JCMT) with a beam-size (Full Width Half Power; FWHP) of 14 ′′ . The sample encompasses galaxies with L IR from 10 10 L ⊙ to 10 12.5 L ⊙ , including nearby normal galaxies, starbursts, and Ultra Luminous Infrared Galaxies (ULIRGs; L IR ≥ 10 12 L ⊙ ), with about half containing Active Galactic Nuclei (AGNs). Table 1 lists the targets with integrated intensity, distance, and luminosities.
Our observations were performed in 2011 April and August with the Atacama Pathfinder Experiment (APEX) on the Chajnantor Plateau in Chile, in good (pwv < 0.6 mm) to median (pwv ∼ 1 mm) weather conditions. In total we spent ∼ 30 hours telescope time on this project. The First Light APEX Submillimeter Heterodyne receiver (FLASH) was employed to observe CS J=7→6, HCN J=4→3 and HCO + J=4→3 simultaneously, with dual sidebands. Typical system temperatures were T sys ∼ 180 -240 K. The Fast Fourier Transform Spectrometer back-ends led to a bandwidth of 4 GHz for each sideband, with a channel spacing of 0.2 MHz. The beam-size is ∼ 18 ′′ at 342 GHz.
All observations were performed in a wobbler switching mode. Beam throws range between 2 ′ and 4 ′ , according to the target size. Every 15 minutes we made a chopper wheel calibration. The focus was determined on Saturn or Jupiter every 3-6 hours. Pointing was checked once per hour, resulting in a typical uncertainty of 2-3 ′′ (R.M.S.). Including overhead, we spent ∼1.5 hour on each galaxy. Typical R.M.S. noise levels are 0.1 mK at 20 km s −1 velocity resolution. Although the sideband separation is better than 10 dB, there are still some CO J=3→2 images presented in the upper sideband. Fortunately these CO images do not mix with our HCN spectra.
All data were reduced with the CLASS package in GILDAS 3 . We checked the line profiles of low-J HCN or CO transitions in the literature and set the baseline ranges accordingly. Linear baselines were subtracted after inspecting each spectrum. We qualified spectra by comparing the measured noise and the theoretical noise before and after 4 times of boxcar smooth. About 5% of the spectra were discarded during the qualification.
We converted the antenna temperatures (T*_A) to main beam brightness temperatures (T_mb) using T_mb = T*_A × F_eff / B_eff. The adopted forward hemisphere efficiency F_eff and beam efficiency B_eff are 0.95 and 0.73, respectively. The flux uncertainty was estimated to be ∼15%. We derived the line luminosity (L'_gas) following Solomon et al. (1992). Fig. 1 shows the spectra of strong detections of HCO+ J=4→3, and Fig. 2 shows the spectra of weak detections of HCO+ J=4→3. NGC 7771 and NGC 2903 have only non-detections, so we do not show their spectra. Combining data from the literature, we construct samples containing 14, 17, and 17 detected galaxies for CS J=7→6, HCN J=4→3, and HCO+ J=4→3, respectively. Total IR (TIR; 8-1000 µm) luminosity (Sanders et al. 2003) is adopted as a proxy of the SFR. However, the molecular lines are obtained only in small beam-sizes (∼14″-18″), which pick up only a certain portion of the IR luminosity of a whole galaxy. This is particularly problematic for nearby galaxies and starbursts because of their large radii (up to a few arcminutes). If this effect were neglected, one would systematically overestimate the IR emission or underestimate the dense gas emission at the low luminosity end of the L'_gas-L_IR correlations, while the ULIRGs are not affected due to the small angular sizes of their gas emission.
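A minimal sketch of these two conversions follows. The T_mb conversion uses the efficiencies quoted above; the line-luminosity expression is the commonly used observer-unit form of Solomon et al. (1992) (with S Δv in Jy km s^-1, ν_obs in GHz, and D_L in Mpc), quoted from the standard literature rather than transcribed from this letter, and the input numbers are hypothetical.

# Antenna-temperature conversion and line luminosity (illustrative values only).
def t_mb(t_a_star, f_eff=0.95, b_eff=0.73):
    """Main-beam brightness temperature from the antenna temperature."""
    return t_a_star * f_eff / b_eff

def line_luminosity(s_dv_jy_kms, nu_obs_ghz, d_l_mpc, z=0.0):
    """L'_gas in K km/s pc^2 (Solomon, Downes & Radford 1992 convention)."""
    return 3.25e7 * s_dv_jy_kms * nu_obs_ghz**-2 * d_l_mpc**2 * (1.0 + z)**-3

print(t_mb(0.010))                                                   # 10 mK -> ~13 mK
print(line_luminosity(s_dv_jy_kms=20.0, nu_obs_ghz=354.5, d_l_mpc=10.0))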
We decide to measure the IR emission within the submm beam-size, rather than to adopt the IR luminosities of the entire galaxies. We download Herschel PACS 100 µm, or 70 µm (when 100 µm is not available), images from the Herschel Science Archive (HSA); they are processed to level 2.5 in the pipeline. NGC 3628 does not have Herschel data, so we adopt archival SCUBA 850 µm images instead. Using these IR data, we perform aperture photometry within the submm beam-size and over the whole galaxy. The background radii are selected by eye from the outside of the galaxies to the edge of the images. Some nearby galaxies are downloaded from the Key Insights on Nearby Galaxies (KINGFISH) project, and, similar to their photometry results (Dale et al. 2012), the impact of using ∼10% larger or smaller aperture areas is a median difference of less than 3% in the flux densities at all wavelengths. The IR luminosity within the beam-size is L_IR = L_TIR × R_SD × C_aper, where L_TIR is the IR luminosity of the entire galaxy, R_SD is the ratio of the flux density within the beam-size of the single dish (SD) telescope to that measured in the whole galaxy, and C_aper is the aperture correction factor for the beam-sizes. The final error comprises the errors of photometry (∼5%), the point source assumption (∼10%), the flux calibration (∼5%), and the error of tracing the TIR with a chromatic IR band (∼10%) (e.g., Galametz et al. 2013). In the end we adopt 20% as a conservative uncertainty for L_IR.
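As a quick illustration (the product form L_IR = L_TIR × R_SD × C_aper is inferred from the definitions above, and the input numbers are placeholders), the sketch below applies the beam-matching correction and shows that adding the quoted error terms in quadrature motivates the adopted 20% uncertainty.

# Beam-matching correction and its error budget.
import math

def beam_matched_lir(l_tir, r_sd, c_aper):
    """IR luminosity within the submm beam from the whole-galaxy TIR value."""
    return l_tir * r_sd * c_aper

print(beam_matched_lir(l_tir=5e10, r_sd=0.4, c_aper=1.2))   # hypothetical galaxy

# Quadrature sum of the quoted error terms (photometry, point-source
# assumption, flux calibration, chromatic-band proxy for the TIR):
terms = [0.05, 0.10, 0.05, 0.10]
print(math.sqrt(sum(t**2 for t in terms)))   # ~0.16, rounded up to the adopted 20%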
3.1. The L'_gas-L_IR Correlations
Figure 3 presents the L'_gas-L_IR correlations. All detections are included in the fitting except for NGC 4039 and NGC 3627, because the regions covered by their APEX beams are not active in star formation, as indicated by their IR images. In the linear regression we assume Gaussian independent variables, and account for the errors in both L_IR and L'_gas. Upper limits are not adopted in the fitting. The L'_gas-L_IR correlations extend from the nuclear regions of nearby normal galaxies to ULIRGs, covering an L_IR range of ∼2.5 decades. We adopt the publicly available IDL routine MPFIT (Markwardt 2009) for the linear least-squares fits in Figure 3, and obtain error estimates by fitting the distributions with Gaussian profiles. These fittings give slopes of 0.94±0.07, 1.01±0.07, and 1.12±0.9 for CS J=7→6, HCN J=4→3, and HCO+ J=4→3, respectively. These are consistent with the linear least-squares fit. Both the L'_CS J=7-6-L_IR and L'_HCN J=4-3-L_IR correlations have slopes very close to unity, while the L'_HCO+ J=4-3-L_IR correlation shows a slightly super-linear slope. The slopes of HCN and HCO+ are much higher than those predicted by Narayanan et al. (2008) and Juneau et al. (2009). Although they did not predict CS J=7→6, the high n_crit of CS J=7→6 places it far from the predicted trend too. We also fitted the correlations with our APEX data only, and obtained slopes of 0.98±0.15, 0.98±0.05, and 1.08±0.05, which are very close to the results fitted with the combined data.
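The paper performs the fits with the IDL routine MPFIT; as a hedged Python analogue (not the authors' code), the sketch below fits a straight line in log-log space while accounting for uncertainties on both axes via orthogonal distance regression. The data arrays are placeholders, not the measured luminosities.

# Straight-line fit in log-log space with errors on both axes (scipy.odr).
import numpy as np
from scipy import odr

log_lgas = np.array([6.0, 6.5, 7.0, 7.5, 8.0])       # hypothetical log L'_gas
log_lir  = np.array([10.1, 10.6, 11.0, 11.6, 12.0])  # hypothetical log L_IR
err_x = np.full_like(log_lgas, 0.08)                  # ~20% flux uncertainty in dex
err_y = np.full_like(log_lir, 0.09)

model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(log_lgas, log_lir, sx=err_x, sy=err_y)
fit = odr.ODR(data, model, beta0=[1.0, 4.0]).run()
print(fit.beta, fit.sd_beta)    # slope/intercept and their uncertainties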
Alternatively, we fitted the correlations without the beam-matching correction, and obtained slopes (N) and correlation coefficients (r) of N=0.71±0.08 and r=0.89 for CS J=7→6, N=0.77±0.05 and r=0.94 for HCN J=4→3, and N=0.94±0.10 and r=0.92 for HCO+ J=4→3, respectively. The shallower (sub-linear) slopes and worse correlations indicate that the beam-matching correction has a significant effect on the fitting. On the other hand, it should be emphasized that with the beam-matching correction we are studying the central 14″-18″ regions of the nearby galaxies (except for the maps of HCN J=4→3 and HCO+ J=4→3 in NGC 253), rather than the whole galaxies.
At T kin =100K, HCN J=4→3, HCO + J=4→3, and CS J=7→6 have n crit of 5.6 × 10 6 cm −3 , 1.3 × 10 6 cm −3 , and 2.8 × 10 6 cm −3 (see Sect 1), respectively. These tracers pick up the densest part of the molecular cores, and trace only a small amount of the total dense gas mass involved in star-formation. The linear slopes found in the high-J HCN and CS tend to support the proposed correlations for gas tracers with n crit higher than ∼ 10 4 cm −3 (Lada et al. 2012).
Compared with the neutral HCN and CS molecules, the molecular ion HCO+ is more sensitive to the average ambient electron abundance x(e), because of the protonation reaction H3+ + CO → HCO+ + H2. A high electron abundance is likely to reduce [HCO+/H2] efficiently. Considering the impacts of CR ionization (Papadopoulos 2007), turbulent diffusion (Xie et al. 1995), and AGN illumination, HCO+ is likely to be deficient when exposed to the extreme physical conditions prevailing at the high luminosity end, i.e., in ULIRGs. Such an effect could increase the slopes of the L'_HCO+-L_IR correlations in galaxies. Wu et al. (2010) measured the L'_CS J=7-6-L_IR correlation in individual Galactic dense cores with high-mass star formation. Their best fit yields a slope of ∼0.8, with a large uncertainty because of the significant scatter in the data and the limited range of infrared luminosities at their disposal. In the bottom right of Figure 3, we plot their Galactic cores and our galaxies together. Combining both samples, we find a highly linear correlation, log(L_IR) = 1.00(±0.01) × log(L'_CS J=7-6) + 4.03(±0.04), with a correlation coefficient of 0.98. This correlation is remarkably similar to that derived for HCN J=1→0 by Wu et al. (2005), though CS J=7→6 has much higher excitation requirements. The linear correlation between L'_gas and L_IR holds over an IR luminosity range of about eight orders of magnitude.
Comparison between Galactic Cores and Galaxies
The linearity of the (dense gas)-SFR correlation was interpreted as reflecting a "fundamental unit" of star formation on different physical scales, such that both the SFR and the dense gas mass are simply piled up by adding in more units (e.g., Wu et al. 2005). Recently it was found that once the studied gas content is denser than a density threshold, the SFR does not depend on the exact value of the gas density, but on the total mass of the dense gas (e.g., Lada et al. 2012). In such a context the near-linearity of the Σ_gas-Σ_SFR correlation in nearby galaxies (e.g., Bigiel et al. 2008; Schruba et al. 2012) is likely caused by a constant M_dense/M_H2,total fraction.
The gas densities probed by the three lines are beyond the highest average densities in starbursts (Krumholz & Thompson 2007), and they are also one to two orders of magnitude higher than the threshold density for star formation (n_H2 ∼ 10^4 cm^-3; e.g., Parmentier et al. 2011; Lada et al. 2012). We find that all gas with densities > 10^4 cm^-3 is linearly correlated with the SFR, which indicates that the SFR in the dense gas is not likely governed by the free-fall time scale (t_ff), because t_ff ∝ ρ^-1/2. If the Σ_gas/t_ff-Σ_SFR correlations were linear for all dense gas, the shorter t_ff of the denser gas would not keep L'_gas-L_IR linear. Indeed, the gas content traced by HCN J=1→0 has a t_ff 10 times longer than that traced by HCN J=4→3, yet both show linear correlations with the SFR. The non-linear correlations found in the K-S law are likely caused by the different fractions of dense gas to total gas in different types of galaxies, rather than by the free-fall timescales (Lada et al. 2012). Indeed, the K-S law with a slope of ∼1.4 is likely not a fundamental physical relation: the non-linear slope found for H2 + H I, which traces very tenuous gas, reflects the relative amounts of dense gas and dust directly involved in star formation versus other gas that is too diffuse and diluted to have anything to do with massive star formation.
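To make the t_ff ∝ ρ^-1/2 scaling invoked above concrete, the sketch below evaluates t_ff = sqrt(3π/(32 G ρ)) for two round-number densities roughly bracketing the gas traced by HCN J=1→0 and HCN J=4→3; the densities and mean molecular mass are illustrative choices, not fitted values.

# Free-fall time for two illustrative H2 number densities.
import math

G = 6.674e-8                 # cm^3 g^-1 s^-2
M_H2 = 2.8 * 1.67e-24        # g, mean mass per H2 molecule (incl. He), assumed

def t_ff_myr(n_h2):
    rho = n_h2 * M_H2                                   # mass density, g cm^-3
    return math.sqrt(3 * math.pi / (32 * G * rho)) / 3.15e13   # s -> Myr

for n in (1e4, 1e6):
    print(f"n = {n:.0e} cm^-3 : t_ff = {t_ff_myr(n):.2f} Myr")
# The ratio of the two is sqrt(100) = 10, as stated in the text.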
AGN Contamination to the IR Emission
About half of the galaxies in our sample host AGNs, in particular the ULIRGs. Compared to the near- and mid-IR bands, the far- and total-IR are well correlated and thus less contaminated by AGN emission (e.g., U et al. 2012; Juneau et al. 2009). Some AGNs, however, can still contribute significantly to the bolometric luminosity in ULIRGs (e.g., Veilleux et al. 2009). For the most extreme case, Mrk 231, the estimated AGN contribution is ∼70%, and the SFR indicated by the IR emission should be ∼0.5 dex lower in the L'_gas-L_IR correlations (e.g., Veilleux et al. 2009). For the remaining galaxies, LIRGs (L_IR > 10^11 L_sun) and nearby Seyferts, where star formation is expected to dominate the IR emission, the AGN contamination should be much smaller (<10%). Unfortunately, the IR emission of most AGN-host galaxies has not been decomposed, so we could not scale the IR luminosity by η_SF, where η_SF is the fraction of the IR luminosity due to star formation. Overall, we do not find significant changes in the correlations if we remove AGNs from our samples. This may be due to the small size of our sample, or to the fact that most AGN-hosting galaxies are not strongly contaminated by their AGNs.
In conclusion, we present a survey of three dense gas tracers (CS J=7→6, HCN J=4→3, and HCO + J=4→3) in 20 nearby galaxies observed with the APEX 12-m telescope. Combining with data from literature and after a beam-matching correction for nearby galaxies, we find linear L ′ gas -L IR correlations for CS J=7→6 and HCN J=4→3, and a slightly superlinear slope for L ′ HCO + J=4−3 . These results are consistent with those found in HCN J=1→0 (i.e., Gao & Solomon 2004a,b) and CS J=5→4 (i.e., Wang et al. 2011;Wu et al. 2010), but contradictory to the predictions in Bussmann et al. (2008) and Juneau et al. (2009). We also find that the linear L ′ CS J=7−6 -L IR correlation can be traced universally across eight orders of luminosity magnitude, down to Galactic cores. If the beam-matching corrections are not applied for nearby galaxies, however, the slopes are significantly sub-linear. Future ALMA surveys of more dense gas tracers as well as other transitions over large samples of galaxies are necessary to consolidate the above findings and ensure that, apart from the vagaries of HCO + chemistry, the simplest (dense gas)-SFR empirical relation is indeed true.
Modular Synthetic Tissues from 3D-Printed Building Blocks
Biology employs modular organization at every scale: molecular building blocks make up living cells, specialized cells organize within tissues, and collections of tissues constitute organs. 3D-printed networks of picoliter-sized aqueous compartments interconnected by lipid bilayers form a powerful platform for building precisely patterned synthetic tissues. However, this technology has been limited to millimeter-sized networks, with slow fabrication times and lacking flexible design. Here, the authors apply modular design to construct modular synthetic tissues by assembling a wide range of 3D-printed building blocks. They use dedicated modules for storing and releasing reagents, performing logic operations, responding to magnetic fields, and encapsulating living cells. They build centimeter-sized synthetic tissues able to transmit electrical signals through thousands of interconnected compartments. They assemble hybrid tissues composed of both synthetic modules and modules containing living cells. Lastly, by incorporating mutant protein nanopores within the building blocks, they assemble modular synthetic tissues with electrical outputs that are modulated by the integration of chemical inputs.
Modularity is also a widespread design principle in human-made technologies. [1] A modular approach enables standardization in the design, fabrication, use, and maintenance of devices. Modules are individually designed and fabricated with optimized cost and production time. Different modules are then assembled through standardized interfaces. Novel devices are generated easily and quickly by assembling modules in various architectures, without a need to redesign each part from scratch. [2] Previously, modular design has been applied to the fabrication of regenerative scaffolds and cell-laden hydrogels using magnetically actuated microrobots, [5] DNA-directed self-assembly, [6] and mechanical interlocking. [7] Modular design could also enable a leap forward in the construction of bioinspired devices such as synthetic tissues. Synthetic tissues are cell-free multicompartment systems built from the bottom-up by combining biomolecules such as lipids, proteins, and nucleotides. These cell-free systems can exhibit collective emergent properties resulting from the interaction of hundreds of cell-like units arranged in precise architectures. [8][9][10] Importantly, in contrast to most examples in the areas of bioprinting and tissue engineering, synthetic tissues are not designed to exactly replicate the functions of living tissues, but they take inspiration from living tissues to build novel cell-free devices with functions not necessarily found in nature. A robust and versatile platform to build synthetic tissues consists of networks of aqueous droplets interconnected by droplet interface bilayers (DIBs) within an oil [11,12] or aqueous [13,14] environment. Within such droplet networks, each droplet acts as a cell-like compartment in which specific chemical cargo and biochemical content can be localized. Furthermore, the lipid bilayers that interconnect adjacent printed compartments can be functionalized by membrane protein pores to control the trafficking of contents among the compartments. Cell-free bioelectronic devices such as batteries [11] and rectifiers [15] have been generated from hand-made synthetic tissues. By the 3D-printing of picoliter-sized aqueous droplets in desired architectures, sophisticated synthetic tissues able to transmit electrical signals, [8,16] undergo macroscopic folding, [8] and express proteins in response to light [17] have also been generated.
However, the production of synthetic tissues has so far been limited by the speed of fabrication, the size of the networks that can be generated, and a lack of flexibility in design. The slow manufacturing time of synthetic tissues has resulted from time-consuming droplet-by-droplet 3D-printing, limiting the networks of compartments to millimeter sizes. Further, the need to redesign and optimize fabrication of synthetic tissues for each new purpose from scratch has limited the rate at which novel applications can be generated.
Introduction
Modularity is the ability to build a system by the assembly of individual independent parts or modules. [1,2] Each module may have a distinct function, and different modules can work in harmony to produce new, complex functions. Modularity is a major factor in the evolvability and adaptability of living systems, and nature employs it at every level of organization. [3,4] For example, molecular building blocks make up specialized cells, which in turn form the tissues that comprise organs, which together form entire organisms.
Here, we demonstrate for the first time the versatile fabrication of modular synthetic tissues with novel functions by assembling a wide range of independently 3D-printed building blocks (Figure 1). Thanks to the modular approach, we can 3D-print building blocks in parallel to increase the fabrication speed and enable scale-up. We assemble independently printed building blocks to generate centimeter-sized networks of thousands of compartments interconnected by lipid bilayers. We interconnect the building blocks by forming lipid bilayers at their contact interfaces; therefore, our final assembled synthetic tissues consist of a continuous lattice of lipid-bilayer-interconnected compartments. Our modular approach allows us to assemble synthetic tissues with no restriction on the number of droplet types that can be patterned. Furthermore, building blocks generated under incompatible printing conditions, such as at different temperatures, can also be assembled into heterogeneous synthetic tissues. By this approach, we seamlessly assemble a wide variety of building blocks containing aqueous solutions, molecular cargoes, engineered membrane proteins, paramagnetic particles, hydrogels, and living cells (Figure 1d). By using a diverse library of building blocks, we generate synthetic tissues that store and release reagents, perform logic operations, respond to magnetic fields, and interact with living cells.
Connecting 3D-Printed Building Blocks
We first demonstrated assembly of 3D-printed building blocks into higher order synthetic tissues (Figure 2a). We 3D-printed two building blocks within an oil bath containing 1,2-diphytanoyl-sn-glycero-3-phosphocholine (DPhPC), as previously described [8,16] (Figure 2b, see Experimental Section). Each block consisted of 8 × 9 × 5 (w × d × h) picoliter-sized aqueous droplets (≈100 µm droplet diameter, ≈520 pL droplet volume) interfaced by phospholipid bilayers. We then manually moved the building blocks into contact by using a flat metal spatula (Figure 2b). By using fluorescent dyes with high affinity for lipid bilayers [18] (Atto550M and Atto647NM) in each of the two building blocks, we demonstrated the formation of lipid bilayers at the interface, as indicated by the co-localization of the fluorescent signals ( Figure 2c).
To demonstrate the formation of a functional bilayer interface between the two building blocks, we encapsulated the membrane pore-forming protein α-hemolysin (αHL) within the 3D-printed droplets. This pore-forming protein is known to self-assemble into lipid bilayers and allow the transfer of small molecules [19][20][21][22] and transmission of ionic currents [8,11,16] through lipid-bilayer-interconnected aqueous compartments. We then investigated the electrical properties of the assembled synthetic tissue (Note S1, Supporting Information). At an applied potential V = +50 mV, we measured steady-state ionic currents I_1 = 9.7 ± 0.1 nA and I_2 = 11.5 ± 0.2 nA flowing through the individual building blocks before assembly (Figure 2d - i,ii). After assembly, we measured a steady-state ionic current I_joined = 4.6 ± 0.1 nA flowing through the assembled synthetic tissue (Figure 2d - iii), demonstrating that αHL pores had inserted into the lipid bilayers connecting the building blocks (Table S1, Supporting Information).
[Figure 1 caption: Assembly of modular synthetic tissues from 3D-printed building blocks. a) Diagram of the droplet-on-demand 3D-printer used to generate the building blocks. b) Photograph of the 3D-printer. Scale bar, 5 mm. c) Assembly of a synthetic tissue from two 3D-printed building blocks. d) Overview of the library of modules generated in this work, comprising: conducting connectors produced by patterning droplets containing the pore-forming protein αHL, logic operators obtained by incorporating engineered αHL mutants (αHL*) able to respond to environmental signals, magnetic handles with encapsulated paramagnetic beads, reagent reservoirs containing αHL and cargo molecules, and biological modules encapsulating living cells.]
By applying Ohm's Law, we calculated the effective resistance (i.e., the total observed resistance of the circuit formed by the network of droplets interconnected by αHL-permeabilized lipid bilayers) of each building block, R_1 = V / I_1 = 5.17 ± 0.05 MΩ and R_2 = V / I_2 = 4.36 ± 0.08 MΩ, and the resistance of the assembled synthetic tissue, R_joined = V / I_joined = 10.84 ± 0.11 MΩ. The resistance of the assembled synthetic tissue R_joined was higher than the sum of the resistances of its components, R_1 + R_2 = 9.6 ± 0.13 MΩ (p < 0.001, unpaired t-test with Welch's correction). This phenomenon is similar to the electrical contact resistance found in conventional electrical systems based on solid-state conductors, which arises from a reduced true area of contact between different electrical components caused by surface imperfections or oxidation. [23] In our system, we attributed the increase in resistance to the roll-off of the top edges of the 3D-printed building blocks (Figure 2b). As we observed previously, [16] a small fraction of droplets located at the top edges of our 3D-printed building blocks roll to the lower droplet layers during printing. This leads to a roll-off of the top edges of the building blocks and consequently to a smaller cross-sectional area of contact between assembled building blocks compared to the cross-sectional area in the middle of each individual building block (Figure 2b). We further confirmed the presence of contact resistance in assembled synthetic tissues by comparing I_joined to the steady-state current measured in a 3D-printed synthetic tissue of equal size (16 × 9 × 5 droplets, w × d × h, double the size of each building block composed of 8 × 9 × 5 droplets), I_double = 7.0 ± 0.1 nA (Figure S1, Supporting Information). As expected, I_double > I_joined, indicating that the effective resistance of two assembled building blocks was higher than the resistance of a synthetic tissue of equal size obtained by direct printing. Despite this effect, our results demonstrate that a reliable and functional junction could be formed between the two assembled building blocks.
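The arithmetic behind these numbers is a direct application of Ohm's law; the short sketch below reproduces it from the quoted central current values (rounded, without their uncertainties).

# Ohm's-law bookkeeping for the two building blocks and the joined tissue.
V = 50e-3                                    # applied potential, V
I1, I2, I_joined = 9.7e-9, 11.5e-9, 4.6e-9   # steady-state currents, A

R1 = V / I1                 # ~5.2 MOhm
R2 = V / I2                 # ~4.3 MOhm
R_joined = V / I_joined     # ~10.9 MOhm

print(R1 * 1e-6, R2 * 1e-6, R_joined * 1e-6)   # effective resistances in MOhm
print((R_joined - (R1 + R2)) * 1e-6)           # excess ~1.2-1.4 MOhm: the contact resistance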
Interestingly, we also observed that conductive synthetic tissues can be cut and then re-assembled, regaining their ability to conduct ionic currents (Figure S2, Supporting Information). This observation opens the way to the design of modular systems in which modules can be assembled and re-assembled as needed.
Scaling-Up the Fabrication of Synthetic Tissues
Next, we applied the modular assembly of 3D-printed building blocks to generate centimeter-sized synthetic tissues. We 3D-printed six building blocks composed of 18 × 12 × 5 (w × d × h) picoliter-sized droplets (≈100 µm droplet diameter, ≈520 pL droplet volume) containing αHL. To speed-up the fabrication process, we printed pairs of building blocks in parallel by simultaneous ejection from two printing heads in the same printing container (Figure 3a). Clearly, even greater speeds could be achieved with more than two printing heads. After printing, we joined the six building blocks in a linear fashion (Figure 3b) to obtain a synthetic tissue of ≈1.2 cm in length ( Figure 3c) composed of more than 6000 droplets interconnected by αHL-permeabilized lipid bilayers. By applying a potential V = 100 mV across the synthetic tissue, we recorded a steady-state ionic current I 6 = 1.4 ± 0.1 nA, confirming the structural and functional integrity of the assembled synthetic tissue.
We then studied how the effective resistance R_N of a synthetic tissue composed of N identical building blocks of resistance R_1 varied with N. Based on our observations with two building blocks, we hypothesized that the total effective resistance R_N would be R_N = N R_1 + (N − 1) R_C (1), where R_C is the contact resistance (Note S1, Supporting Information). To validate this model, we measured the steady-state ionic currents {I_1, I_2, …, I_6} flowing through N = {1, 2, …, 6} assembled building blocks (at an applied potential V = +100 mV, Figure S3 and Table S2, Supporting Information), and calculated the corresponding effective resistances {R_1, R_2, …, R_6}. We estimated the contact resistance R_C from R_1 and R_2 as R_C = R_2 − 2 R_1 (2), and verified that the measured steady-state ionic currents {I_1, I_2, …, I_6} were in good agreement with the curve Î_N = V / [N R_1 + (N − 1) R_C] (3), where Î_N is the predicted current flowing through a synthetic tissue composed of N assembled building blocks of effective resistance R_1, under applied voltage V (Figure 3d, green dashed line). Using Equation (3), we can infer that by assembling 500 building blocks, we could form a synthetic tissue of 1 m in length, able to conduct an ionic current of ≈10 pA under applied voltage V = 100 mV.
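A minimal numerical sketch of this series model follows; R_1 and R_C below are placeholder values chosen only so that N = 6 roughly reproduces the measured ≈1.4 nA, not the fitted parameters from the paper.

# Contact-resistance series model: R_N = N*R_1 + (N-1)*R_C, so I_N = V / R_N.
V = 0.1            # applied potential, V
R1 = 10.0e6        # placeholder effective resistance of one block, Ohm
RC = 2.3e6         # placeholder contact resistance, Ohm

def predicted_current(n_blocks, r1=R1, rc=RC, v=V):
    """Predicted steady-state current through N assembled building blocks."""
    return v / (n_blocks * r1 + (n_blocks - 1) * rc)

for n in (1, 2, 6, 500):
    print(n, f"{predicted_current(n) * 1e9:.3f} nA")
# With these placeholders, N = 6 gives ~1.4 nA and N = 500 falls to the
# tens-of-pA level, consistent with the ~10 pA estimate for a 1 m tissue.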
Assembly of Synthetic Tissues by Tiling of Building Blocks
We generated patterned synthetic tissues by the tiling of building blocks. We first 3D-printed two types of building blocks of 14 × 8 × 5 (w × d × h) droplets, one type containing αHL and the other without αHL. We assembled eight building blocks, four of each type, according to a common basket-weave tiling pattern, in such a way that the building blocks containing αHL formed a conductive cross-shaped pathway spanning the synthetic tissue (Figure 3e). We then measured the steady-state ionic current I tiled,1 = 7.8 ± 0.1 nA flowing through the tiled synthetic tissue and confirmed its structural and functional integrity ( Figure 3g; Table S3, Supporting Information). In this way, we demonstrated the formation of patterned synthetic tissues by the tiling of unpatterned building blocks. We then generated 3D-printed building blocks containing simple patterns and tiled them to generate synthetic tissue with higher order patterns. For example, we printed building blocks of 12 × 12 × 5 droplets (w × d × h) containing four-droplet-wide L-shaped pathways that contained αHL, surrounded by droplets that did not contain αHL (Figure 3f). We then joined four patterned building blocks to obtain an assembled synthetic tissue containing a sinusoidal conductive pathway. We confirmed the correct functionality of the assembled synthetic tissue by recording a steady-state ionic current I tiled,2 = 3.2 ± 0.1 nA through the conductive pathway, while no flow of current was detected outside the pathway ( Figure S4 and Table S3, Supporting Information).
Stacking of Building Blocks by Magnetic Levitation
In order to stack synthetic tissues along the z-direction, we needed to lift building blocks from the printing surface and deposit them on top of other building blocks (Figure 4a). We generated magnetically susceptible building blocks by 3D-printing melted agarose at ≈30 °C containing paramagnetic beads. After printing, we gelled the agarose at 4 °C for 1 h, resulting in building blocks composed of lipid-bilayer-interconnected agarose droplets encapsulating paramagnetic beads (Figure 4b). We then used a neodymium magnet to lift and manipulate the magnetically susceptible building blocks within the oil bath without direct contact (Figure 4c and Video S1, Supporting Information). The use of agarose allowed us to produce an even dispersion of beads within the droplets during printing by increasing the sedimentation time of the beads within the nozzle (Figure 4b). After gelation, the agarose also trapped the beads in place, preventing them from destabilizing the lipid bilayers during magnetic manipulation.
To demonstrate the assembly of a functional synthetic tissue by stacking, we generated three building blocks. Two were not magnetically susceptible and were patterned in such a way as to form an assembled synthetic tissue with a pathway of droplets containing αHL interrupted by droplets not containing αHL (Figure 4d,e). Upon application of a voltage, no current was detected (Figure 4h, gray trace). The third, magnetically susceptible building block contained a conductive droplet pathway printed in agarose with embedded paramagnetic beads. We magnetically levitated and deposited this building block on top of the two previously assembled blocks, to connect the two ends of the interrupted pathway (Figure 4f,g). After inclusion of the magnetically susceptible building block, we measured a steady-state ionic current I_stack = 690 ± 20 pA upon application of a potential V = 10 mV, confirming the functionality of the construct. We also showed that no current flowed through the bilayers separating the droplets with magnetic beads, confirming their integrity (Figure S5 and Table S4, Supporting Information).
Diffusion of Chemicals in Synthetic and Hybrid Tissues
[Figure 3 caption fragment: d) Measured ionic currents against the number of assembled building blocks N (see Figure S3 and Table S2, Supporting Information, for raw data); the decrease in current with N, based on the contact resistance model of Equation (3), is indicated by the dashed green curve. e) Diagram of the assembly of a synthetic tissue from 8 rectangular building blocks by following a tiling pattern; the four teal building blocks contained αHL, while the gray ones did not. f) Diagram of a synthetic tissue made by assembling building blocks containing a conductive L-shaped pattern (teal, containing αHL). g) Electrical recordings of ionic currents flowing through synthetic tissues assembled as in (e) and (f); stereomicroscopy images of the corresponding synthetic tissues are shown as insets. Scale bars, 800 (top) and 500 µm (bottom).]
Next, we investigated the diffusion of chemicals through assembled synthetic tissues. We 3D-printed and assembled two building blocks, one acting as a Ca2+ reservoir module and the other as a Ca2+ sensor module (Figure 5a). The reservoir and sensor modules contained CaCl2 and a membrane-impermeant Ca2+ indicator dye (Rhod-dextran, MW 10 kDa), respectively, and both contained wild-type αHL to allow the movement of ions. After assembly, we visualized the diffusion of Ca2+ ions through the droplets of the sensor module over time. A wave of fluorescence starting from the droplets located closer to the source of Ca2+ ions was observed, which reached the droplets at the opposite end of the sensor block within 4 h (Figure 5b; Video S2, Supporting Information).
We then used our assembly strategy to generate concentration gradients of chemicals within bacterial cultures. We assembled a hybrid tissue composed of a synthetic building block, containing a fluorescent nucleic acid stain (SYTO9), with a biological building block, containing living cells. The biological building block contained RFP-tagged Escherichia coli cells printed in melted agarose as 8 × 9 × 5 (w × d × h) droplets at ≈30 °C, followed by gelling at 4 °C for 1 h. We have previously shown that 3D-printed cultures of living bacteria can be generated using this methodology. [24] After assembly, the SYTO9 released from the synthetic building block produced an increase in fluorescence of the cells in the biological building block (Figure 5d; Video S3, Supporting Information). In this way, we visualized the localized delivery of chemical signals to living cells, demonstrating the potential of our assembly strategy to study the interaction and communication between synthetic and living systems in the future.
[Figure 4 caption fragment: a) A building block containing paramagnetic beads (red) can be lifted and deposited on top of a second building block not containing beads (teal). b) Brightfield stereomicroscopy image of a 3D-printed building block with paramagnetic beads embedded in gelled agarose to protect the bilayers during magnetic levitation; scale bar, 100 µm. c) Side-view images of the levitation of a building block (orange) using a neodymium magnet; yellow and red arrows indicate the movement of the magnet and of the building block. d, e) Diagram and top-down image of an assembled synthetic tissue with a conductive droplet pathway (teal) interrupted by non-conductive droplets (gray). f, g) Diagram and top-down image of the same tissue after addition of a magnetically susceptible building block (orange) whose conductive pathway bridges the interrupted pathway. h) Electrical recordings before (gray trace) and after (black trace) addition of the magnetically susceptible building block.]
Assembly of Modular Synthetic Tissues
Lastly, we aimed to assemble synthetic tissues that carry out logic operations. Unlike conventional solid-state logic gates that operate using electrical input signals, we integrated chemical input signals to produce electrical outputs. To this end, we used the Zn2+-sensitive αHL mutant αHL-4H, obtained by mutating residues Asn123, Thr125, Gly133, and Leu135 to histidine, [20,25] to build modular synthetic tissues implementing the logic operations NOT and NOR. With these mutations, heptamers of αHL-4H allow the flow of ionic currents in the absence of Zn2+ ions. In the presence of Zn2+ ions, the pore is blocked (Figure 6a). [25] We first assembled synthetic tissues implementing the NOT logic operation, output = NOT(input), where the input represents the absence (input = 0) or presence (input = 1) of Zn2+, and the output represents the flow of ionic current through the synthetic tissue I_output (output = 0: low/no current, output = 1: high current). The input module contained wild type αHL without (input = 0) or with (input = 1) Zn2+, and the processing module contained αHL-4H (Figure 6b). Upon application of a voltage V = 10 mV, we measured a steady-state ionic current I_output = 1.7 ± 0.2 nA flowing through the assembled synthetic tissue when the input module did not contain Zn2+, while no current was detected when Zn2+ was included as the input (Figure 6c; Table S5, Supporting Information). Therefore, through this design, we successfully implemented the NOT logic operation. We next assembled a synthetic tissue implementing the NOR logic operation, output = NOT(input_1 OR input_2) (Figure 6d), where the inputs represent the absence of Zn2+ (input_1 and input_2 = 0) or the presence of Zn2+ in either or both input modules (input_1, input_2 = {0,1} or {1,0} or {1,1}), and the output represents the flow of ionic current through the synthetic tissue I_output (output = 0: low/no current, output = 1: high current). Such a synthetic tissue should allow an ionic current to flow only if neither input contains Zn2+, while no current flows if either input or both inputs contain Zn2+. To achieve this, two input modules encapsulating wild type αHL and without (input = 0) or with (input = 1) Zn2+ were connected to a rectangular processing module containing αHL-4H (Figure 6e) and incubated for 18 h. We detected a steady-state ionic current I_output = 7.7 ± 0.2 nA only when neither input module contained Zn2+. No current was detected when either input or both inputs contained Zn2+ (Figure 6f). According to our design, in the cases with mixed inputs ({0, 1} or {1, 0}), Zn2+ ions contained in one of the input modules diffused to the other input module, blocking the αHL-4H pores in the whole structure (Figure 6e). We further verified this mechanism with electrical recordings of ionic currents flowing through the synthetic tissues, measured from each of the input modules (Figure S6, Supporting Information).
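A compact truth-table sketch of the chemically gated behavior described above follows; it is purely illustrative, since the actual devices report current levels rather than Booleans.

# Boolean model of the Zn2+-gated NOT and NOR tissues.
def not_gate(zn_input: bool) -> bool:
    # Zn2+ present (True) blocks the alpha-HL-4H pores -> no current (False output).
    return not zn_input

def nor_gate(zn_in1: bool, zn_in2: bool) -> bool:
    # Zn2+ in either input module diffuses through the tissue and blocks the
    # pores, so current flows only when both inputs lack Zn2+.
    return not (zn_in1 or zn_in2)

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), '->', int(nor_gate(a, b)))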
Conclusions
We have demonstrated the assembly of 3D-printed building blocks into higher order, modular synthetic tissues. We connected independently printed building blocks and formed functional bilayers at their interfaces (Figure 2). We scaled up the production of synthetic tissues by 3D-printing building blocks in parallel (Figure 3). By these means, we assembled a synthetic tissue able to transmit an electrical signal over centimeter distances through thousands of interconnected droplets permeabilized with transmembrane protein pores. We obtained patterns in the assembled synthetic tissues both by arranging various building blocks in specific architectures, and by assembling building blocks containing printed patterns within them.
[Figure 6 caption: Modular bio-electronic devices. a) Mechanism of the Zn2+-sensitive protein pore αHL-4H: in the absence of Zn2+ the pores are open and ionic currents can flow; when Zn2+ is present the pores are blocked. b) Implementation of a NOT logic gate: the input module contains αHL without Zn2+ (teal) or with Zn2+ (red), and the processing module contains αHL-4H (yellow); current flows only in the absence of Zn2+. c) Recordings of ionic currents for the tissues in (b), with stereomicroscopy images as insets. d) Truth table for a NOR logic gate. e) Implementation of a NOR logic gate: two square input modules with or without Zn2+ are joined to a rectangular processing module containing αHL-4H; current flows only when neither input contains Zn2+, since Zn2+ from either input can diffuse through the tissue and block the pores. f) Electrical recordings for the tissues in (e), with stereomicroscopy images as insets.]
We also demonstrated the generation of magnetically susceptible building blocks by encapsulating paramagnetic beads within them, enabling contact-free manipulation and assembly in 3D (Figure 4). We visualized the diffusion of Ca2+ ions through αHL nanopores in synthetic tissues. Further, we assembled hybrid tissues by interfacing a synthetic building block with a biological building block containing living cells and demonstrated the localized delivery of chemicals to living cells (Figure 5). Lastly, we used the Zn2+-sensitive protein nanopore αHL-4H to assemble modular synthetic tissues that integrate chemical input signals and produce electrical outputs, thereby implementing the logic operations NOT and NOR (Figure 6).
Our modular synthetic tissues can be built by seamlessly combining a variety of building blocks, containing aqueous solutions, molecular cargoes, wild-type and engineered membrane proteins, paramagnetic particles, hydrogels, and living cells. We demonstrated contact-free manipulation by magnetic levitation, which could be used to automate and further scale up the modular assembly of synthetic tissues. Other contact-free manipulation techniques could also be investigated in the future, such as electrowetting [26] and acoustic tweezers. [27] With our modular approach, we have established a powerful plug-and-play platform for the fabrication of bioinspired devices that enable interactions between synthetic and living systems. For instance, by encapsulating engineered protein nanopores as well as cell-free signaling cascades, we could assemble synthetic tissues that respond to complex environmental cues. By assembling biological building blocks containing different populations of cells, we could build custom co-cultures to study multicellular interactions under controlled conditions. Building blocks containing competing strains of bacteria could be assembled in controlled architectures to study competition at the micron scale. [24] Complex in-vitro tissue and organ models could be generated by assembling building blocks containing mammalian cells. [28,29] Infection and disease models could be generated by interfacing building blocks containing bacteria and mammalian cells, or cancer cells with healthy cells. Synthetic building blocks could be used to control cell metabolism and tissue development within biological building blocks via spatially and temporally controlled release of chemicals. In the future, our versatile assembly platform may enable the study and design of communication pathways between synthetic and living systems, and the fabrication of advanced implants for diagnostic, therapeutic, and theranostic applications.
Experimental Section
Lipid-Oil Solutions: DPhPC lipids were purchased from Avanti Polar Lipids in powder form. Lipids were dissolved in chloroform (anhydrous, Sigma Aldrich) and aliquoted into isopropanol-cleaned Teflon-capped glass vials (Supelco). The chloroform was then evaporated under a slow stream of nitrogen while rotating the glass vial, to produce a lipid film at the bottom of the vial. The lipid film was then dried overnight (>16 h) under vacuum and stored under argon at −80 °C. On the day of use, an oil solution composed of undecane (Sigma Aldrich) and silicone oil AR20 (Wacker) was prepared in a glass vial and added to the lipid film at room temperature. The lipid-oil solution was sonicated in a sonicator bath (Branson) for 1 h. For all experiments, the final composition of the lipid-oil solution was 2 mM DPhPC in 35:65 v:v undecane to silicone oil.
Generation of αHL and αHL-4H Pores: Recombinant αHL was expressed in E. coli and purified as previously described. [22] In brief, αHL-D8H6 was expressed in E. coli BL21(DE3)pLysS cells (Agilent) and purified on Ni-NTA agarose, followed by size exclusion chromatography. Monomers were aliquoted and stored at −80 °C until use.
Plasmids expressing αHL-4H were generated by homologous recombination. The αHL-D8H6 plasmid was digested at the end of the αHL-D8H6 gene using HindIII and then amplified with polymerase chain reaction (PCR) primers to generate two fragments, each of which encoded two of the required mutations. The fragments overlapped with each other as well as the upstream and downstream regions of the aHL-D8H6 gene. The PCR conditions were as follows: 98 °C for 30 s, 35 cycles of 98 °C for 10 s, 55 °C for 30 s, and 72 °C (for the extension time specified below), and 72 °C for 5 min. The extension times and PCR primers were as follows: 4H-C-FWD (5′ GGTGATGATACAGGAAAAATTCACGGCCACATTG GTGCAAATGTTTCG 3′) [20] and C-CT-REV (5′ GCCGGATCCAAGCTTATCAATGATGGTGATGGTGG 3′) for amplification of the C-terminal half of the αHL gene with extension time = 30 s, CT-N-FWD (5′ CACCATCATTGATAAGCTTGGATCCGGCTGCTAACAAAG 3′) and N-4H-REV (5′ AATTTTTCCTGTATCATCACCATGAACATGACCGTTGAATCCATAAG 3′) [20] for amplification of the N-terminal half of the αHL gene with extension time = 135 s.
Homologous recombination was carried out in E. coli XL-10 Gold cells (Agilent) thawed on ice. PCR products (0.25-0.5 µL) were added into 15 µL of cells, which were then incubated on ice for 30 min. The cells were heat-shocked at 42 °C for 30 min, cooled on ice, plated onto LB-agar plates containing 100 µg mL −1 ampicillin, and grown at 37 °C for 16 h. Single colonies were then grown in LB containing 100 µg mL −1 ampicillin for 16 h. Plasmids were purified using the PureYield plasmid miniprep system (Promega) and sequenced by Sanger sequencing, using primers S-FWD (5′ CGATCCCGCGAAATTAATACGACTC 3′) and S-REV (5′ GCTCAGCGGTGGCAGCAGC 3′).
The αHL-4H protein was expressed by in-vitro transcription and translation (IVTT) using the PURExpress In Vitro Protein Synthesis kit (NEB, E6800) according to manufacturer's specifications, with addition of Murine RNAse Inhibitor (NEB, MB0314). The plasmid containing the αHL-4H gene was at a final concentration of 5 µg µL −1 in a reaction volume of 10 µL. Expression was carried out at 37 °C for 3 h, after which the PURExpress reaction mix was diluted for 3D-printing.
Culture of Bacteria and Bioink Preparation: Cultures of chromosomally labelled mRFP1 BZB1011 E. coli strains (RFP-tagged E. coli) were started directly from glycerol stocks in 4 mL LB medium and shaken at 250 rpm for 16 h overnight at 37 °C. A portion of the culture (40 µL) was added to fresh LB medium (4 mL) and cultured for 3 h at 37 °C with shaking at 250 rpm. The cells were then recovered by centrifugation at 8000 rcf for 5 min and resuspended at 10^10 cells/mL in melted 1.5% w/v ultra-low gelling temperature agarose in M9 medium (at 37 °C).
3D-Printing Building Blocks: 3D-printed building blocks were generated as previously described. [8,16] In brief, droplet networks were formed by 3D-printing picoliter-sized aqueous droplets (diameter 100 µm, volume ≈520 pL) into a lipid-oil bath contained within a glass or PMMA container on a micromanipulator stage. Ejection of the droplets was driven by the actuation of a piezo-electric transducer which generates controlled pressure pulses inside a glass nozzle filled with Milli-Q water (Millipore). The glass nozzle consisted of a pulled glass capillary with tip diameter of ≈100 µm, from which the aqueous ink was ejected. An undecane oil plug (≈5 µL) separated the aqueous ink solution from the water in the nozzle. The placement of the droplets was controlled by synchronizing the motion of the micromanipulator stage (PatchStar micromanipulator, Scientifica) with the droplet ejection, using custom software (LabView). When possible, building blocks were 3D-printed in parallel by synchronizing the ejection of two printing nozzles within the same printing container. For the experiments shown in Figure 3f,g, patterned building blocks were obtained by synchronizing the movement of the stage and the ejection of two types of droplets from two nozzles to form an L-shaped pattern. Building blocks containing agarose were 3D-printed at ≈30 °C by using an IR heater (Beurer Ltd.), as previously described. [24] After printing, the building blocks containing agarose were gelled at 4 °C for 1 h.
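As a rough illustration of how droplet placement for a patterned building block can be scripted, the sketch below generates target coordinates for a two-ink, L-shaped droplet pattern on a fixed pitch. It is a simplified stand-in for the custom LabView control software; the droplet pitch, arm length, layer count, and the rule assigning each arm to one nozzle are assumptions made for the example.

```python
from dataclasses import dataclass

PITCH_UM = 100.0  # assumed centre-to-centre droplet spacing (~ droplet diameter)

@dataclass
class Droplet:
    x_um: float
    y_um: float
    z_um: float
    nozzle: int  # 0 or 1, one nozzle per ink

def l_shaped_layer(arm: int = 6, z_um: float = 0.0):
    """Coordinates for one layer of an L-shaped pattern built from two inks.

    Ink 0 fills the vertical arm, ink 1 the horizontal arm of the 'L'.
    """
    droplets = []
    for i in range(arm):                      # vertical arm
        droplets.append(Droplet(0.0, i * PITCH_UM, z_um, nozzle=0))
    for j in range(1, arm):                   # horizontal arm
        droplets.append(Droplet(j * PITCH_UM, 0.0, z_um, nozzle=1))
    return droplets

# Two stacked layers of the same pattern:
plan = [d for z in (0.0, PITCH_UM) for d in l_shaped_layer(z_um=z)]
print(len(plan), "droplets; first:", plan[0])
```

In the real instrument the list of coordinates would be consumed by the stage controller, which synchronizes stage motion with piezo-driven ejection from the nozzle assigned to each droplet.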
Assembly of Synthetic Tissues: With the exception of the experiments shown in Figure 4, the 3D-printed building blocks were manipulated and assembled by hand using a flat metal spatula. To connect two building blocks, one of the building blocks was put in contact with the other using the spatula, and a gentle pressure was then applied for ≈20-30 s to ensure the formation of lipid bilayers at the contact interface. For the experiments shown in Figure 4, magnetically susceptible building blocks were levitated within the lipid-oil bath using a neodymium magnet, as shown in Video S1, Supporting Information.
Electrical Recordings: Electrical recordings on the assembled synthetic tissues were performed as previously described. [16] In brief, the recordings were performed inside a Faraday cage (Mechanical Workshop, University of Oxford) using a patch-clamp amplifier (Axopatch 200B, Axon Instruments). The electrodes were silver wires (100 µm diameter), which had been incubated in sodium hypochlorite solution (Sigma Aldrich) for 15 min and subsequently coated with a thin layer of 1.5% w/v ultra-low gelling temperature agarose in the same buffer solution contained within the assembled synthetic tissues. The electrodes were manipulated and connected to the synthetic tissues using two micromanipulators (Narishige, NMN-1). The electrical traces were recorded using pClamp 10 (Molecular Devices) software and were analyzed using MATLAB.
Imaging of Synthetic Tissues: The assembled synthetic tissues were imaged during electrical recordings using an AmScope MU1000A digital camera fitted on the ocular of a Nikon SMZ645 stereomicroscope. A warm white LED was used as a light source (Thorlabs, MWWHLP1 LED through COP1-A collimator and powered by LEDD1B driver). Confocal images were acquired using a Leica SP5 laser scanning confocal microscope. Epi-fluorescence images were acquired using a Leica DMi8 inverted epi-fluorescence microscope.
Statistical Analysis: All shown electrical traces were acquired at a sampling frequency of 10 kHz, filtered at 2 kHz with 0.5X gain. Throughout the text, the current levels were presented as mean ± SD, calculated over n = 100 000 (Figures 2-4) or n = 50 000 ( Figure 5) samples. The analysis of the current traces was performed in MATLAB using a custom script.
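The per-level statistics reported in the text (mean ± SD over n = 100 000 or 50 000 samples) can be reproduced with a few lines of analysis code. The sketch below is a minimal Python/NumPy illustration rather than the original MATLAB script; the synthetic trace, units, and scaling are assumptions for the example.

```python
import numpy as np

SAMPLING_HZ = 10_000          # acquisition rate reported in the text

def level_stats(current_pa: np.ndarray, n_samples: int = 100_000):
    """Mean +/- SD of a single current level over the first n_samples points.

    current_pa : 1-D array of current values (pA) from a recorded trace,
                 assumed to contain one stable conductance level.
    """
    segment = current_pa[:n_samples]
    return segment.mean(), segment.std()

# Example with a synthetic trace standing in for a real recording:
rng = np.random.default_rng(0)
trace = rng.normal(loc=-45.0, scale=1.5, size=12 * SAMPLING_HZ)  # 12 s of data
mean_i, sd_i = level_stats(trace)
print(f"current level: {mean_i:.1f} +/- {sd_i:.1f} pA")
```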
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
Space of group orderings, quasi morphisms and bounded cohomology
For a group $G$, we construct quasi morphisms from its left orderings and a map from the space of left orderings to the second bounded cohomology. We show that these maps reflect various properties of group orderings.
Introduction
A total ordering < of a group G is called a left ordering if the relation < is preserved by the left action of G, that is, a < b implies ca < cb for all c ∈ G. Right orderings are defined in a similar way. An ordering < is called a bi-ordering if it is both a left and a right ordering. We denote by LO(G), BO(G) the sets of all left orderings and bi-orderings of G, respectively. Each left ordering < is determined by its positive cone P_< = {g ∈ G | g > 1}, which has the following two properties.
[LO1] P is stable under multiplication: P · P ⊂ P. [LO2] G is the disjoint union G = P ⊔ P^{−1} ⊔ {1}. Conversely, for a subset P of G having the properties [LO1] and [LO2], the ordering <_P defined by x <_P y if x^{−1}y ∈ P is a left ordering whose positive cone is P. Thus, LO(G) is identified with a subset of the power set {0, 1}^G.
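A standard example, not taken from the paper, may help fix ideas. The usual ordering of Z has positive cone P = {1, 2, 3, . . .}, and [LO1], [LO2] are immediate. On Z², any vector v = (a, b) whose coordinates are linearly independent over Q, for example v = (1, √2), defines a left ordering with positive cone

P_v = {(m, n) ∈ Z² | am + bn > 0}:

[LO1] holds because the open half-plane is closed under addition, and [LO2] holds because am + bn ≠ 0 for every non-zero (m, n), so Z² = P_v ⊔ (−P_v) ⊔ {0}.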
We define the topology of LO(G) by giving a basis of open sets of the form U_g = {P ∈ LO(G) | g ∈ P} for g ∈ G. It is known that LO(G) is compact and totally disconnected [7]. The aim of this paper is to study the relationship between the space LO(G) and quasi morphisms or bounded cohomology groups. For x ∈ G, a subgroup H ⊂ G, and P ∈ LO(G) having some additional properties, we construct a Z-valued quasi morphism ρ^H_{x,P} : H → Z and a map ψ^H_x from a subspace of LO(G) to the 2nd bounded cohomology group of H. We determine the image of ψ^G_x for a group whose R-coefficient 2nd bounded cohomology vanishes (Theorem 1), and show various properties of the maps ρ^H_{x,P} and ψ^H_x.
• ψ G x is continuous for a group whose R-coefficient 2nd bounded cohomology vanishes (Theorem 2).
• ψ H x classifies the dynamics of H if x belongs to Z G (H), the centralizer of H in G (Theorem 4).
• ρ^H_{x,P} provides a condition for abelian subgroups to be convex (Theorem 5).

The plan of the paper is as follows. In Section 2 we describe the construction of the quasi morphisms and their fundamental properties. In Section 3 we provide a cohomological description of the map ψ^H_x and study the dynamics of the group G. In Section 4 we use the map ψ^H_x to study convex subgroups.

The author is grateful for the suggestions of Professor Toshitake Kohno during the preparation of the paper. He also wishes to express his thanks to Dale Rolfsen and Adam Clay for stimulating discussions. This research was supported by JSPS Research Fellowships for Young Scientists.
Quasi morphisms derived from left-orderings
First of all we review the definitions of quasi morphisms and bounded cohomology which are used in this paper.
Let K = Z or R. For a group G, the bounded cohomology group H^*_b(G; K) is the cohomology of the bounded cochain complex. In this paper we only treat 2nd bounded cohomology classes which are trivial in the usual group cohomology, that is, classes in the kernel of the comparison map ι : H^2_b(G; K) → H^2(G; K). A map φ : G → K is called a quasi morphism of defect D if |φ(gg′) − φ(g) − φ(g′)| ≤ D holds for all g, g′ ∈ G. A quasi morphism is called an almost homomorphism if there exists a homomorphism τ : G → K and a constant C > 0 such that |φ(g) − τ(g)| ≤ C holds for all g ∈ G. Let QMor(G; K) and AHom(G; K) be the sets of quasi morphisms and almost homomorphisms, respectively. The following lemma is well-known, but to provide an explicit correspondence we include the proof here.
Proof. Let c = dφ be a bounded 2-cocycle which represents an element of Ker ι. Since c = dφ is bounded, φ is a quasi morphism. Conversely, for a homomorphism τ : G → R, let [τ] : G → Z be the Z-valued 1-cochain taking the integer part of τ, and put c_τ = d[τ]. Then c_τ represents an element in H^2_b(G; Z). It is easy to see that c_τ and c_{τ′} are cohomologous if τ − τ′ is a Z-valued homomorphism.
In this paper we often impose the condition that H^2_b(G; R) = 0. This condition is satisfied if G is amenable, for example if G is an abelian group, a solvable group, and so on. Now we define quasi morphisms from left orderings, and construct the maps mentioned in the introduction. We say an element x ∈ G is co-final to a subset A ⊂ G with respect to the left ordering P of G if for each element a ∈ A, there exists an integer N such that x^{−N} <_P a <_P x^N holds. We say x is universally co-final to A if x is co-final to A with respect to all left orderings of G.
Let H be a subgroup of G. For an element x ∈ G, let Cof H x (G) be the set of left orderings of G such that x is co-final to H. We define InvLO H x (G) as the set of left orderings of G such that the right action of x on the subgroup generated by H and x preserves the ordering. That is, P ∈ InvLO H x (G) if and only if b < P b ′ implies bx < P b ′ x for all b, b ′ in the subgroup generated by x and H.
It is obvious that BO (G) ⊂ InvLO G x (G), and InvLO x (G) = LO (G) if x is central. In the case H = G, we simply denote Cof G x (G), InvLO G x (G) by Cof x (G), InvLO x (G) respectively.
Lemma 1. (1) InvLO^H_x(G) is a closed subset of LO(G). (2) If H is finitely generated, then Cof^H_x(G) ∩ InvLO^H_x(G) is an open subset of InvLO^H_x(G).
Proof. Let H′ be the subgroup of G generated by H and x. Then the set InvLO^H_x(G) is written as the intersection ⋂_{h ∈ H′, h ≠ 1} ( U_{h^{−1}} ∪ U_{x^{−1}hx} ). Observe that U_h is the complement of U_{h^{−1}}, hence each U_g is closed as well as open, and so is each set in the intersection. Thus InvLO^H_x(G) is a closed subset of LO(G). Now assume that H is finitely generated, and let h_1, . . . , h_n be generators of H. To show (2), we first observe that for P ∈ InvLO^H_x(G), x is co-final to H if and only if x is co-final to the generating set {h_1, . . . , h_n}. The set of such orderings is ⋂_{i=1}^{n} ⋃_{N>0} ( U_{x^N h_i} ∩ U_{h_i^{−1} x^N} ), which is open, so Cof^H_x(G) ∩ InvLO^H_x(G) is open in InvLO^H_x(G).

For an ordering P ∈ Cof^H_x(G) ∩ InvLO^H_x(G) with x >_P 1, we define ρ^H_{x,P} : H → Z by letting ρ^H_{x,P}(h) be the unique integer N with x^N ≤_P h <_P x^{N+1}.

Lemma 3. ρ^H_{x,P} is a quasi morphism of defect 1.

Proof. We prove the case x >_P 1; the other case is similar. For h, h′ ∈ H, let x^N ≤_P h <_P x^{N+1} and x^M ≤_P h′ <_P x^{M+1}. Then hx^M ≤_P hh′ <_P hx^{M+1}. Since the ordering <_P is invariant under right multiplication by x, we conclude that x^{N+M} ≤_P hh′ <_P x^{N+M+2}, hence |ρ^H_{x,P}(hh′) − ρ^H_{x,P}(h) − ρ^H_{x,P}(h′)| ≤ 1.

From Lemma 3, the stable map ρ̄^H_{x,P}(h) = lim_{N→∞} ρ^H_{x,P}(h^N)/N is a well-defined R-valued quasi morphism of defect one. It is routine to check the following properties.
Lemma 4. The stable map ρ H x,P : H → R has the following properties.
The above construction of quasi morphisms is motivated from the following example.
Example 1. Let B n be the braid group of n strands, σ 1 , . . . , σ n−1 be the standard generators of B n , and ∆ = (σ 1 σ 2 · · · σ n−1 ) · · · (σ 1 σ 2 )(σ 1 ). ∆ 2 is a generator of the center of B n . It is known B n is left-orderable and ∆ 2 is universally co-final [1]. Let < D be the Dehornoy ordering of B n , which is the standard left ordering of B n . See [1] for precise definition. The quasi morphism ρ Bn ∆ 2 ,<D is called the Dehornoy floor quasi morphism, and the stable map ρ Bn ∆ 2 ,<D is called the twisting number, which are defined in [5]. These quasi morphisms are quite useful to study the relationships between topology and orderings [3], [4], [5]. Now we are ready to define a map from the space of left-orderings to the bounded cohomology groups.
The map ψ^H_x is natural with respect to inclusions in the following sense. Let K and H be subgroups of G, and assume that K is also a subgroup of H. Take x ∈ H. Let i : H → G and j : K → H be the inclusions. Then, by taking restrictions, i induces a continuous map i^* : LO(G) → LO(H), which is explicitly written as i^*(P) = P ∩ H. Clearly this map restricts to a continuous map between the corresponding subspaces of orderings, and by definition it is easy to see that the resulting diagram relating ψ^H_x and ψ^K_x commutes.
Now let us study the properties of ψ^H_x. First we show that the map ψ_x is non-trivial by determining the image of ψ_x for a group G whose R-coefficient 2nd bounded cohomology vanishes. Theorem 1. Let G be a finitely generated left-orderable group whose R-coefficient 2nd bounded cohomology vanishes, and let x ∈ G be an element whose representing homology class [x] ∈ H_1(G; Z) has infinite order. Then the image of the map ψ^G_x consists of the bounded cohomology classes represented by homomorphisms τ : G → R with τ(x) = 1. Proof. First we show the theorem in the free abelian case, G = Z^m. Let {a_0, a_1, . . . , a_{m−1}} be free generators of Z^m. The assumption that [x] has infinite order implies that x is a non-trivial element. With no loss of generality, we can choose the generators so that x = a_0^k, where k is a positive integer.
We define the ordering P of A′ as follows. Let L_τ = {(t, kr_1 t, . . . , kr_p t) ∈ R^{p+1} | t ∈ R} be the line in R^{p+1}, and let p_L : R^m → L_τ be the orthogonal projection. On the points of the line L_τ, we define the ordering <_L by comparing the parameter t. Next we extend the ordering P. Let p : A → A′ be the projection, and let Q be a left ordering of A_0. We define the ordering R of A by a < a′ if p(a) <_P p(a′), or p(a) = p(a′) and a <_Q a′. From the construction, the restriction of R to A′ coincides with the ordering P, so we complete the proof of the theorem for free abelian groups. Now we show the general case. Let A = H_1(G; Z)/Tors H_1(G; Z) be the torsion-free part of the 1st homology group, and let p : G → A be the projection. Since H^1(G; Z) = Hom(A, Z), the map p induces an isomorphism of the 2nd bounded cohomologies. Let [τ] be a 2nd bounded cohomology class of G, represented by a homomorphism τ : G → R such that τ(x) = 1. Then one can find a homomorphism τ_A : A → R with τ = τ_A ∘ p. We construct a left ordering P of G as follows.
First observe that τ(x) = 1 and [x] = p(x) is a non-trivial element of H_1(G; Z). Thus, from the abelian case we have just proved, we can find a left ordering <_A of A which represents the bounded cohomology class [τ_A]. Now let us consider the exact sequence 1 → Ker p → G → A → 1, and let <′ be a left ordering of Ker p. We define a left ordering P of G by declaring g <_P g′ if p(g) <_A p(g′), or p(g) = p(g′) and 1 <′ g^{−1}g′. Then ψ^G_x(P) = p^*([τ_A]) = [τ] holds.
Next we provide an explicit description of the map ψ^H_x for a subgroup H whose R-coefficient 2nd bounded cohomology vanishes.
Proof. Recall that for the torsion-free abelian group Z^m = ⟨a_1, . . . , a_m⟩, a bounded cohomology class in the kernel of the comparison map is determined by the values (φ̄(a_1), . . . , φ̄(a_m)). Here φ̄ : Z^m → R is the stable map of a quasi morphism φ, defined by φ̄(a) = lim_{N→∞} φ(Na)/N. Now, as in the proof of Theorem 1, let A be the torsion-free part of the 1st homology group of H and let p : H → A be the projection. Since p^* induces an isomorphism of bounded cohomology, we can find a quasi morphism φ : Z^m → Z such that p^*φ = φ ∘ p represents the bounded cohomology class [ψ^H_x(P)]. This implies that the difference ρ^H_{x,P} − φ ∘ p is an almost homomorphism. Thus, there exist a homomorphism τ : H → Z and a constant C > 0 such that |ρ^H_{x,P}(h^N) − (φ ∘ p)(h^N) − τ(h^N)| ≤ C holds for all N > 0 and h ∈ H. Therefore, the stable maps of ρ^H_{x,P} and φ ∘ p coincide modulo Z.
Hence the claimed description of ψ^H_x(P) holds.
Based on this description, we extend the maps ψ H x for the whole of InvLO H x (G) so that it contains more information.
Definition 2.
Let H be a finitely generated subgroup of G whose R-coefficient 2nd bounded cohomology vanishes, and let x ∈ G. Let y_1, . . . , y_b be elements of G which form a basis of the torsion-free part of H_1(H; Z). We mainly use the following special case, which corresponds to the case H = Z = ⟨y⟩. Definition 3. For y ∈ H, let us define the map ψ^H_{x,y} : InvLO^H_x(G) → R ∪ {∞} by ψ^H_{x,y}(P) = ρ̄^H_{x,P}(y) if P ∈ Cof^H_x(G), and ψ^H_{x,y}(P) = ∞ otherwise. This map is an extension of the lift of the map ψ^H_x.
The preimage of each basic open set under ψ^H_{x,y} is open, so we conclude that ψ^H_{x,y} is continuous.
Example 2. The above abstract constructions of the maps from LO(G) to S^1 are motivated by the wish to understand the topology of LO(G), and they are a generalization of the map f : LO(Z²) → S^1 constructed in [7], which we describe here. First we review a description of LO(Z²) given by Sikora [7]. Let a, b be the free generators of Z². Let R_[) be the real line with the topology having a basis of the form [a, b), and let R_(] be the real line with the topology having a basis of the form (a, b]. Let S^1_[) = R_[)/Z and S^1_(] = R_(]/Z, and regard them as subsets of R². A point (p, q) ∈ S^1 is called rational if p/q ∈ Q or q = 0. Let X be the topological space obtained by identifying the irrational points of S^1_(] and S^1_[). Sikora showed that X is homeomorphic to LO(Z²). This description is useful, because such coordinates reflect the properties of the ordering, as we will see in Example 5. Now let us define the continuous map f : X = LO(Z²) → S^1 by sending a point x ∈ S^1_[) or S^1_(] to the corresponding point of S^1. The relationship between ψ_{a,b} and f is as follows. Let us take a double cover π_2 : S^1 → S^1 = R ∪ ∞, which sends (p, q) → q/p (here we regard ±1/0 as ∞). Then π_2 ∘ f = ψ_{a,b} holds.
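A quick numerical sanity check of these constructions can be run for G = Z² with a half-plane ordering. The sketch below is an illustration under assumptions, not code from the paper: for the ordering determined by the direction (1, √2) and the base element x = a = (1, 0), the value ρ(h) = ⌊h₁ + √2·h₂⌋ realizes ρ_{x,P}, and the script verifies that its defect is at most 1 and that its stable value on b = (0, 1) recovers the slope parameter of the ordering.

```python
import math, random

SLOPE = math.sqrt(2)          # irrational slope: defines a left ordering of Z^2

def rho(h):
    """rho_{x,P}(h) for x = (1,0) and the half-plane ordering with direction (1, SLOPE):
    the unique integer N with x^N <=_P h <_P x^(N+1)."""
    return math.floor(h[0] + SLOPE * h[1])

# Quasi-morphism check: defect at most 1.
random.seed(0)
defect = 0
for _ in range(10_000):
    g = (random.randint(-50, 50), random.randint(-50, 50))
    h = (random.randint(-50, 50), random.randint(-50, 50))
    gh = (g[0] + h[0], g[1] + h[1])
    defect = max(defect, abs(rho(gh) - rho(g) - rho(h)))
print("observed defect:", defect)           # <= 1

# Stable value on b = (0,1) approximates the slope parameter of the ordering.
N = 10**6
print("rho_bar(b) ~", rho((0, N)) / N, " slope =", SLOPE)
```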
Using almost the same arguments, we can prove the following generalized version of the proposition.
Theorem 2.
If H is a finitely generated subgroup of G whose R-coefficient 2nd bounded cohomology vanishes, then the map ψ^H_x is continuous. Proof. As in the proof of Theorem 1, it is sufficient to treat the case where H is a free abelian group, and this case easily follows from the arguments of the proof of Proposition 1.
Actions on real lines and circles
As is mentioned in many articles (see [2], [6]), left orderings of groups are closely related to group actions on the real line and the circle. In this section we provide relations between our quasi morphisms and the dynamics of G related to a left ordering. In this section, we always assume that G is a countable group. First we review the definition of the bounded Euler class, which contains much information about group actions on the circle. Let us consider the central extension 1 → Z → H̃omeo_+(S^1) → Homeo_+(S^1) → 1, where H̃omeo_+(S^1) denotes the group of homeomorphisms of R commuting with the translation r ↦ r + 1 and p is the projection. Take a set-theoretical section σ of p by σ(f) = f̃, where f̃ is the unique element of p^{−1}(f) such that f̃(0) ∈ [0, 1), and let c be the cochain of Homeo_+(S^1) defined by c(f, g) = σ(fg)^{−1}σ(f)σ(g). Since c(f, g) = σ(fg)^{−1}σ(f)σ(g) lies in the kernel of p, which is Z, we may regard c as a Z-valued bounded 2-cocycle of Homeo_+(S^1). The bounded Euler class is the 2nd bounded cohomology class [eu] of Homeo_+(S^1) defined by c. For a group action on the circle φ : G → Homeo_+(S^1), we define the bounded Euler class of the action φ by eu(φ) = φ^*([eu]) ∈ H^2_b(G; Z). Next we review the relationship between orderings and dynamics. Let G → Homeo_+(R) be a faithful action of a countable group G on the real line. By choosing a countable dense sequence {r_n}_{n>0} of R, we define the left ordering < of G by declaring g < g′ if and only if the sequence of reals {g(r_n)} is bigger than the sequence {g′(r_n)} with respect to the lexicographical ordering of R^N.
Conversely, for an left ordering P ∈ LO (G), we can construct a faithful action of G on the real line A P : G → Homeo + (R), which we call a dynamical realization of the ordering P as follows.
Take a numbering {g_i}_{i≥0} of G, and define the real numbers t(g_i) in the following inductive way. First we define t(g_0) = 0. For i ≥ 1, if g_i is larger than all of g_0, . . . , g_{i−1} with respect to <_P, we set t(g_i) = max_{j<i} t(g_j) + 1; if g_i is smaller than all of them, we set t(g_i) = min_{j<i} t(g_j) − 1; otherwise we set t(g_i) to be the midpoint of t(g_j) and t(g_k), where g_j and g_k are the nearest elements among g_0, . . . , g_{i−1} with g_j <_P g_i <_P g_k. Now we define the action of G on the subset {t(g_i)} of R by g · t(g_i) = t(gg_i). By extending this action to the whole of R, we obtain the desired action A_P.
Although this construction depends on the choice of the numbering and of the extension, the topological conjugacy class of the dynamical realization is independent of these choices. From now on, we always choose a numbering so that g_0 = id. Then by construction, f <_P g if and only if [A_P(f)](0) < [A_P(g)](0).
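The inductive assignment of the values t(g_i) is easy to simulate for a concrete ordered group. The sketch below is only an illustration: the finite enumeration of the group, the half-plane ordering of Z², and the concrete tie-breaking choices are assumptions made for the example; it checks that the resulting assignment is order-preserving.

```python
import math, itertools

SLOPE = math.sqrt(2)

def key(g):
    # g <_P h iff key(g) < key(h): half-plane ordering of Z^2 with direction (1, SLOPE)
    return g[0] + SLOPE * g[1]

# Enumerate a finite chunk of Z^2 with the identity first.
elements = [(0, 0)] + [g for g in itertools.product(range(-3, 4), repeat=2) if g != (0, 0)]

t = {}
for i, g in enumerate(elements):
    if i == 0:
        t[g] = 0.0
        continue
    smaller = [t[h] for h in elements[:i] if key(h) < key(g)]
    larger  = [t[h] for h in elements[:i] if key(h) > key(g)]
    if not larger:
        t[g] = max(smaller) + 1.0                   # g above all previous elements
    elif not smaller:
        t[g] = min(larger) - 1.0                    # g below all previous elements
    else:
        t[g] = (max(smaller) + min(larger)) / 2.0   # between its nearest neighbours

# The assignment respects the ordering: t(g) < t(h) exactly when g <_P h.
ordered = sorted(elements, key=key)
assert all(t[a] < t[b] for a, b in zip(ordered, ordered[1:]))
print("t-values assigned for", len(elements), "group elements")
```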
Let H be a subgroup of the left orderable group G and take x ∈ Z_G(H), the centralizer of H in G. For P ∈ Cof^H_x(G), let us consider its dynamical realization A_P. We may assume that A_P(x) acts on R as the translation r ↦ r + 1, by taking an appropriate conjugation. We denote this normalized dynamical realization by A_{P,x}. Since x ∈ Z_G(H), [A_{P,x}(h)](r + 1) = [A_{P,x}(h)](r) + 1 holds for all h ∈ H and r ∈ R. Therefore the restriction of A_{P,x} to H induces an action of H on R/⟨A_{P,x}(x)⟩ = S^1. We denote this action by Ā_{P,x} : H → Homeo_+(S^1) and call it the associated circle action. Now we provide a cohomological interpretation of our map ψ^H_x via the associated circle action. Theorem 3. Let x ∈ Z_G(H) and P ∈ Cof^H_x(G). Then ψ^H_x(P) = −eu(Ā_{P,x}). This theorem implies that ψ^H_x can be seen as a characteristic class of left orderings, which extends the bounded Euler class of circle actions. Thus, by the definition of the bounded Euler class, we have ψ^H_x(P) = −eu(Ā_{P,x}). Recall that a left ordering P ∈ LO(G) is dense if P admits no minimal positive element. For a dense ordering P, from the construction of the dynamical realization, all orbits of the dynamical realization are dense. We say two dense left orderings are dynamically equivalent if their dynamical realizations are conjugate.
Example 3. The group G itself naturally acts on LO(G) by g : P ↦ P · g. Two left orderings are called conjugate if they belong to the same G-orbit. Let P, Q be two left orderings which are conjugate by h ∈ G, and take a dynamical realization A_P of P. Then we can choose the dynamical realization A_Q as A_Q(g) = A_P(h^{−1}gh), so they are dynamically equivalent. Now we show that ψ_x classifies dynamical equivalence.
Theorem 4. Let x ∈ Z(G) and P, Q ∈ Cof x (G) be dense orderings. Then P and Q are dynamically equivalent if and only if ψ x (P ) = ψ x (Q).
Proof. First observe that if P and Q are dynamically equivalent then the associated circle actions Ā_{P,x} and Ā_{Q,x} are conjugate. Conversely, assume that Ā_{P,x} and Ā_{Q,x} are conjugate by φ. Then the lift σ(φ) provides a conjugation between A_{P,x} and A_{Q,x}. Since P and Q are dense, all orbits of their associated circle actions Ā_{P,x} and Ā_{Q,x} are dense in S^1. By Ghys' theorem [2, Theorem 6.5], Ā_{P,x} and Ā_{Q,x} are conjugate if and only if eu(Ā_{P,x}) = eu(Ā_{Q,x}). Thus by Theorem 3 we obtain the desired result.
Using this, we give a cohomological characterization of infinite Thurston-type orderings of the braid groups. Thurston-type orderings are orderings of the braid group B_n which are defined by the Nielsen-Thurston action, the action of B_n on the real line R constructed by using hyperbolic geometry. These orderings are generalizations of the standard Dehornoy ordering <_D. See [1], [8] for details.
Corollary 1.
A dense left ordering P of the n-braid group B n is a Thurston type ordering if and only if ψ ∆ 2 (P ) = ψ ∆ 2 (D), where D represents the Dehornoy ordering.
Remark 1. We may define the notion of semi-dynamical equivalence of general left orderings by using the notion of semi-conjugation instead of conjugation. This relation is the same as the dynamical equivalence if we consider dense orderings. Since the bounded Euler class is a complete invariant of semi-conjugacy [2], we can rephrase Theorem 4 as Two left orderings P, Q ∈ Cof x (G) are semi-dynamically equivalent if and only if ψ x (P ) = ψ x (Q).
Convexity criterion
In this section, we utilize the maps ψ^H_x and ρ^H_{x,P} to study the structure of orderings. Let A ⊂ B be subsets of the left-orderable group G. We say A is convex in B with respect to the ordering P ∈ LO(G) if whenever an element b ∈ B satisfies the inequality a <_P b <_P a′ for some a, a′ ∈ A, then b ∈ A. Convex subgroups have the following properties. Lemma 6. Let G be a left-orderable group and B a subgroup of G. Let A, A′ be subgroups of B which are convex in B with respect to an ordering P ∈ LO(G). Then either A ⊂ A′ or A′ ⊂ A.
Proof. Assume that we have neither A ⊂ A′ nor A′ ⊂ A. Then there exist elements a ∈ A − A′ and a′ ∈ A′ − A. With no loss of generality, we may assume 1 <_P a and 1 <_P a′. If a′ <_P a, then by convexity of A, a′ ∈ A. Thus a <_P a′. On the other hand, by the same reason, we have a′ <_P a. This is a contradiction.
The following lemma is simple but provides a useful criterion for convexity of some subgroups. Lemma 7. Let A be a subgroup of G generated by x_1, . . . , x_m. Let w be an element of A, fix a word expression w = x_{i_1}^{j_1} x_{i_2}^{j_2} · · · x_{i_n}^{j_n}, and define e_p = Σ_{q : i_q = p} j_q. For an ordering P ∈ InvLO^A_x(G), if ρ^A_{x,P}(w^N) = 0 holds for all N ∈ Z (this assumption is satisfied, for example, when w is convex in A), then ρ̄^A_{x,P}(w) = 0; in general the values satisfy −(n − 1) ≤ Σ_{q=1}^{n} ρ^A_{x,P}(x_{i_q}^{j_q}) ≤ 0, and when the generators x_1, . . . , x_m commute, Σ_{p=1}^{m} e_p ρ̄^A_{x,P}(x_p) = 0.
Example 4. Let B_3 = ⟨x, y | x² = y³⟩ be the braid group on 3 strands. Let us consider the cyclic subgroup A = ⟨w⟩, where w = xy² = x³y⁻¹. Assume that there exists a left ordering P ∈ InvLO^A_x(G) which makes A convex. Applying Lemma 7 to the expression w = xy² gives −1 ≤ ρ_{x,P}(x) + ρ_{x,P}(y²) ≤ 0, while the expression w = x³y⁻¹ gives −1 ≤ ρ_{x,P}(x³) + ρ_{x,P}(y⁻¹) ≤ 0. Using ρ_{x,P}(x^k) = k and the fact that ρ_{x,P}(ab) − ρ_{x,P}(a) − ρ_{x,P}(b) ∈ {0, 1}, the first inequality forces ρ_{x,P}(y) = −1, while the second forces ρ_{x,P}(y) ≥ 2. There is no value which satisfies both constraints, so we conclude that w is not convex. Now we provide a criterion for convexity of abelian subgroups.
Theorem 5. Let G be a left orderable group and x ∈ G. Let A be a rank n free abelian subgroup of G generated by x_1, . . . , x_n, and let B be a rank k (< n) free abelian subgroup of A generated by elements b_1, . . . , b_k with b_i = x_1^{e^i_1} · · · x_n^{e^i_n}. Then B is convex in A with respect to an ordering P ∈ InvLO^A_x(G) if and only if the following conditions hold.
1. ρ^A_{x,P}(b) = 0 holds for all b ∈ B.
2. Σ_j e^i_j ρ^A_{x,P}(x_j) = 0 holds for all i = 1, . . . , k.
3. dim_Q span_Q {ρ^A_{x,P}(x_1), . . . , ρ^A_{x,P}(x_n)} = n − k. Proof. We prove the theorem by induction on n. The case n = 1 is obvious. Assume that we have already proved the theorem for ranks smaller than n.
Assume that B is convex. Then (1) is obvious, and the assertion (2) follows from Lemma 7. Conversely, assume that all conditions 1-3 hold but B is not convex. Then there exists an element y = x_1^{q_1} · · · x_n^{q_n} ∈ A − B such that b <_P y <_P b′ holds for some b, b′ ∈ B. With no loss of generality, we may assume that the integers {q_i} have no common divisor. Now, ρ^A_{x,P}(y^N) = 0 for all N ∈ Z, so by Lemma 7, Σ_i q_i ρ^A_{x,P}(x_i) = 0. Since y ∉ B, we have dim_Q span_Q {ρ^A_{x,P}(x_1), . . . , ρ^A_{x,P}(x_n)} < n − k, which contradicts the condition (3).
This theorem is useful to relate the topology of LO(G) and the structure of orderings. In particular, it implies that ψ^H_x serves as a kind of good coordinate system which reflects the properties of orderings. The following observation was first made by Adam Clay, using Sikora's description of LO(Z²); it provides relationships between the topology of LO(Z²) and convex subgroups.
Example 5. Let us consider the rank two free abelian group A = x, y . Then by Theorem 5, the subgroup x p y q is convex with respect to the ordering P ∈ InvLO A x (A) if and only if p, q has no common divisor and ρ A x,P (y) = −q/p. On the other hand, if P ∈ InvLO A x (A), then x is convex in A with respect to the ordering < P .
Thus, an ordering P ∈ LO (A) admits a convex non-trivial subgroup if and only if ψ A x,y (P ) ∈ Q ∪ {∞}. In Sikora's description of the space LO (G), this implies that P ∈ LO (A) admits a convex non-trivial subgroup if and only if P is a rational point of S 1 [) or S 1 (] .
Effective and Efficient Conversation Retrieval for Dialogue State Tracking with Implicit Text Summaries
Few-shot dialogue state tracking (DST) with Large Language Models (LLM) relies on an effective and efficient conversation retriever to find similar in-context examples for prompt learning. Previous works use raw dialogue context as search keys and queries, and a retriever is fine-tuned with annotated dialogues to achieve superior performance. However, the approach is less suited for scaling to new domains or new annotation languages, where fine-tuning data is unavailable. To address this problem, we handle the task of conversation retrieval based on text summaries of the conversations. An LLM-based conversation summarizer is adopted for query and key generation, which enables effective maximum inner product search. To avoid the extra inference cost brought by LLM-based conversation summarization, we further distill a light-weight conversation encoder which produces query embeddings without decoding summaries for test conversations. We validate our retrieval approach on MultiWOZ datasets with GPT-Neo-2.7B and LLaMA-7B/30B. The experimental results show a significant improvement over relevant baselines in real few-shot DST settings.
Introduction
Dialogue state tracking (DST) is one of the most crucial components in task-oriented dialogue systems.The goal of DST is to track users' intents, slots and values at every turn of a dialogue based on a predefined schema (Budzianowski et al., 2018).The challenge of training a supervised DST model lies in the cost of dialogue state annotations, which is not scalable to new schemas, domains or annotation languages.To address these challenges, recent works (Hu et al., 2022;Chen et al., 2023) adopt in-context learning with pre-trained large language models (LLM) for few-shot DST.In the few-shot setting, similar dialogue exemplars are retrieved based on the test sample and then these exemplars are added to the LLM prompt for target generation.This approach is attractive since no domain-specific fine-tuning is required for the LLM but it can still generalize to unseen domains.
One challenge in few-shot DST is how to retrieve salient conversation exemplars (e.g., in a set of 3 to 5) from the support set, which serve as demonstrations for the LLM. Ideally, a retrieved exemplar should carry both the same dialogue history and state change as the test sample. However, in a practical few-shot setting (e.g., with at most 100 annotated support examples), it is likely that no exemplar in the support set satisfies the above requirement. Consider a test example with two user turns:

user: book a flight to London Heathrow
system: where are you departing from
user: Amsterdam

It is possible that the closest exemplar we can get from the support set is:

user: I'm leaving Manchester by air
system: where are you flying to
user: To Paris

which neither matches the test dialogue state nor the state change. Nevertheless, we hope that the LLM can generalize by learning from such exemplars with an identical user intent. The retrieval task gets harder when the conversation becomes lengthy with only partial history related to the current user input. For example, in another test dialog:

user: what's the weather in London
system: sunny
user: book a flight to London Heathrow
system: where are you departing from
user: Amsterdam

the user's current intent is identical to the earlier test sample, but it involves unrelated history. Still we want to match the test sample to a similar exemplar, which reflects the user's intent up to the current point of the conversation. This retrieval cannot be easily accomplished with pre-trained dense retrieval models based on word or sentence similarity. To optimize retrieval performance, previous works (Hu et al., 2022; Chen et al., 2023) fine-tune a dense retriever with "structurally similar" dialogue examples identified from dialogue state annotations with heuristics. Hu et al. (2022) additionally report that including dialogue state information in the retrieval key is helpful. However, the approach is not scalable to a practical few-shot setting (with fewer than 100 annotated support examples), as fine-tuning easily leads to overfitting and catastrophic forgetting (McCloskey and Cohen, 1989; Lee et al., 2022). It is also impractical to expect every domain owner to create their own fine-tuning data with well-engineered rules.
In this work, we propose a new solution for conversation retrieval, starting with the introduction of an LLM-based conversation summarizer. For each exemplar to be indexed and also for each test dialog, the summarizer produces a text summarizing what the user wants at this point of the conversation. In Section 2, we provide a discussion of this specific summarization choice and how it compares to the dialogue state. The summaries are then used as condensed search keys and queries applicable to pre-trained dense retrievers with standard nearest neighbor search. We empirically show that in the few-shot setting, using summaries as retrieval keys and queries is more effective than using raw dialogues.
Notably, the conversation summarization task described above can be easily handled by state-of-the-art LLMs via prompt learning, as we will show in an ablation study. However, the deployment of such a retrieval system also introduces extra model parameters and inference cost. Unlike search keys, which can be pre-built offline, a search query needs to be auto-regressively decoded for each test dialogue right during inference. To improve the efficiency of this conversation retriever, our second contribution in this work focuses on distilling a light-weight conversation encoder which embeds a raw dialogue directly into a vector space similar to the embedding of its summary. The light-weight conversation encoder enables efficient conversation search over a vector database without explicit query generation. When evaluated on the MultiWOZ dataset with GPT-Neo-2.7B (Black et al., 2022), LLaMA-7B, and LLaMA-30B (Touvron et al., 2023) for few-shot DST, we find that the distilled conversation encoder is not only more efficient, but also more effective than a cascaded conversation retriever with explicit query generation. Our approach also significantly outperforms relevant baselines, which use annotated dialogues for retriever fine-tuning.
Conversation Retrieval with Summaries for LLM-based DST
In the context of task-oriented dialogues with multiple turns of interactions between a user and a system, the objective of DST is to predict the accumulated intents, slots and values at each user turn. In an LLM-based approach, the generation of a dialogue state is conditioned on a task-specific prompt. The prompt includes at least the test conversation and a set of k demonstration examples, from which we expect the LLM to learn to generalize. Considering the size limit of the prompt, k is expected to be small (3-5 examples). Each of the retrieved examples is an annotated conversation sharing similar features with the test conversation.
We expect to retrieve these exemplars with a dense retriever from a "support set" (e.g., 100 annotations) that can be constructed with minimum effort for domain scaling.
There are two major challenges of conversation retrieval for LLM-based DST described above. First, a good representation of search keys and queries needs to be found. As we analyze in Section 1, the similarity of two dialogues is not directly quantifiable by semantic distance, but rather requires a more sophisticated structural matching mechanism or a higher-order similarity function. This requirement leads to the second challenge as to how to train an effective conversation retriever that can scale across domains. Previous works (Hu et al., 2022; Chen et al., 2023) mainly fine-tune pre-trained dense retrievers with annotated dialogues obtained from the support set. However, fine-tuning is not realistic in a few-shot setting and for every domain.
As shown in Figure 1-(a), our work introduces a query/key generation step in the LLM-based DST.The generation is performed with another LLM which transforms the raw dialogue context into a text summary whose similarity can be evaluated more easily with pretrained retrievers.Specifically, the text summary represents the user's intent up to the current point of the conversation.It grounds the latest user input onto the dialogue history, keeping only information related to the current user intent.Note that the summary is a contextual rewriting of the current user intent that is possibly expressed in multiple turns, with applied ellipsis recovery (Hardt, 1997) and co-reference resolution (Pradhan et al., 2012).Examples of the conversation summary are in Table 6.The summary can also be viewed as a text description of an updated dialogue state which is to be predicted by the LLM.Unlike the dialogue state, the summary does not maintain all conversation history but only includes information relevant to the current user input.
With the introduction of an explicit query/key generation step, we expect that the conversation retrieval becomes easier and the search index can be built more efficiently.To construct the search index, an offline process can be triggered to generate text summaries for every example in the support set.Note that search key generation does not add any inference cost.However, the query generation step comes at an extra cost since the generation needs to happen in an online process.In the next section, we describe how to make the conversation retrieval more efficient by stepping away from explicit query generation.
Conversation Encoder Distillation
Note that in the proposed conversation retriever, the LLM-based conversation summarizer needs to be invoked for every test sample to generate the search query, as shown in Figure 1-(a). To eliminate the extra inference cost, we propose to distill a light-weight conversation encoder which directly embeds a dialogue into a vector space similar to its summary, by maximizing their embedding similarity. The encoder is trained with large-scale dialogue-summary pairs generated by the conversation summarizer in an offline process. After training the model, as shown in Figure 1-(b), we can directly encode each dialogue into a query embedding for maximum inner product search. We call our conversation encoder CONVERSE, standing for CONversation embeddings for VErsatile Retrieval with implicit SummariEs. Next we explain the structure and training objective of CONVERSE.
Model Preliminaries
In our problem setup, we are given a set of unlabeled conversations between a user and a system, where each conversation x_i consists of l_i utterances (u_{i,1}, . . . , u_{i,l_i}) and each utterance u_{i,j} is a sequence of T_{i,j} tokens (x_{i,j,1}, . . . , x_{i,j,T_{i,j}}). As shown in Figure 2-(b), the training data of CONVERSE is prepared by invoking the conversation summarizer to generate a summary for each conversation x_i, denoted as z_i, which consists of T′_i tokens with T′_i ≪ Σ_{j=1}^{l_i} T_{i,j}. We denote the dataset augmented with summaries as D_a = {(x_i, z_i)}_{i=1}^{n}. For brevity, we omit the first subscript i if there is no ambiguity. Given the set of conversation-summary pairs, the goal is to train an encoder f_θ : V^T → R^{T×d} such that the similarity between a conversation and its summary is maximized, where V denotes a set of predefined tokens.
Conversation and Summary Embedding
To match a conversation against a summary, we leverage the commonly used architecture in dense retrieval known as the dual encoder (Yih et al., 2011;Lee et al., 2019;Karpukhin et al., 2020a), where a conversation and a summary are encoded jointly for similarity comparison.State-of-the-art dual encoders (Khattab and Zaharia, 2020) represent each encoding as multiple vectors, typically the contextualized token vectors, to represent the text.These models largely improve the model expressiveness, and exhibit much stronger performance and robustness compared to their single-vector counterparts (Thakur et al., 2021).Based on it, we represent both the conversation embedding f θ (x) and the summary embedding f θ (z) as a matrix.While the summary encoder in the dual architecture can be directly integrated into off-the-shelf sentence encoders, our conversation encoder (CONVERSE) is designed to reflect the inductive bias of the summarization task.
CONVERSE Remember that the task of the conversation summarizer is to summarize the current user intent by grounding it to the conversation history. Hence the latest user input (the state delta) is most important and any past utterances irrelevant to the latest input should be dropped out. To reflect the nature of the summarization task, we explicitly model the grounding step between the latest user input u_l and past utterances u_1, . . . , u_{l−1} as a structural bias in CONVERSE. This is achieved with the introduction of a soft retrieval structure that softly retrieves past utterances or tokens which are relevant to the latest user input. Specifically, the soft retrieval is simulated with another neural network g_ϕ : R^d × R^d → [0, 1], which outputs the relevance score of each token in the utterances u_1, . . . , u_{l−1} conditioned on the latest user utterance u_l. Then, the relevance scores are used to downweight irrelevant token representations of the conversation x:

f̃_{θ,ϕ}(x)_{j,t} = g_ϕ(f_θ(x)_{j,t}, ū_l) · f_θ(x)_{j,t},   (1)

where t ∈ {1, . . . , T_j} for each j ∈ {1, . . . , l − 1}, f_θ(x)_{j,t} is the contextual representation of the t-th token in u_j, and ū_l is a single-vector representation of the latest user utterance u_l. Intuitively, an irrelevant token in the conversation history receives a small weight, reducing its contribution to the final similarity scoring against the summary. Conversely, a token in the latest user input always carries the highest weight 1 and contributes more to the similarity computation.
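A minimal sketch of this gating is given below. The dot-product-plus-sigmoid form of g_ϕ and the mean-pooling of the latest user utterance are illustrative assumptions, not the exact parameterization used by the authors; only the multiplicative downweighting of history tokens is taken from the description above.

```python
import torch
import torch.nn as nn

class SoftHistoryGate(nn.Module):
    """Downweight history-token embeddings by their relevance to the latest user turn."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)       # assumed form of g_phi

    def forward(self, hist: torch.Tensor, latest: torch.Tensor) -> torch.Tensor:
        # hist:   (T_hist, d) contextual embeddings of history tokens u_1..u_{l-1}
        # latest: (T_last, d) contextual embeddings of the latest user utterance u_l
        query = latest.mean(dim=0)                        # pool the latest turn (assumption)
        scores = torch.sigmoid(self.proj(hist) @ query)   # relevance in [0, 1], shape (T_hist,)
        gated_hist = hist * scores.unsqueeze(-1)          # downweight irrelevant history tokens
        return torch.cat([gated_hist, latest], dim=0)     # latest-turn tokens keep weight 1

# toy usage
gate = SoftHistoryGate(dim=8)
conv = gate(torch.randn(12, 8), torch.randn(5, 8))
print(conv.shape)   # (17, 8)
```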
Training Objective
Given the conversation encoder f̃_{θ,ϕ} and the summary encoder f_θ, together with the set of conversation and summary pairs D_a = {(x_i, z_i)}_{i=1}^{n}, as illustrated in Figure 2-(b), we train the dual encoder to maximize the similarity between a dialogue and its summary with the contrastive loss (Henderson et al., 2017):

L = − Σ_i log [ exp(sim(f̃_{θ,ϕ}(x_i), f_θ(z_i))) / Σ_j exp(sim(f̃_{θ,ϕ}(x_i), f_θ(z_j))) ],   (2)

where sim is the multi-vector similarity function (Khattab and Zaharia, 2020), which computes the similarity between the conversation and its summary, denoted as sim(f̃_{θ,ϕ}(x), f_θ(z)), by averaging, over the conversation tokens, the maximum dot product with the summary tokens:

sim(f̃_{θ,ϕ}(x), f_θ(z)) = (1 / Σ_j T_j) Σ_{j,t} max_s ⟨ f̃_{θ,ϕ}(x)_{j,t}, f_θ(z)_s ⟩.   (3)

In practice, due to computational costs, we sample a mini-batch B ⊂ D_a for computing the denominator of the contrastive loss in equation 2.
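The similarity function and the in-batch contrastive objective can be sketched as follows. This is a schematic re-implementation of equations 2 and 3 as described in the text (ColBERT-style MaxSim averaged over conversation tokens, with an in-batch softmax over summaries), not the authors' released code.

```python
import torch
import torch.nn.functional as F

def maxsim(conv_emb: torch.Tensor, summ_emb: torch.Tensor) -> torch.Tensor:
    """sim(conv, summary): for every conversation token take its best-matching
    summary token (max dot product), then average over conversation tokens."""
    dots = conv_emb @ summ_emb.T            # (T_conv, T_summ)
    return dots.max(dim=1).values.mean()

def contrastive_loss(conv_embs, summ_embs):
    """In-batch contrastive loss over paired (conversation, summary) embeddings."""
    B = len(conv_embs)
    scores = torch.stack([torch.stack([maxsim(conv_embs[i], summ_embs[j]) for j in range(B)])
                          for i in range(B)])              # (B, B) similarity matrix
    targets = torch.arange(B)                              # positives on the diagonal
    return F.cross_entropy(scores, targets)

# toy batch of 4 pairs with variable lengths
convs = [torch.randn(torch.randint(5, 15, (1,)).item(), 16) for _ in range(4)]
summs = [torch.randn(torch.randint(3, 8, (1,)).item(), 16) for _ in range(4)]
print(contrastive_loss(convs, summs).item())
```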
Inference
In LLM-based DST, we are given a small support set of labeled dialogues D_s = {(x^s_i, y^s_i)}_{i=1}^{m}. The search keys can be pre-built offline by calling the conversation summarizer to generate a summary for each dialogue x^s_i from the support set D_s, resulting in a set of (conversation, label, summary) triplets {(x^s_i, y^s_i, z^s_i)}_{i=1}^{m}. The search index is then built with the summary as the key, and a labeled conversation as the value. The summaries are encoded with the fine-tuned summary encoder described in Section 3.2.
During inference, for each conversation x q i from the test set D q = {x q i } Q i=1 , we embed the conversation with the CONVERSE encoder and compute its similarity with every search key using the similarity function in equation 3, i.e., sim( fθ,ϕ (x q i ), f θ (z s j )) for j = 1, . . ., m.As shown in Figure 2-(c), the retriever ranks examples (x s j , y s j ) based on the similarity score and chooses the top-k exemplars.Finally, the retrieved exemplars are added to the prompt of the downstream LLM for dialog state generation.
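End to end, index construction and retrieval can be sketched as below. The functions `summarize`, `encode_summary`, and `encode_conversation` are hypothetical stand-ins for the LLM summarizer, the summary encoder, and CONVERSE; only the ranking logic over summary-keyed exemplars is meant literally.

```python
import torch

# Hypothetical stand-ins for the trained encoders and the LLM summarizer.
def summarize(conversation: str) -> str:              # LLM call; offline, support set only
    return "summary of: " + conversation

def encode_summary(text: str) -> torch.Tensor:         # summary encoder f_theta (multi-vector)
    return torch.randn(6, 16)

def encode_conversation(conv: str) -> torch.Tensor:    # CONVERSE; no summary decoding at test time
    return torch.randn(10, 16)

def maxsim(q: torch.Tensor, k: torch.Tensor) -> float:  # same similarity as in training
    return (q @ k.T).max(dim=1).values.mean().item()

# Offline: build the index over the labeled support set (summary embedding -> exemplar).
support = [("conv about hotels", "hotel state"), ("conv about taxis", "taxi state")]
index = [(encode_summary(summarize(conv)), conv, label) for conv, label in support]

# Online: embed the test conversation directly and take the top-k exemplars for the prompt.
def retrieve(test_conv: str, k: int = 1):
    q = encode_conversation(test_conv)
    ranked = sorted(index, key=lambda item: maxsim(q, item[0]), reverse=True)
    return [(conv, label) for _, conv, label in ranked[:k]]

print(retrieve("user asks to book a taxi"))
```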
Experimental Setup
Common We evaluate LLM-based DST with the proposed conversation retriever on MultiWOZ 2.1 (Eric et al., 2020) and 2.4 (Ye et al., 2022).
To simulate few-shot scenario, we consider a support set of 100 labeled conversations as the default setting in our comparison.For each experimental run, we randomly sample 100 labeled conversations from the training data of MultiWOZ 2.1/2.4.The analysis of other support set sizes is deferred to an ablation study.During inference, we retrieve the top 5 examples from the support set.The examples along with a test conversation are inserted into the prompt, following Hu et al. (2022).This setting is applied to all comparisons.We use both GPT-Neo (Black et al., 2022) and LLaMA-7B/30B (Touvron et al., 2023) as the LLM for DST generation.For evaluation, we report average and standard deviation of Joint Goal Accuracy (JGA) and F1 score (Henderson et al., 2014) on all 7,368 test dialogues from MultiWOZ with three runs.
Baselines We compare the proposed conversation retriever with the following baselines.
1. IC-DST (Hu et al., 2022): It utilizes dialogue labels to construct positive and negative pairs for fine-tuning a pretrained SBERT (Reimers and Gurevych, 2019) or LinkBERT (Yasunaga et al., 2022) as a retriever. The retrieval key is a dialogue context, and the best dialogue context is reported to be previous dialogue state + current user input (which is better than a full dialogue).
2. SM2 (Chen et al., 2023): Similar to IC-DST, it fine-tunes SBERT on labeled dialogue data with contrastive loss, where conversations with partial matching slots or values are considered as positive samples. The retrieval key is a dialogue context similar to IC-DST.
3. GTR-T5-LARGE (Ni et al., 2022): It uses a T5 encoder, which is pretrained on large-scale corpora for sentence representation, to compute the similarity between conversations for retrieving examples. The retrieval key is the full dialogue.
4. JINA-LARGE (Günther et al., 2023): Similar to GTR-T5, the pretrained sentence encoder Jina is used to compute similarity between conversations. The retrieval key is also the full dialogue.
Ours. We use gpt-3.5-turbo (OpenAI, 2022) as the conversation summarizer, since it provides reliable summaries that satisfy the task requirement in the prompt (see the human evaluation in Section 4.4 and the prompt specified in Appendix A). First, we evaluate the effectiveness of summary-based search key and query generation, using the off-the-shelf retrievers GTR-T5-Large and Jina-Large, which are directly comparable with the baseline. Second, we evaluate the distilled conversation encoder (CONVERSE). To train CONVERSE, we use the same conversation summarizer to generate a summary for every turn of every conversation from the full MultiWOZ training set, resulting in a total of 56,776 conversation-summary pairs. The parameters θ of the dual encoders f_θ and f̃_{θ,ϕ} are shared and initialized with LinkBERT (Yasunaga et al., 2022), and trained on the conversation-summary pairs for 20 epochs with the objective in equation 2. LinkBERT (Yasunaga et al., 2022) is chosen since we empirically find that it offers the best general-purpose initialization. We use the AdamW optimizer (Loshchilov and Hutter, 2018) with learning rate 5·10^−5 and batch size 200. We use eight A100 GPUs for training the model.
Main Results
The DST results are shown in Table 1 and Table 2.The first set of comparisons is between conversation retrieval with and without explicit query/key generation.We observe that using the summary as search keys/queries significantly improves the end-to-end (E2E) results, when evaluated with the same off-the-shelf retriever (GTR-T5 or Jina).The result is slightly behind IC-DST which fine-tunes the retriever with dialogue state information in the key.However, after introducing the distilled CONVERSE model, we achieve much better E2E results than all baselines.Although our motivation of distilling a conversation encoder is to reduce the inference cost, it turns out that the light-weight model is also helpful in E2E performance.We hypothesize that the improved performance brought by CONVERSE is attributed to two factors.The first and foremost is that we leverage a dual encoder architecture to optimize the matching between conversation-summary pairs.This suggests that the retrieval component is optimized for the task-specific keys and values.A secondary explanation of the performance gain is that the conversation encoder avoids error propagation in explicit summary decoding and re-embedding.It should be noted that the above findings are consistent across different datasets (MultiWOZ 2.1/2.4) and language models (GPT-Neo and LLaMA-7B/30B).
Comparison Against Few-Shot Finetuning. Recently, Mosbach et al. (2023) have shown that few-shot fine-tuning outperforms in-context learning in some settings, which makes people wonder how few-shot fine-tuning behaves with 100 labeled dialogues for dialogue state tracking tasks. To answer this question, we compare our method, CONVERSE, against one of the strongest few-shot fine-tuning methods, DS2 (Shin et al., 2022), using BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) language models. As shown in Table 3, our method, CONVERSE, outperforms the few-shot fine-tuning method, DS2. Note that the BART-based DS2 severely overfits to the small labeled dataset and the T5-based model performs worse than the in-context learning method, even though the T5 model is pretrained on an additional large-scale labeled dialogue summarization dataset, SAMSum (Gliwa et al., 2019).
Out-of-Domain Generalization
To verify our hypothesis that our unsupervised retriever CONVERSE generalizes better to unseen domains than supervised methods, we hold out the hotel domain from the MultiWOZ dataset and train the retrievers IC-DST and CONVERSE on the remaining four domains: train, restaurant, taxi, and attraction. Then we evaluate the performance of few-shot in-context learning with the retrievers on test examples from the unseen domain, hotel. As shown in Table 4, our model CONVERSE outperforms IC-DST by a large margin, which empirically validates that our unsupervised retriever generalizes better to an unseen domain than the supervised one.
Ablation Study
Size of Support Set. We empirically study the size of the support set (labeled dialogues) in the conversation retrieval task. Notably, a smaller support set requires less annotation effort from the domain owner, placing more emphasis on generalization to unseen dialogue structures. In contrast, a larger support set contradicts the fundamental motivation behind few-shot learning, but it is likely to improve the E2E accuracy, as more test dialogue structures are observable from the exemplars. In Figure 3, we plot the JGA of LLaMA-7B with CONVERSE on varying sizes of the support set constructed from MultiWOZ. As we expect, the JGA increases as the number of labeled conversations increases, even though we do not fine-tune the retriever with any labeled conversations.
Summary vs. state delta. The conversation summary we adopt in this work captures the user's current intent at the point where the dialogue takes place. A limitation is that the summary does not directly highlight the state delta carried by the latest user input. As a remedy, we consider a multi-key and query retrieval setup, where we use both the summary and the latest user input as search keys and queries. More specifically, we first retrieve 20 dialogues with CONVERSE and re-rank the 20 dialogues based on the similarity of the latest utterance between the test sample and the support examples, using the pre-trained GTR-T5-Large. As shown in Table 5, re-ranking with the latest user utterance yields marginal performance gains. In future work, we aim to explore a better way of summarizing the conversation structure that reflects both the joint intent and the latest user input.
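The two-stage variant can be sketched as a retrieve-then-re-rank step: take the 20 CONVERSE candidates, then re-order them by the similarity of the latest user utterance under a general-purpose sentence encoder. The helper names and random embeddings below are placeholders for illustration.

```python
import numpy as np

def rerank_by_latest_utterance(query_vec, shortlist, top_k=5):
    """shortlist: list of (exemplar_id, latest_utterance_vec) for the 20 CONVERSE hits;
    query_vec: embedding of the test dialogue's latest user utterance."""
    scores = [(float(np.dot(query_vec, v)), ex) for ex, v in shortlist]
    scores.sort(key=lambda p: p[0], reverse=True)
    return [ex for _, ex in scores[:top_k]]

# toy usage with random stand-in embeddings
rng = np.random.default_rng(0)
shortlist = [(f"exemplar_{i}", rng.normal(size=32)) for i in range(20)]
print(rerank_by_latest_utterance(rng.normal(size=32), shortlist))
```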
Qualitative Results
Visualization of history grounding. As described in equation 1, CONVERSE softly retrieves conversation history based on the latest user utterance. Specifically, the network g_ϕ outputs a relevance score between 0 and 1 for each token of the conversation history. In Figure 4, we visualize this relevance score for each token in the history. Tokens in darker blue carry higher weights and are considered more relevant to the latest input. The examples in Figure 4 show that the model successfully focuses on the relevant parts of the history. For the first example in Figure 4a, the user wants to search for a Chinese restaurant in the center with a moderate price range. The model assigns large weights to the tokens related to "Chinese", "center", and "price". Similarly, the tokens relevant to booking a taxi get larger weights in Figure 4b. For the last example in Figure 4c, the model pays attention to the tokens related to a museum and ignores many irrelevant ones.

An additional example conversation and its generated summary (from the accompanying tables): USER: I am also looking to eat somewhere expensive, in the south area of town. [...] USER: I will also need a taxi, please. SYSTEM: Where would you like your taxi to pick you up and drop you off? USER: I want to be picked up at the hotel and dropped off at the restaurant. Summary: The user wants to book a taxi to be picked up at a specific location and dropped off at another.
Human Evaluation on Conversation Summarizer
The success of CONVERSE is highly dependent on the output quality of the conversation summarizer, whose outputs are used as labels for encoder distillation. We conduct a human evaluation of 135 summaries generated by the conversation summarizer, namely gpt-3.5-turbo. Specifically, three human judges are asked to assess whether the generated summaries are consistent with the instructions in the prompt in Table 9. The results indicate that 90.3% of the 135 summaries are deemed consistent with the given prompt.
Examples of the generated summaries are shown in Tables 6 and 7. For the first example, the model generates a summary about booking a taxi. It is noteworthy that the model focuses on the latest user utterance while disregarding previous user requests for hotel and restaurant reservations. For the second example, the model misses the arrival time when generating the summary. Identification and correction of such errors are topics we will explore in future work. We include more examples in Appendix B.
Retrieved Exemplars
In Table 8, we show the top three most similar examples retrieved by CONVERSE. In this example, the user asks to find an expensive Indian restaurant and a retriever needs to retrieve conversations about a restaurant. Indeed, our CONVERSE retriever assigns high similarity scores to pairs of the target conversation and summaries about finding a restaurant. Note that the language model (LLaMA-7B) with in-context learning successfully generalizes to decode test slot values from the exemplars, though the retrieved exemplars consist of values for food or price range, which are different from the target conversation.
Related Work
Dialog State Tracking Most of existing works on DST train a supervised model with large-scale labeled datasets (Wu et al., 2019;Zhang et al., 2020;Peng et al., 2021;Lin et al., 2020;Lee et al., 2021;Zhao et al., 2022;Kim et al., 2020;Heck et al., 2020;Hosseini-Asl et al., 2020;Ham et al., 2020;Cheng et al., 2020;Platanios et al., 2021).However, a supervised model does not scale well to new domains or annotation schemas.To address the problem, several recent works explore few-shot DST (Wu et al., 2020;Li et al., 2021;Gao et al., 2020;Lin et al., 2021;Campagna et al., 2020;Su et al., 2022).Most related are the works of Hu et al. (2022); Chen et al. (2023), who adopt in-context learning with LLM for dialog state generation.The work demonstrated the few-shot generalization ability of LLM applied to DST without parameter updates, but the dialog retriever is still fine-tuned with in-domain data.
Another work related to ours is Shin et al. (2022), which formulates DST as a summarization task.The authors train a T5 language model to decode text summaries, which are then transformed into dialog states with heuristic rules.Different from their work, we do not aim to alter the target of DST as summaries but rather our goal is to enable effective conversation retrieval.
Retrieval Our work mainly focuses on retrieving relevant conversations for in-context learning (Liu et al., 2022).There is a vast number of papers (Karpukhin et al., 2020b;Khattab and Za- haria, 2020; Izacard et al., 2022;Santhanam et al., 2022) proposing neural network based retrievers which encode queries and keys into low dimensional vectors and compute similarities between them.Hu et al. (2022); Chen et al. (2023) propose to utilize slots and values to represent a long history of conversation for retrieval.However, in order to train the retriever, their approaches require labeled dialogue data to construct positive and negative conversations for each query conversation.Recently, Ravfogel et al. (2023) retrieve texts based on abstract descriptions generated by a LLM.
Conclusion
The contribution of this work is twofold. First, we proposed an effective way of retrieving conversations in LLM-based DST with conversation summaries as search keys and queries. We then improved the efficiency of the retrieval system by distilling a conversation encoder capable of embedding a conversation into a vector space close to that of its summary. This eliminates the cost of decoding an actual summary for each test sample during inference. We validated our CONVERSE encoder for LLM-based DST in a real few-shot setting with 100 conversations in the support set. Results showed that CONVERSE consistently improved both the efficiency and the performance of few-shot DST when using different LLMs, outperforming previous LLM-based DST baselines that rely on annotated dialogues for retriever fine-tuning.
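A minimal sketch of the distillation idea described above, assuming a standard contrastive objective that pulls each conversation embedding toward the embedding of its own generated summary, with in-batch negatives. The batch construction, temperature, and dimensions here are illustrative assumptions, not the exact training recipe of this paper.

```python
# Sketch of encoder distillation: align conversation embeddings with the
# embeddings of their generated summaries (InfoNCE with in-batch negatives).
import torch
import torch.nn.functional as F

def distillation_loss(conv_emb, summ_emb, temperature=0.05):
    """conv_emb, summ_emb: (batch, dim) embeddings from the student encoder
    and the summary encoder; diagonal pairs are the positives."""
    conv_emb = F.normalize(conv_emb, dim=-1)
    summ_emb = F.normalize(summ_emb, dim=-1)
    logits = conv_emb @ summ_emb.T / temperature      # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

# Example with random tensors standing in for encoder outputs
loss = distillation_loss(torch.randn(8, 384), torch.randn(8, 384))
print(loss.item())
```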
Figure 1: Comparison between (a) off-the-shelf retriever with query generation and (b) CONVERSE w/o query generation.
Figure 2: Concept. (a) Generating a summary of a dialogue with a language model (LM). (b) Training the retriever to maximize the similarity between the dialogue and the generated summary. (c) Given a test dialogue as a query, we retrieve the dialogue (value) whose summary (key) obtains the best similarity score with the query.
Figure 3: JGA of LLaMA-7B with CONVERSE as a function of the number of labeled data.
[Figure 4 shows three tokenized example dialogues (a)-(c) with token-importance highlighting, together with their generated summaries: (a) "The user wants to find a Chinese restaurant in the centre with a moderate price range." (b) "The user wants to book a taxi to a specific destination at a specific time." (c) "The user wants a recommendation for a museum to visit in the area."]
Figure 4: Visualization of importance scores. Tokens in darker blue get larger weights based on the latest user utterance.
Conversation USER: I need some tourist information please. I need to know about a hotel called the Arbury lodge guest house. SYSTEM: The Arbury lodge guest house is in the north area and has a moderate price range. • • • USER: I would like to book a stay for 3 people for 2 nights starting from Tuesday.
Table 6: The LLM successfully summarizes the conversation based on the latest user utterance.
Table 7: A failure case of summarization with the LLM.
Table 8: Given the target conversation, we show the top 3 most similar examples retrieved by our model CONVERSE.
Tug-of-War Driven by the Structure of Carboxylic Acids: Tuning the Size, Morphology, and Photocatalytic Activity of α-Ag2WO4
Size and morphology control during the synthesis of materials requires a molecular-level understanding of how the addition of surface ligands regulates nucleation and growth. In this work, this control is achieved by using three carboxylic acids (tartaric, benzoic, and citric) during sonochemical syntheses. The presence of carboxylic acids affects the kinetics of the nucleation process, alters the growth rate, and governs the size and morphology. Samples synthesized with citric acid revealed excellent photocatalytic activity for the degradation of Rhodamine B, and recyclability experiments demonstrate that the material retains 91% of its photocatalytic activity after four recycles. Scavenger experiments indicate that both the hydroxyl radical and the hole are key species for the success of the transformation. A reaction pathway is proposed that involves a series of dissolution-hydration-dehydration and precipitation processes, mediated by the complexation of Ag+. We believe these studies contribute to a fundamental understanding of the crystallization process and provide guidance as to how carboxylic acids can influence the synthesis of materials with controlled size and morphology, which is promising for multiple other scientific fields, such as the sensor and catalysis fields.
The size and morphology of crystals govern their properties for a range of important applications. These characteristics are thermodynamically controlled by the values of surface energies, allowing a crystal with lower-energy surfaces to achieve more stability [16]. The realization of crystals with a defined size and morphology requires the efficient control of nucleation and growth processes, including not only the precise adjustment
Characterization
Details about the characterization techniques are presented in the Supplementary Materials (see section SM-1).
Photodegradation
The photocatalytic activity of the samples was evaluated through the photocatalytic degradation of 50 mL of RhB (P.A., Synth) in an aqueous solution under UV-Vis light. In a typical process, 50 mg of the synthesized materials was dispersed in 50 mL of the RhB solution (1 × 10 −5 mol L −1 ) for 10 min in an ultrasonic bath (42 kHz, model 1510). The mixture was then transferred to a 100 ml glass bottle and stirred for 30 min in the dark for the homogeneous dispersion of the catalyst and to allow adsorptive processes. Then, the suspensions were irradiated with six UV lamps (PHILIPS TL-D, São Paulo, BR, 15 W) at a distance of 10 cm from the reactor under vigorous stirring, and the temperature was maintained at 20 °C via a thermostatic bath. At predetermined times (0, 10, 20, 30, 40, 60, and 90 min), a 2 mL aliquot of the suspension was removed from the photocatalytic system and placed into a plastic tube. Afterward, the suspension was centrifuged at 10,000 rpm for 5 min for the complete removal of the catalyst particles. The remaining solution was analyzed by UV-Vis absorption spectroscopy on a V-660 spectrophotometer (JASCO) in order to monitor the variations in the absorption band of RhB, with the maximum at λ = 554 nm for all photocatalytic tests.
Photocatalytic Concentration and Photodegradation Rate
The effect of the photocatalytic concentration on the photodegradation rate was analyzed by using the following ratios: 0.5 mg/mL (25 mg of catalyst in 50 mL of RhB), 1 mg/mL (50 mg of catalyst in 50 mL of RhB), 2 mg/mL (100 mg of catalyst in 50 mL of RhB), and 4 mg/mL (200 mg of catalyst in 50 mL of RhB). The procedure adopted was the same as described in Section 2.3.1.
Scavenger Measurements
The identification of the reactive oxygen species (ROS) was performed by scavenger tests. For this purpose, equivalent amounts of benzoquinone (BQ), tert-butyl alcohol (TBA), ammonium oxalate (AO), and silver nitrate (AgNO3) were added to the reaction system as scavengers for superoxide radicals (•O2−), hydroxyl radicals (•OH), holes (h+), and electrons (e−), respectively.
Results
An analysis of XRD data (see Figure S1) renders that all samples have well-defined diffraction peaks, indicating a good degree of structural order. The as-synthesized α-Ag2WO4 samples present an orthorhombic structure belonging to the Pn2n space group, according to card No. 4165 in the Inorganic Crystal Structure Database (ICSD), showing that the SC method proved to be efficient for the synthesis of α-Ag2WO4 materials. It is verified in Figure S1 that the gradual increase in the full width at half maximum (FWHM) of the (321) plane for the CAAS samples, when compared to the pure α-Ag2WO4 sample, is related to the reduction in crystallite sizes. The values of the lattice parameters, unit cell volume, and statistical quality parameters obtained by Rietveld refinements are presented in Table S1. According to the statistical parameters obtained in the Rietveld refinement in Table S1, the quality of the structural refinement data is acceptable.
Raman and FTIR spectroscopy were also used to characterize all samples. From the Raman spectra of the α-Ag2WO4, α-Ag2WO4-TA, α-Ag2WO4-BA, and α-Ag2WO4-CA samples in Figure S3A, it is possible to observe that the active modes between 100 and 500 cm−1 are related to external vibrational modes of [AgOx] (x = 2, 4, 6, and 7). The active modes between 500 and 1000 cm−1 can be attributed to vibrational motions of the atoms in the [WO6] clusters. Among them, the intense band at 878 cm−1 is assigned to the symmetrical stretching of the W−O bond in octahedral [WO6] clusters. As a complementary analysis to Raman spectroscopy, FT-IR measurements were performed. Figure S3B illustrates the FT-IR spectra and the corresponding positions of the IR-active modes of the α-Ag2WO4 samples. The tungstate with a scheelite-type structure has eight stretching and/or bending IR-active vibrational modes [24]; however, only two were identified in the α-Ag2WO4 samples in the spectral range of 400 to 900 cm−1. These modes are located at 802 and 849 cm−1 and can be attributed to the overlapping of two intense bands, referring to Au and Eu, respectively, whereas the IR-active modes are ascribed to the O−W−O antisymmetric stretching vibrations in the [WO6] clusters.
XPS measurements identify the elemental composition, oxidation states, the overall electronic structure, and the density of electronic states in the material (see Figure S4A-D). No significant C 1s percentage was expected in the samples, and the large amount of C 1s observed in the α-Ag2WO4-CA sample was assigned to the sample holder, since the samples were dried for 10 h at 60 °C. The characteristic peaks of the Ag, W, and O atoms indicate a high purity for all samples. The binding energy values calculated for all samples agree with those in the literature for α-Ag2WO4 samples [25,26].
High-resolution XPS spectra were recorded for the elements forming the α-Ag2WO4 structure. The spectra of the Ag species in Figure S5A show two bands located at ∼368 and ∼374 eV, which can be attributed to the binding energies of Ag 3d5/2 and 3d3/2, respectively, while the XPS spectra of the W species in Figure S5B show two bands located at ∼34 and ∼36 eV, which can be attributed to the W 4f7/2 and 4f5/2 binding energies, respectively; a broad peak related to W 5p3/2 is located between 40.7 and 41.2 eV [3]. More information about the XPS spectra of the Ag and W species can be found in Figure S5.
High-resolution XPS spectra of the O 1s atoms present in the α-Ag2WO4, α-Ag2WO4-TA, α-Ag2WO4-BA, and α-Ag2WO4-CA samples are illustrated in Figure 2. The results of the UV-Vis diffuse reflectance spectra of the α-Ag2WO4 samples are shown in Figure S6A-D, and the values of the Egap follow this order: α-Ag2WO4 > α-Ag2WO4-TA > α-Ag2WO4-BA > α-Ag2WO4-CA. From these results, we can propose that in α-Ag2WO4-CA there is a formation of intermediate levels in the band gap region that can be associated with the increase in the amount of LO.
A detailed analysis of the images from the FE-SEM and TEM techniques in Figure 3 reveals that the different CAAs provoke changes in the morphology of the as-synthesized samples. The characteristic morphology of α-Ag2WO4, already reported in the literature, corresponds to long prisms or needles with bases similar to a hexagon, composed of the (010), (001), and (101) exposed surfaces [23,27,28], as can be seen in Figure 3A. The images represented in Figure 3B,C show a rectangular morphology for the α-Ag2WO4-TA sample due to the stabilization of the (100) surface with respect to the (101) surface [23]. The image of the α-Ag2WO4-BA sample (Figure 3D) shows a change in the α-Ag2WO4 morphology from well-defined surfaces to rice grains with poorly defined surfaces (see Figure 3E). A dramatic particle size reduction is sensed in the α-Ag2WO4-CA sample (Figure 3F).

The average distribution of the length and width of the crystallites is reported in Figure 4; it was obtained from the FE-SEM and TEM images (Figure 3) using the GNU Image Manipulation Program. An analysis of the results in Figure 4 renders that the average values of length and width, respectively, decrease as follows: 1700 nm and 248 nm for α-Ag2WO4 (Figure 4A,B), 561 nm and 147 nm for α-Ag2WO4-TA (Figure 4C,D), and 126 nm and 54.9 nm for α-Ag2WO4-BA (Figure 4E,F). For the α-Ag2WO4-CA sample, a spheroidal morphology was observed by TEM, and there is no distinction between length and width, with an average size value of 13.4 nm (Figure 4G). It is important to remark that this is the first time this morphology has been reported for α-Ag2WO4. In addition, we also calculated the size of the α-Ag2WO4-CA crystal by using the Scherrer equation and the Halder-Wagner-Langford method [29,30], and values of 13.33 and 16.84 nm were obtained, respectively. These three values are very similar.
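For reference, the Scherrer estimate mentioned above follows D = Kλ/(β cos θ). The sketch below evaluates it with illustrative peak parameters; the actual FWHM and 2θ position of the measured reflection are not reproduced here and are assumptions for the example.

```python
# Scherrer crystallite-size estimate, D = K * lambda / (beta * cos(theta)).
# Peak parameters below are illustrative assumptions, not measured values.
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size in nm from a diffraction peak.

    two_theta_deg: peak position 2-theta (degrees)
    fwhm_deg: full width at half maximum (degrees), instrument-corrected
    wavelength_nm: Cu K-alpha by default; K: shape factor (~0.9)
    """
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)              # FWHM converted to radians
    return K * wavelength_nm / (beta * math.cos(theta))

print(f"{scherrer_size(31.8, 0.62):.1f} nm")   # hypothetical (321)-type peak
```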
In the next stage, the presence of CAAs, in turn, causes the H2O molecules to gradually become a relatively worse partner of the Ag+ cation, because a more stable bond with carboxylic groups (COO−) can be formed. Thus, H2O molecules play an indirect role in weakening the Ag+ hydration shell via the dehydration process, which may be interpreted as the beginning of the chelation processes of Ag+ with the different CAAs to form a strong bond with the COO− moieties.
It is expected that the nucleation and growth processes of α-Ag2WO4 are controlled by the strong binding effect of CAAs with Ag+ cations, because both processes are directly related to the release kinetics for the formation of Ag+ cations. The strong binding effect of CAAs prevents the agglomeration of Ag+ cations. Based on the above considerations, a schematic representation of the synthesis progress is proposed, which involves a series of dissolution, dehydration, chelation, nucleation, and growth processes, mediated by the complexation of Ag+, as shown in Figure 5.
The tug-of-war between the formation of the [Ag(H2O)2]+·nH2O complex and the chelation process controls the release of Ag+ as the synthesis progresses. There is a dynamic balance between the strengths of the Ag−O and Ag−CAA bonds in the hydration and chelated complexes, respectively. Therefore, the presence of CAAs serves as a template directing the size and morphology of the as-synthesized samples. The stabilization of the chelate complex is directly linked to the nucleation and growth process of α-Ag2WO4; thus, in this case, it is observed that the lower the stability of the formed chelate complex, the larger the average size of the obtained samples. The experimental results further support the proposed mechanism: the molecular-level interactions involving Ag+ cations, H2O, and CAAs drive the size and morphology of the as-synthesized samples.

The photocatalytic performance was investigated via the degradation of RhB. The time-dependent concentration curves and the spectra recorded during RhB degradation are shown in Figures 6 and S7, respectively. The degradation process follows first-order kinetics and can be described by ln(C0/C) = kt, where C is the RhB concentration and k indicates the rate constant, which can be obtained from the graphical representation of the integrated equation (see Figure 6). The corresponding rate constant value for the RhB degradation without a catalyst was 4.13 × 10−3 min−1, while in the presence of the α-Ag2WO4, α-Ag2WO4-TA, α-Ag2WO4-BA, and α-Ag2WO4-CA catalysts, the values were 3.32 × 10−3 min−1, 8.86 × 10−3 min−1, 9.43 × 10−3 min−1, and 3.46 × 10−2 min−1, respectively. These results show that the presence of CAAs in the SC synthesis enhances the degradation of the RhB dye, as demonstrated in Table 2, where the activity is compared to those reported in the literature [9,23,32,33] under the same lamp (UV-Vis). The α-Ag2WO4-CA catalyst exhibited excellent catalytic performance (95% degradation in 90 min) due to its optimal size effect, with a higher surface/volume ratio and the presence of plenty of active sites on the sphere-like morphology.

To analyze the effect of the amount of α-Ag2WO4-CA catalyst on the photodegradation rate, the photocatalyst concentration was increased to 2 and 4 mg/mL. In both cases, 100% degradation was achieved in 60 min, and the rate constant values were 68.25 × 10−3 and 55.81 × 10−3 min−1, respectively (Figure S8).
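The rate constants above follow from a linear fit of ln(C0/C) against irradiation time. A minimal sketch of such a fit is given below; the concentration readings are made-up placeholders generated around one of the reported rate constants, not the measured data.

```python
# First-order fit: ln(C0/C) = k * t. The concentration values below are
# synthetic placeholders; the paper's measured data are not reproduced here.
import numpy as np

t = np.array([0, 10, 20, 30, 40, 60, 90], dtype=float)     # sampling times, min
c_over_c0 = np.exp(-0.0346 * t) + np.random.normal(0, 0.005, t.size)

y = np.log(1.0 / np.clip(c_over_c0, 1e-6, None))           # ln(C0/C)
k, intercept = np.polyfit(t, y, 1)                          # slope = rate constant
print(f"k = {k * 1e3:.2f} x 10^-3 min^-1")
```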
To investigate the active species in the photodegradation mechanism, trapping experiments were performed only for the best photocatalyst (α-Ag2WO4-CA). To try to correlate the size/morphology/photocatalytic activity with the nature of the ROS for all samples, it would be necessary to perform trapping experiments with the α-Ag2WO4, α-Ag2WO4-TA, and α-Ag2WO4-BA samples as well; however, this is out of the scope of our work. As shown in Figure 7A, the addition of TBA and AO had an obvious effect, which revealed that both •OH and h+ were the primary active species in the photocatalytic process. However, after adding BQ and AgNO3, no changes were sensed, which means that •O2− and e− have little effect on the photocatalytic reactions.

Reuse experiments using 2 mg/mL were performed to evaluate the photocatalytic stability of α-Ag2WO4-CA. The results are shown in Figure 7B, where it is possible to observe that the sample loses some photocatalytic activity at each cycle, still decomposing 91% of the dye in the fourth photocatalytic cycle, thus demonstrating that the material has stability. In Figure S9, the loss of mass of α-Ag2WO4-CA along the catalytic cycles is presented. An analysis of the results reveals that the small decrease in photocatalytic efficiency can be associated with the loss of mass after the fourth cycle.

Structural characterization for assessing the stability of α-Ag2WO4-CA after the fourth cycle was performed. In Figure 8, the XRD patterns before and after reuse are presented. An analysis and comparison of the results show that the positions of the different peaks are similar.
Conclusions
Crystal size and morphology engineering of metal oxides is a promising route for tuning their properties and enhancing their performance. These characteristics depend on preparation conditions, which in turn can also affect the surface chemistry and reactivity. Therefore, it is critical to have tight control over their size and morphology, as these parameters have strong correlations with a range of properties. Size- and morphology-controlled synthesis can be performed by using surface ligands, such as surfactants, due to their capability to stabilize different exposed surfaces at different morphologies. This work presents a facile synthesis strategy for altering the morphologies and sizes of α-Ag2WO4 by using three carboxylic acids (tartaric, benzoic, and citric) as chelating agents, thereby modulating their photocatalytic activity. The main conclusions of the present work can be summarized as follows: (i) a comprehensive understanding of the relationship between the size, morphology, and photocatalytic activity is provided; (ii) plausible mechanisms to explain the kinetics of nucleation and growth processes are proposed; (iii) a tug-of-war between dehydration and chelation processes is disclosed to control the rate of Ag+ release for tuning the size, morphology, and photocatalytic activity of the as-synthesized α-Ag2WO4 material; (iv) the α-Ag2WO4 samples synthesized with citric acid have nanoscale dimensions and reached 100% degradation of RhB in 60 min at a concentration of 2 mg/mL; (v) the hydroxyl radical, •OH, and the hole, h+, are the main oxidative species, as indirectly evidenced by means of scavenging experiments; (vi) recycling tests render that the α-Ag2WO4-CA nanomaterials are stable after four cycles. Finally, we hope that the present findings and concepts can be applied to future research directions for the controlled synthesis of complex metal oxides with desirable size and morphology. Subsequent research will investigate the influence of the crystal structure on their properties and provide opportunities for further development.
Bifurcation Diagram for Compartmentalized Granular Gases
The bifurcation diagram for a vibro-fluidized granular gas in N connected compartments is constructed and discussed. At vigorous driving, the uniform distribution (in which the gas is equi-partitioned over the compartments) is stable. But when the driving intensity is decreased this uniform distribution becomes unstable and gives way to a clustered state. For the simplest case, N=2, this transition takes place via a pitchfork bifurcation but for all N>2 the transition involves saddle-node bifurcations. The associated hysteresis becomes more and more pronounced for growing N. In the bifurcation diagram, apart from the uniform and the one-peaked distributions, also a number of multi-peaked solutions occur. These are transient states. Their physical relevance is discussed in the context of a stability analysis.
I. INTRODUCTION
One of the key features of a granular gas is the tendency to spontaneously separate into dense and dilute regions [1,2,3,4,5,6]. This clustering phenomenon manifests itself in a particularly clear manner in a box that is divided into a series of N connected compartments, with a hole (at a certain height) in the wall between each two adjacent compartments. The system is vibro-fluidized by shaking the box vertically. With vigorous shaking the granular material is observed to be distributed uniformly over the compartments, as in any ordinary molecular gas. Below a certain driving level, however, the particles cluster in a small subset of the compartments, emptying all the others.
For N = 2 the transition from the uniform to the clustered state is of second order, taking place through a pitchfork bifurcation [7]. For N = 3 it was recently found that the transition is hysteretic. It is a first order phase transition, involving saddle-node bifurcations [8]. This difference has been explained by a flux model. In the present paper we will use the same flux model to construct the bifurcation diagrams for arbitrary N.
The main ingredient of this model is a flux function F (n), which gives the outflow from a compartment to one of its neighbors as a function of the fraction of particles (n) contained in the compartment [7]. The function F (n) starts out from zero at n = 0 and initially increases with n. At large values of n it decreases again because the particles lose energy in the non-elastic collisions, which become more and more frequent with increasing particle density. So F (n) is non-monotonic, and that is why the flux from a well-filled compartment can balance that from a nearly empty compartment.
Assuming that the granular gas in each compartment is in thermal equilibrium at any time (in the sense of the granular temperature [9]) the following approximate form for F(n) can be derived [7]:

F(n_k) = A n_k^2 e^(−B N^2 n_k^2),                                    (1)

which is a one-humped function, possessing the features discussed before (see Fig. 1). In the above equation n_k is the fraction of particles in the k-th compartment, normalized to Σ_k n_k = 1. The factors A and B depend on the number of particles and their properties (such as the radius, and the restitution coefficient of the interparticle collisions), on the geometry of the system (such as the placement and form of the aperture between the compartments), and on the driving parameters (frequency and amplitude). The factor A determines the absolute rate of the flux, and will be incorporated in the time scale, which thus becomes dimensionless. The clustering transition is governed only by B.
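A quick numerical look at the one-humped shape of this flux function, using the form of Eq. (1) with A set to 1 (a sketch; A only fixes the dimensionless time scale, and the values of B and N are illustrative):

```python
# The one-humped flux function F(n) = A * n^2 * exp(-B * N^2 * n^2), with A = 1.
import numpy as np

def flux(n, B, N):
    return n**2 * np.exp(-B * N**2 * n**2)

B, N = 4.0, 2
n = np.linspace(0.0, 1.0, 11)
print(np.round(flux(n, B, N), 4))   # rises from 0, peaks at n = 1/(N*sqrt(B)), decays
```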
The time rate of change ṅ_k of the particle fraction in the k-th compartment is given by the inflow from its two neighbours minus the outflow from the compartment itself:

ṅ_k = F(n_{k−1}) − 2 F(n_k) + F(n_{k+1}),   k = 1, 2, . . . , N.       (2)

Here we have assumed that the interaction is restricted to neighboring compartments only.
For a cyclic arrangement the above equation is valid for all N compartments (with k = N + 1 equal to k = 1). If we take non-cyclic boundary conditions, by obstructing the flux between two of the compartments, the equation has to be modified accordingly for these compartments.
The total number of particles in the system is conserved (Σ_k n_k = 1), so Σ_k ṅ_k = 0.
Statistical fluctuations in the system would add a noise term to Eq. (2), but we will not consider such a term here. So the present analysis has to be interpreted as a mean field theory for the system.
Equation (2) can also be written in matrix form, as ṅ = M · F, or more explicitly:

        ( −2   1   0  · · ·  0   1 )
        (  1  −2   1  · · ·  0   0 )
  M  =  (  :    :    :          :  )                                   (4)
        (  0   0  · · ·  1  −2   1 )
        (  1   0  · · ·  0   1  −2 )

with F = (F(n_1), . . . , F(n_N)). The given matrix M corresponds to a cyclic arrangement of the N compartments. A similar matrix can be written down for the case of a non-cyclic arrangement. We will come back to this later, when we will see that most of the results for the cyclic arrangement carry over to the non-cyclic case.
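A sketch of the dynamics ṅ = M · F for a cyclic arrangement, integrated numerically; the values of N and B and the size of the initial perturbation are illustrative assumptions:

```python
# Integrate n' = M . F(n) for N cyclic compartments (illustrative values).
import numpy as np
from scipy.integrate import solve_ivp

N, B = 5, 2.0                                    # B > 1: uniform state unstable
M = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
M[0, -1] = M[-1, 0] = 1                          # cyclic boundary conditions

def rhs(t, n):
    F = n**2 * np.exp(-B * N**2 * n**2)          # flux function, A = 1
    return M @ F

n0 = np.full(N, 1.0 / N) + 1e-3 * np.random.randn(N)
n0 /= n0.sum()                                   # enforce sum n_k = 1
sol = solve_ivp(rhs, (0.0, 500.0), n0, rtol=1e-8)
# Most particles end up in one (or a few) well-filled compartments;
# transient multi-peaked states are possible, as discussed in Section IV.
print(np.round(sol.y[:, -1], 3))
```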
FIG. 1: The solutions n− and n+ of F(n_k) = constant, cf. Eq. (5). Also shown is how the flux balance responds to an increase of n− by an amount δn (see also Eq. (11)). The diagram on the right hand side depicts the relation between F and the quantity σ = F′(n+)/F′(n−), which plays an important role in the stability analysis of Section III.

It is easily seen, from the fact that the elements of each row of M sum up to zero, that M has a zero eigenvalue; this expresses the fact that the compartments cannot all be filled (or emptied) simultaneously: Σ_k ṅ_k = 0, or Σ_l M_lk = 0. For future reference we note that all the other eigenvalues of M are negative (see Appendix).
The remainder of the paper is set up as follows. In Section II we show how to construct the bifurcation diagram, on the basis of Eq. (4), for an arbitrary number of compartments.
In Section III we discuss the stability of the various branches in the diagram. Section IV discusses the physical consequences resulting from the diagram, in particular in the limit for N → ∞. Finally, Section V contains concluding remarks. The paper is accompanied by a mathematical Appendix, in which some essential results concerning the stability analysis are derived.
II. CONSTRUCTING THE BIFURCATION DIAGRAM
To calculate the bifurcation diagram, we have to find the fixed points of Eq. (4) as a function of the parameter B, i.e., those points for which ṅ = M · F = 0. So F must be a multiple of the zero-eigenvalue vector 1 = (1, 1, . . . , 1). This tells us that, in a stationary situation, all components of the flux vector F are equal: there is a detailed balance between all pairs of neighboring compartments. This rules out, for instance, the possibility of stable standing-wave-like patterns with equal but non-zero net fluxes throughout the system. Since F is a one-humped function, the condition F(n_k) = constant has two solutions, which will be called n− and n+ (see Fig. 1). Every fixed point can be represented as a vector with elements n− and n+ (in any order, and summing up to 1) corresponding to a row of nearly empty and well-filled compartments. Let us call the number of well-filled compartments m. Apart from the ordering of the elements, every fixed point is then specified by only two numbers: n+ and m.
Before actually calculating the bifurcation diagram, it is convenient to replace the fraction n by the (also dimensionless) variable z = Nn√B, as then the flux (1) simplifies to F(z_k) ∝ z_k^2 exp(−z_k^2). The fixed point condition Eq. (5), F(z_k) = constant for all k, is then supplemented by the particle conservation, which now reads:

Σ_k z_k = N √B.                                                        (6)

So the B-dependence has been transferred from F to the particle conservation, and this enables us to determine the entire bifurcation diagram from one single graph. This is illustrated in Fig. 2 for the case of N = 5 compartments.
First, the one-humped function F(z) is inverted separately on both sides of the maximum, yielding the functions z−(F) and z+(F). Then, we construct the sumfunctions:

S_m(F) = m z_+(F) + (N − m) z_−(F),   m = 0, 1, . . . , N.             (7)

Now, from Eq. (6), every fixed point with m well-filled compartments corresponds to an intersection of the sumfunction S_m(F) with the level N√B. In general, if a sumfunction S_m(F) has a minimum, the associated m branch will have a bifurcation at the value of B for which the level N√B just touches this minimum. So the bifurcation condition is that the derivative dS_m(F)/dF equals zero, or equivalently:

dz_−/dz_+ = −m/(N − m).                                                (8)

Not surprisingly, the quantity on the left hand side (dz_−/dz_+ ≡ σ) will play an important role in the stability analysis of the next section.
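This graphical construction is easy to reproduce numerically: invert F(z) = z²e^(−z²) on both sides of its maximum at z = 1, build S_m(F), and read off the bifurcation point from its minimum. A sketch (the grid resolution and brackets are implementation choices):

```python
# Numerical version of the sumfunction construction (a sketch).
import numpy as np
from scipy.optimize import brentq

f = lambda z: z**2 * np.exp(-z**2)               # F(z), maximum at z = 1

def z_minus(F):                                  # inverse branch on 0 <= z <= 1
    return brentq(lambda z: f(z) - F, 0.0, 1.0)

def z_plus(F):                                   # inverse branch on z >= 1
    return brentq(lambda z: f(z) - F, 1.0, 20.0)

N, m = 5, 1
Fs = np.linspace(1e-4, f(1.0) - 1e-6, 2000)
S = np.array([m * z_plus(F) + (N - m) * z_minus(F) for F in Fs])

i = S.argmin()                                   # tangency point: dS/dF = 0
B_bif = (S[i] / N) ** 2                          # from S = N * sqrt(B), Eq. (6)
print(f"saddle-node of the m = {m} branch at B ~ {B_bif:.3f}")
```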
III. STABILITY OF THE BRANCHES
The stability of the branches (i.e., of the fixed points) is determined by the eigenvalues of the Jacobi matrix J corresponding to Eq. (4), with components:

J_kl = ∂ṅ_k/∂n_l = M_kl F′(n_l).                                       (9)

Here F′ denotes the derivative of F with respect to n. Note that the Jacobi matrix can also be written as the product of M and the diagonal matrix D = diag(F′(n_1), . . . , F′(n_N)), see also Eq. (18) in the Appendix. For a fixed point the only diagonal elements that occur are F′(n_+) (m times) and F′(n_−) (N − m times), in any order. The ratio between these two functions is precisely the quantity we encountered earlier in the bifurcation condition Eq. (8), namely σ:

σ = F′(n_+)/F′(n_−).                                                   (10)

The Jacobi matrix J has N eigenvalues, one of which is always zero. The other N − 1 eigenvalues depend on m and the value of σ. Consider, for example, the two possible m = 2 configurations in the cyclic five-compartment system, in which the two well-filled compartments are either adjacent or separated by an empty compartment. Although they are very similar (and are represented by exactly the same branch in the bifurcation diagram), it is clear that the second configuration is the more stable of the two. Apparently the two well-filled compartments prefer to keep a distance.
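The eigenvalue comparison between the two m = 2 configurations can be checked directly; the sketch below builds J = M · D with F′(n_−) normalized to 1, so that the filled-box entries equal σ (the value of σ is an illustrative assumption):

```python
# Eigenvalues of the Jacobi matrix J = M . D at a fixed point (sketch).
import numpy as np

N, sigma = 5, -0.4                               # illustrative values
M = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
M[0, -1] = M[-1, 0] = 1                          # cyclic boundary conditions

# F'(n_-) = 1 for empty boxes, F'(n_+) = sigma for well-filled boxes
for filled in ([0, 1], [0, 2]):                  # adjacent vs separated peaks
    d = np.ones(N)
    d[filled] = sigma
    eig = np.sort(np.linalg.eigvals(M @ np.diag(d)).real)  # eigenvalues are real
    print(filled, np.round(eig, 3))              # compare the positive ones
```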
The saddle-node bifurcation of the m = 2 branch takes place where the third non-trivial eigenvalue goes through zero. The fourth non-trivial eigenvalue always remains positive, indicating that the m = 2 branch never becomes completely stable.
(As a matter of fact, only the m = 0 branch and part of the m = 1 branch can be completely stable.) Note that for σ → 0 (large B) the positive eigenvalue tends to zero, so the degree of instability is quite weak there. In the bifurcation diagram for N = 6, one sees all the branches that were present already for N = 5, only slightly shifted towards the left, plus an additional pair of branches (m = 3) bifurcating in the forward direction from B = 1.
The special status of the branch m = N/2 is also evident from Eq. (8), which tells us that the bifurcation condition for this branch is σ = −1. This condition is fulfilled only by n+ = n− = 1/(N√B) = 1/N, i.e., at B = 1. So, unlike all other branches, this one originates at B = 1 from the (until then stable) uniform state. Related to this, the branch is the only one that is symmetric under interchanging n+ and n−.
IV. PHYSICAL ASPECTS
The bifurcation analysis from the previous section can also be understood from a more physical point of view. To this end, let us first have a closer look at a 2-box system. In the equilibrium situation the net flux between the two boxes is zero, with one filled (n + ) and one nearly empty (n − ) box. Suppose the level of the empty box is raised by an amount δn.
The level of the filled box then decreases by an equal amount, and the net flux φ_{−→+} from the empty to the filled box becomes (see also Fig. 1):

φ_{−→+} = F(n_− + δn) − F(n_+ − δn) ≈ F′(n_−)(1 + σ) δn,              (11)

where we have used that σ = dn_−/dn_+ = F′(n_+)/F′(n_−) and neglected the higher order terms in the Taylor expansion. For a general configuration of m well-filled and N − m nearly empty compartments, particle conservation dictates that the filled levels decrease by (N − m)δn/m when each empty level is raised by δn, and the same reasoning gives

φ_{−→+} ≈ F′(n_−)[1 + σ(N − m)/m] δn.                                  (12)

From this expression it follows that the transition between a (relatively) stable (σ > −m/(N − m)) and a (relatively) unstable (σ < −m/(N − m)) configuration is marked by the bifurcation condition Eq. (8). So, by straightforward physical reasoning we have reproduced the exact result obtained earlier from an eigenvalue analysis.
The pitchfork bifurcation discussed at the end of Section III is especially important for N = 2. In this case it is the only non-uniform branch. To be specific, it is a stable m = 1 branch. This N = 2 case [7] is the only one without any saddle-node bifurcations, and consequently it is the only case where the change from the uniform to the clustered situation takes place via a second order phase transition without any hysteresis. For all N > 2 the transition is of first order [8], and shows a hysteretic effect that becomes more pronounced for growing N.
In the limit N → ∞ the hysteresis is maximal: the first saddle-node bifurcation takes place immediately after B = 0, and this means that there exists a stable m = 1 solution over the entire range B > 0. So, if one starts out from this solution (at a certain value of B) and then gradually turns down B, one will never witness the transition to the uniform distribution. Vice versa, also the transition from the uniform solution to the m = 1 state will not occur in practice, even though the uniform distribution becomes unstable at B = 1.
If one starts out from the uniform solution (at a certain value of B below 1) and increases B, one will witness the transition to a clustered state, but in practice this will always be one with a number of peaks. That is, the system gets stuck in a transient state with m > 1, even though such a state is not stable (it has one or more positive eigenvalues).
The fact is that its lifetime may be exceedingly large, since the flux in the neighborhood of a peak and its adjacent boxes (which are practically empty) is very small. Furthermore, the communication between the peaks is so poor that usually (even for moderate values of N) the dynamics comes to a standstill in a state with peaks of unequal height.
Another point we would like to address is that practically the transition to a clustered state will take place already before B = 1, because the solution is kicked out of its basin of attraction by the statistical fluctuations in the system [8]. An example is shown in Fig. 6. Here we see a snapshot for the cyclic system with N = 80 compartments, which were originally filled almost uniformly, at B = 0.90. The small random fluctuations in the initial condition are sufficient to break away from the (still stable) uniform distribution, and one witnesses the formation of a number of isolated clusters. In the further evolution these clusters deplete the neighbouring compartments and indeed the whole intermediate regions.
But the peaks themselves, once they are well-developed, do not easily break down anymore.
V. CONCLUDING REMARKS
In this paper we have constructed the bifurcation diagram for a vibro-fluidized granular gas in N connected compartments. Let us now comment upon the result.
Starting out from B = 0, i.e., vigorous shaking, the equi-partitioned state is for some time the only (stable) solution. For increasing B we encounter more and more bifurcations, where unstable m-clustered states come into existence (each with 1 more positive eigenvalue than the previous one), and for large N the bifurcation diagram is covered by a dense web of branches. In Fig. 7 this is shown for N = 80. The last saddle-node bifurcation takes place shortly before B = 1 and, for this even value of N, is followed by a final pitchfork bifurcation (creating the m = N/2 branch) at B = 1. Finally, one arrives at the outermost m = 1 branch, which has no positive eigenvalues. This is the only solution that is completely stable for B > 1. But as we have seen in the previous section, on its way from the uniform distribution to the single peaked state, the system can easily get stuck in one of the transient states (especially for large N) even though these are not strictly stable.
Throughout the paper, we have concentrated on the case where the N compartments are arranged in a cyclic manner. But in doing so, we have in fact also solved the non-cyclic case.
Here we close the hole in the wall between the 1st and Nth compartment, and consequently the flux between them is zero. The matrix M then takes the following form [differing from the cyclic one only in the first and last row, cf. Eq. (4)]:

           ( −1   1   0  · · ·  0   0 )
           (  1  −2   1  · · ·  0   0 )
  M^(nc) = (  :    :    :          :  )                                (13)
           (  0   0  · · ·  1  −2   1 )
           (  0   0  · · ·  0   1  −1 )

The eigenvalue problem for this matrix is treated in the Appendix. One eigenvalue is identically zero, and the other N − 1 eigenvalues are negative, just like for the cyclic system.
This leads to a bifurcation diagram that is indistinguishable from that of the cyclic case.
Even the stability along the branches is the same; only the magnitude (not the sign) of the eigenvalues of the Jacobi-matrix J is slightly different for the two cases.
Finally, it should be emphasized that the results of the present paper do not depend on the precise form of the flux function. We have concentrated on the form given by Eq. (1), but virtually everything remains true for other choices of this function, as long as it is a non-negative, one-humped function, starting out from zero at n = 0 (no flux if there are no particles) and going down to zero again for very many particles (no flux also in this limit, since -due to the inelastic collisions -the particles form an inactive cluster, unable to reach the hole in the wall anymore). Any function with these properties will produce a bifurcation diagram similar to that of Eq. (1).
In the likely case that the range of σ = dn_−/dn_+ is the same, extending from −1 (this value is attained in the maximum) to zero (in the outer regions of the flux function, for small and for large n), the bifurcation diagram will even be qualitatively the same.

APPENDIX

Eigenvalues of matrix M

The cyclic matrix M of Eq. (4) is associated with the second difference operator known from numerical schemes for solving second order pde's. Its eigenvalue problem can be solved exactly [10], and the same is true for M. The eigenvalues of M are given by:

μ_k = −2 + 2 cos(2πk/N) = −4 sin²(πk/N),                              (14)

where k runs from 0 to N/2 for N even, and from 0 to (N − 1)/2 for N odd. The corresponding eigenvectors are:

(v_k)_i = C_1 cos(2πki/N) + C_2 sin(2πki/N),                          (15)

with i = 1, . . . , N and arbitrary coefficients C_1 and C_2.
As we see, the first eigenvalue (k = 0) is zero and the corresponding eigenvector is 1 = (1, 1, . . . , 1). Physically, this eigenvector represents simultaneous filling of all N compartments, and the eigenvalue 0 expresses the fact that this is prohibited (because the number of particles in our system is conserved).
All non-zero eigenvalues are negative and (except the one for k = N/2 in the case of even N) doubly degenerate. This means that the corresponding eigenvectors span a twodimensional subspace, reflected by the two terms C 1 and C 2 in Eq. (15). Since M is symmetric, and therefore normal, linear subspaces corresponding to different eigenvalues are orthogonal. Especially, the eigenvectors of all non-zero eigenvalues span a N − 1 dimensional subspace perpendicular to 1 = (1, 1, . . . , 1).
The matrix M^(nc) for the non-cyclic case, given by Eq. (13), has a different set of eigenvalues:

μ_k = −2 + 2 cos(πk/N) = −4 sin²(πk/2N).                              (16)

Here k runs from 0 to N − 1. The corresponding eigenvectors are:

(v_k)_i ∝ cos(πk(2i − 1)/2N).                                         (17)

Just like in the cyclic case, the first eigenvalue equals zero, and all the others are negative.
However, they are non-degenerate and the corresponding eigenspaces are one-dimensional.
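Both eigenvalue formulas are easy to verify numerically (a sketch):

```python
# Numerical check of the spectra of the cyclic and non-cyclic M (sketch).
import numpy as np

N = 6
M = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
Mc = M.copy();  Mc[0, -1] = Mc[-1, 0] = 1        # cyclic arrangement
Mnc = M.copy(); Mnc[0, 0] = Mnc[-1, -1] = -1     # non-cyclic (closed ends)

k = np.arange(N)
print(np.allclose(np.sort(np.linalg.eigvalsh(Mc)),
                  np.sort(-4 * np.sin(np.pi * k / N) ** 2)))       # Eq. (14)
print(np.allclose(np.sort(np.linalg.eigvalsh(Mnc)),
                  np.sort(-4 * np.sin(np.pi * k / (2 * N)) ** 2))) # Eq. (16)
```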
Zero-eigenvalues of matrix J
Now we turn to the Jacobian matrices. We consider the cyclic version J, with components as given in Eq. (9), but the results are also valid for the non-cyclic version. This matrix can be written as the product of M and the diagonal matrix D = diag(F′(n_1), F′(n_2), . . . , F′(n_N)):

J = M · D.                                                             (18)

In the context of the bifurcation diagram, the main thing one wants to know is the number of positive eigenvalues of J for each branch. This is what we are going to determine now.
First we note that the eigenvalues of J are real, even though the matrix is not symmetric.
This is a consequence of the following similarity relationship between J and J†:

J† = D · J · D^{−1}.                                                   (19)

This implies that J and J† have the same eigenvalues, and hence they must be real.
Because M is singular, J must be too (it has a zero eigenvalue) and so its determinant det(J) is zero. More explicitly:

det(J) = det(M) det(D) = det(M) ∏_{k=1}^{N} F′(n_k) = 0,              (20)

where, for a fixed point with m filled compartments, the product term equals [F′(n_+)]^m [F′(n_−)]^{N−m}. For the other eigenvalues we have to look at the characteristic equation det(J − λI) = 0.
This is a polynomial expression in λ, of which the constant term is zero since it is equal to det(J). The coefficient L of the linear term is:

L = − Σ_{k=1}^{N} det(J^{(k,k)}),                                      (21)

where the matrix J^{(k,k)} is the (N − 1) × (N − 1) matrix obtained from J by deleting its k-th row and its k-th column. Writing the characteristic polynomial as ∏_j (λ_j − λ) over the eigenvalues λ_j of J, the coefficient of the linear term equals −Σ_k ∏_{j≠k} λ_j; in the right-hand side of this expression, the only product that survives is the one that does not contain the trivial (zero) eigenvalue. So:

L = − ∏_{λ_j ≠ 0} λ_j.                                                 (22)

Alternatively, the determinant of J^{(k,k)} in Eq. (21) can be written in terms of det(M^{(k,k)}), by deleting the k-th factor from the product in Eq. (20):

det(J^{(k,k)}) = det(M^{(k,k)}) ∏_{l≠k} F′(n_l).                       (23)

It can be shown that for all k the determinant det(M^{(k,k)}) is a constant, C, which equals (N − 1)(−1)^{N−1} in the cyclic, and (−1)^{N−1} in the non-cyclic case. Thus, Eq. (23) reduces to:

L = −C Σ_{k=1}^{N} ∏_{l≠k} F′(n_l).                                    (24)

For a fixed point with m filled compartments, we can write (using that in the above summation each of the products misses either an F′(n_+) or an F′(n_−)):

L = −C [F′(n_+)]^{m−1} [F′(n_−)]^{N−m−1} [m F′(n_−) + (N − m) F′(n_+)]
  = −C [F′(n_+)]^{m−1} [F′(n_−)]^{N−m} [m + (N − m) σ].                (25)

From this equation we conclude that L becomes zero at σ = −m/(N − m). This is exactly the bifurcation condition already given in the main text [Eq. (8)]. Also, with Eq. (22), we see that an eigenvalue crosses zero at this value of σ.
It can be shown, by a similar analysis, that the coefficient of the quadratic term is not equal to zero at σ = −m/(N − m), so not more than one of the eigenvalues changes sign at the bifurcation.

To count the negative eigenvalues of J, we write D = F′(n_−) diag(1, . . . , 1, σ, . . . , σ), where the last m diagonal entries correspond to the well-filled boxes. The precise ordering of the factors is not essential for the following argument, so we may choose the above order for notational convenience.
The factor F′(n_−) is always positive, so we only have to deal with M · D. Note that only D depends on σ and that in the limit σ → 0 this matrix becomes [11] M · P, with P = diag(1, . . . , 1, 0, . . . , 0) the projection onto the first N − m coordinates. Instead of taking the matrix J_0 = M · P as input for solving our eigenvalue problem (in the limit σ → 0), we will rather look at the matrix P · M · P, which is symmetric and has the same eigenvalues as J_0.
For proof of the last statement, let µ be a (non-zero) eigenvalue of J_0: J_0 · x = µx. Then: (P · M · P) · (P · x) = P · (M · P · x) = µ(P · x). Note that P · x ≠ 0, because otherwise also J_0 · x = M · P · x would be zero, contradicting the assumption that µ is non-zero. This completes the proof.
The matrix M is negative semi-definite. This means that M has only negative or zero eigenvalues or, equivalently, that the inner product ⟨x, M · x⟩ ≤ 0 for all x. This means that also P · M · P is negative semi-definite, because: ⟨x, P · M · P · x⟩ = ⟨P · x, M · (P · x)⟩ = ⟨y, M · y⟩ ≤ 0, with y = P · x. In conclusion, J_0 has negative and zero eigenvalues only.
The remaining task is to identify the number of negative eigenvalues, or otherwise stated, the rank of the matrix J 0 . The statement which we shall prove is that rank(J 0 ) = rank(P) = N − m.
Proof: Note that the image Im(P) of P is spanned by the first N − m unit vectors of R^N, while its kernel Ker(P) is spanned by the last m unit vectors. Now, for all x ∈ Ker(P) it holds that J_0 · x = M · (P · x) = 0, so Ker(P) ⊂ Ker(J_0).
On the other hand, for all y ∉ Ker(P) one has P · y ≡ z ≠ 0, with z ∈ Im(P), and therefore J_0 · y = M · z ≠ 0 because of Eq. (28b). This means that y ∉ Ker(J_0), and thus Ker(P) ⊃ Ker(J_0). Together these two results prove that Ker(P) = Ker(J_0), so obviously the rank of the two matrices must be equal. Since rank(P) = N − m, this is also the rank of J_0, which completes the proof.
In short, we have shown that in the limit σ → 0, the Jacobi-matrix J has N − m negative eigenvalues.
In the opposite limit, σ → −∞, we write J_{−∞} = F′(n_+) M · Q, with Q = diag(0, . . . , 0, 1, . . . , 1). Again, Q is a projection matrix, which now projects R^N to the subspace spanned by the last m unit vectors, so Q is complementary to P. Following the same line of reasoning, but keeping in mind that now the constant factor in front of J_{−∞} is negative, we find that in the limit σ → −∞, the matrix J has m positive eigenvalues.
Introductory Chapter: Development of Assessment Models to Support Pollution Preventive and Control Decisions
Introduction
The continuous increase in human activities affects the environment in notable ways; these effects need to be monitored and, when appropriate, controlled to ensure the sustainability of our lives. Environmental pollution is one of the major problems associated with these activities; it is initiated when a substance is released into the environment in a way that prevents its natural restoration [1,2]. These releases can be classified as planned and uncontrolled releases. The first class is part of routine human activity, where discharge is performed after complying with the regulatory requirements, whereas uncontrolled releases are associated with accidents and non-regulated activities [1]. Uncontrolled releases and historical practices have led to several contamination problems, so restoration or remediation programs are being initiated to keep these problems from spreading [2]. Currently, preventing and controlling environmental pollution and restoring affected environmental systems receive great attention globally. This attention has been translated into issuing stronger regulations and allocating natural and human resources to support pollution prevention and control activities. In this respect, a continuously increasing research effort has been dedicated to investigating new materials and/or systems and evaluating their potential applications in preventing and controlling environmental pollution, that is, wastewater, gaseous, and solid waste management, and in and ex situ remediation projects. Table 1 lists some pollution control and prevention systems and their classifications in terms of the scientific bases of the technologies used. These investigations are supported by enormous efforts to understand, simulate, predict, and decide on the performance of these materials and systems under predefined conditions using a wide range of models. In this context, kinetic models are applied to:

1. assess the formation and/or evolution of the system and its subsystems;

2. assess, control, and optimize the chemical reactions used in different waste treatment technologies;

3. design and optimize the operation of remediation projects; and

4. support the decision-making process at regulatory agencies and operational facilities during different life cycle phases of pollution control and prevention systems, that is, planning, design, licensing, etc.
Modeling, by definition, is an abstraction of the real system in which the essential features, events, and processes (FEPs) that affect the performance of the studied system are represented [3,4]. Generally, modeling efforts are divided into research and assessment models. Research (process) models use laboratory and field experiments to identify the FEPs that affect one or more subsystems, whereas assessment models link the important processes (determined from research models) to predict the overall system performance [5,6]. Figure 1 illustrates the integration of research and assessment models, in which the studied subsystems are characterized and the factors that affect their behavior are identified experimentally. Models are then used within the research efforts to interpret, extrapolate/interpolate, and optimize the collected data; the modeling results are used to evaluate and rank the FEPs that affect the system. In assessment models, the important FEPs are linked to support the problem formulation and basic system description, and then conceptual and computational models are constructed, verified, and used [5-11]. For instance, the quantification of the effect of time on pollutant migration in terrestrial, aquatic, and/or atmospheric subsystems is usually conducted by measuring the concentrations of the major pollutants at incremental times at different distances from the source. Experiments are run for a specified time determined by the temporal scale of the study. The collected experimental data are analyzed to quantify the processes that control the migration. This analysis might include the use of simple empirical, semi-empirical, or mechanistic mathematical models that allow a clear understanding of the nature of the processes that affect the migration. In terrestrial subsystems, these processes might include percolation, retardation, biodegradation, advection, and hydrodynamic dispersion [8,11]. In the subsequent sections, the development of assessment models to support the decision-making process is illustrated, with special emphasis on the prediction of pollutant migration. In this respect, the iterative nature of assessment modeling is overviewed, the conceptual model is introduced, and some conceptual models that could be used to predict pollutant migration are illustrated. The selection of computational models is then presented, and some simple models that could be used to estimate migration in terrestrial subsystems are summarized.
Iterative nature of the assessment modeling
Assessment models are used to support the decision-making process during different life cycle stages of any pollution prevention and/or control system, for example, siting a waste management facility or designing a remediation program. Their outputs should provide assurance that the system will be sited, designed, operated, etc., in a manner that complies with the safety requirements issued by the regulatory body. Assessment modeling starts with problem formulation and a basic system description based on the available system information. During problem formulation, the assessment objectives and audiences, regulatory framework, system boundaries, spatial and temporal scales, stage of project development, critical receptors (affected groups), adopted assessment approaches, nature of assumptions, data availability, level of accuracy, cost, and uncertainty treatment should be clarified [4]. The level of assessment complexity depends largely on the national regulations and the state of project development. Assessment modeling is an iterative process, in which basic system data are used to develop a simple model that contains all essential FEPs derived from the basic system description. The model is then verified using system-specific data to check the adequacy of its predictions. If adequate simulation results are obtained, the model is applied; otherwise, more system-specific data should be collected to help improve the model predictions. Figure 2 illustrates the iterative nature of the modeling process and its relation to system-specific data, in which the complexity or simplicity of the developed model is determined by the stage of development of the studied system and the availability of system-specific data [11,12]. The model developed at each iterative stage is produced by a multi-step process that includes the development of conceptual and computational models (the mathematical model and the tool that solves it) [5-9].

Figure 1. Integration of research and assessment models in studying a system.
Development of conceptual model for pollutant migration assessments
A conceptual model is defined as "a simplified representation of how the real system is believed to behave based on a qualitative analysis of field data" [11]. The development of a conceptual model starts with a clear determination of the available information and the knowledge gaps about the system. Subsequently, the essential FEPs and their interactions in each subsystem are identified, and the assumptions made to include or exclude any of these FEPs are highlighted based on the results of the research models [11]. Finally, flowcharts are used to describe graphically the relationships between the different processes in the different physical subsystems. It should be noted that the conceptual model can be imperfect if over- or under-simplification of the studied system is used: over-simplification can lead to an ineffective model with large uncertainties, while under-simplification can lead to a complex model that raises the project costs. An imperfect conceptual model can result from an incomplete problem identification/assessment context, wrong assumptions made in developing the conceptual model, and poor identification of the important processes.
Conceptual models are usually constructed based on a source-pathway-receptor analysis, in which the pollution sources are defined by investigating the driving forces and duration of the releases for each pollutant, the routes of pollutant transport between the different physical subsystems are determined, and the receptor exposure mechanisms and durations are identified [9,13,14]. Below are some examples that illustrate the construction of conceptual models for pollutant migration into different subsystems, as could be developed to support the pollution prevention and control decision-making process.
To characterize the extent of contamination problems due to a contaminant spill, samples need to be collected from the potentially affected subsystems, that is, groundwater, surface water, air, and soil and subsoil. The sampling procedure should consider both the main pollutants and the subsystem properties, for example, pollutant concentrations in the different subsystems, water pH and velocity, wind velocity, etc. The characterization results are analyzed within the research modeling efforts, and the results of this analysis determine the complexity of the model. Based on these results, a homogeneous or nonhomogeneous subsurface may be considered to estimate pollutant percolation and sorption, and the exclusion or inclusion of biodegradation and aquifer recharge as sinks or sources for pollutants in the subsurface and surface water is determined. In this case, different terrestrial and atmospheric exposure pathways to receptors downstream of the contamination source were identified as the main exposure routes. Figure 3 illustrates the main processes that can lead to pollutant migration or attenuation from a contaminant spill into different subsystems. The pollutants are assumed to be transported by percolation, surface runoff, and evaporation, and attenuation is assumed to occur as a result of sorption onto the subsurface and biodegradation within surface water, groundwater, and the geosphere.
To determine the worker dose in a radioactive waste incinerator facility during the planning phase for the transition from batch to continuous operation, a conceptual model was constructed [14]. The pollutants are assumed to be transported through the air via an advective-diffusive process, and the exposure routes were determined to include inhalation of gaseous pollutants (the main exposure route in that study), direct dermal exposure, and ingestion of contaminated water (Figure 4).
A generic conceptual model to quantify the effect of pesticide application on the environment is suggested by the US EPA (Figure 5) [15]. The model represents terrestrial exposure pathways, where the pollutants (pesticides) are transported through the atmospheric and aquatic subsystems and are assumed to affect terrestrial receptors, that is, plants, invertebrates, and vertebrates. The exposure routes include inhalation, dermal exposure, and ingestion, with a detailed characterization of the dietary routes.
Computational representation of the conceptual model
The development of a computational model that accurately represents the conceptual model is a crucial task, as the accuracy of the obtained results will be used to judge whether the modeling effort is sufficient to represent the system or whether there is a need to acquire more field data and develop an updated model (Figure 2). For a simple conceptual model, a simple empirical model could be used; as site-specific information becomes available, a more realistic model could be used [13]. The type of mathematical representation of the conceptual model is defined during problem formulation, and the selection of the appropriate model is bounded by [4,11]:

1. System dimensions: a decision should be made on whether one, two, or three dimensions will be used to represent the system.

5. Homogeneous or nonhomogeneous system.

6. Type of flow and transport process: whether the flow occurs via intergranular or fissure flow, and whether the transport is governed by advection or hydrodynamic dispersion.
During the development of a mathematical representation, the studied system is usually divided into subsystems. For the conceptual model presented in Figure 3, the system could be divided into a source subsystem, which describes the mobilization of the pollutant from the source, and terrestrial migration, atmospheric transport, and receptor subsystems. Table 2 shows some simple models that could be used to develop a mathematical representation of pollutant migration in the terrestrial compartment [5,16-20]. This table presents models that could be used to estimate both flow parameters (infiltration/flow rate, travel time, and average water velocity) and transport parameters (hydrodynamic dispersion, distribution coefficient, and retardation coefficient) for homogeneous and nonhomogeneous soil under saturated and vadose conditions.

[Table 2: Simple models for estimating flow and transport parameters in the terrestrial compartment. Inputs include the solution concentration (C, ppm) at the initial (i) and final (e) states, the solution volume (V, l), and the soil weight (m, g) for the distribution coefficient; the soil density (ρ, kg/m³) and soil porosity (ε) for the retardation coefficient (Rf) in the vadose zone; and the Freundlich constants (Kf, mg/g; n, indicative of the relative sorption capacity) for the retardation factor assuming a Freundlich isotherm.]
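To make the Table 2-style estimates concrete, the following Python sketch chains two of the simplest relations together: a batch-test distribution coefficient and the linear-isotherm retardation factor, used here to retard a travel-time estimate. These are the standard textbook forms implied by the table inputs listed above; the numerical values are invented for illustration.

```python
# Minimal sketch (assumed standard forms):
#   Kd = (Ci - Ce) * V / (Ce * m)      batch distribution coefficient [l/kg]
#   Rf = 1 + (rho_b / eps) * Kd        linear-isotherm retardation factor
#   t_solute = Rf * L / v              retarded travel time
Ci, Ce = 10.0, 2.0         # initial/final solution concentration (ppm)
V, m_soil = 0.1, 0.010     # solution volume (l), soil mass (kg)
rho_b, eps = 1600.0, 0.35  # bulk density (kg/m^3), porosity (-)

Kd = (Ci - Ce) * V / (Ce * m_soil)    # l/kg
Rf = 1.0 + (rho_b / eps) * Kd * 1e-3  # convert l/kg -> m^3/kg before use

L_path, v = 50.0, 0.2                 # flow-path length (m), pore velocity (m/day)
t_water = L_path / v                  # unretarded travel time (days)
t_solute = Rf * t_water               # retarded travel time (days)
print(f"Kd = {Kd:.1f} l/kg, Rf = {Rf:.0f}, solute travel time = {t_solute:.0f} days")
```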
Fluorinated Nucleotide Modifications Modulate Allele Selectivity of SNP-Targeting Antisense Oligonucleotides
Antisense oligonucleotides (ASOs) have the potential to discriminate between subtle RNA mismatches such as SNPs. Certain mismatches, however, allow ASOs to bind under physiological conditions and result in RNA cleavage mediated by RNase H. We showed that replacing DNA nucleotides in the gap region of an ASO with other chemical modifications can improve allele selectivity. Herein, we systematically substitute every position in the gap region of an ASO targeting the huntingtin gene (HTT) with fluorinated nucleotides. Potency against mutant HTT (mtHTT) and wild-type HTT (wtHTT) mRNA is determined in cell culture, and RNase H cleavage intensities and patterns are investigated. This study profiled five different fluorinated nucleotides and showed them to have predictable, site-specific effects on RNase H cleavage, and the cleavage patterns were rationalized from a published X-ray structure of human RNase H1. The results herein can be used as a guide for future projects where ASO discrimination of SNPs is important.
INTRODUCTION
Antisense oligonucleotides (ASOs) bind to their cognate mRNA by Watson-Crick base-pairing and modulate its processing to produce a pharmacological effect. 1 ASOs that function through the RNase H-based antisense mechanism were first discovered more than three decades ago when administration of exogenous short DNA oligonucleotides was shown to direct RNA reduction in cells. 2 Since those initial reports, extensive studies have been performed to improve the drug-like properties of RNase H active ASOs. 3 This body of work resulted in the use of phosphorothioate-modified oligonucleotides 4 and development of the "gapmer" design, i.e., a DNA gap region of 8-16 nt flanked on either end with 2′-modified nucleotides. 4 More than 35 ASOs with this general design are currently in various stages of clinical trials, and one ASO, Kynamro, was recently approved for the treatment of homozygous familial hypercholesterolemia. 5 Following introduction of the gapmer design, 6,7 the majority of studies aimed at improving ASO properties have focused on modifying the wing chemistry. 2′-Modified nucleotides, such as 2′-O-methoxyethyl RNA (MOE), which enhance ASO RNA-binding affinity and metabolic stability, have been extensively employed. 8 Replacing some or all of the MOE nucleotides in the wings with modifications such as locked nucleic acid (LNA) or S-constrained ethyl (cEt), which further enhance RNA-binding affinity, provided ASO designs with improved activity in animal models. 9,10 In contrast to modification of the wing region, introducing chemical modifications in the gap region of RNase H active ASOs has been investigated less extensively. 11,12 We recently showed that introducing chemical modifications like 2-thio-deoxythymidine 13 and 5′-substituted DNA analogs 14 in the gap region of ASOs can enhance the discrimination of SNPs between two alleles in the huntingtin gene. The resulting ASOs showed potent reduction of mutant HTT (mtHTT) mRNA and protein in patient fibroblasts and in a mouse model of Huntington's disease (HD), 15 an autosomal dominant disorder that is thought to result from expansion of a polyglutamine-encoding CAG tract in the huntingtin gene. Allele-selective reduction of mtHTT by directly targeting the CAG repeat has also been reported. 16 As part of the above effort, we also showed that introducing chemical modifications such as 2′-fluoro RNA (FRNA) or 2′-arabino fluoro RNA (FANA) near the 5′ end of the gap region in ASO control (CNTR) (Figure 1) can modulate allele selectivity. 17 FRNA and FANA differ only in the relative configuration of the fluorine atom at the 2′-position of the nucleotide furanose ring. However, the highly electronegative fluorine atom produces local changes in the conformation of the nucleotide furanose ring, 18 which can affect the biological properties of the modified ASOs. 19 To further understand how fluorinated modifications in the gap region can modulate the allele selectivity of SNP-targeting ASOs, we sequentially replaced each DNA nucleotide in ASO CNTR with FRNA or FANA. FRNA is an RNA analog adopting a C3′-endo conformation, whereas FANA adopts an unusual O4′-endo conformation (Figures 1B and 1C) 20 and is one of the very few nucleotide modifications reported to increase RNA cleavage by RNase H1.
21,22 In addition, we also investigated the effect of replacing DNA nucleotides with other fluorinated modifications, such as 2′-fluoro-hexitol nucleic acid (FHNA), 2′-fluoro cyclohexenyl nucleic acid (F-CeNA), and 2′-fluoro N-methanocarba nucleic acid (F-NMC), on modulating allele selectivity. FHNA and F-CeNA are ring-expanded analogs of FRNA. However, the six-membered hexitol ring in FHNA is more rigid and mimics the RNA-like C3′-endo conformation of the furanose ring (Figures 1B and 1C). 23 In contrast, the cyclohexenyl ring in F-CeNA is more flexible and was shown to assume either the DNA- or RNA-like sugar conformation depending on the sequence context upon its incorporation. 24 The 3.1.0 ring system in F-NMC is locked in the RNA-like C3′-endo conformation. 25 In this report, we show that introducing fluorinated modifications within the gap region can modulate the cleavage of the ASO-RNA heteroduplexes by human RNase H1, and that this can produce profound changes in allele selectivity when targeting SNPs with gapmer ASOs.
ASO Design
We selected a 3-9-3 gapmer ASO with mixed MOE/cEt wings as a starting point because it exhibits good potency and intermediate allele selectivity (ASO CNTR, 9-fold selectivity; Figure 1A). 26 Each fluorinated nucleotide was then walked systematically through the gap to examine its effect on potency and selectivity in cell culture, to investigate RNase H cleavage patterns, and to determine relative amounts of RNA cleavage (for F-CeNA and F-NMC, we only had access to the pyrimidine analogs).
We had previously characterized the cleavage patterns produced by recombinant human RNase H1 for ASO duplexes with mtHTT RNA and wild-type HTT (wtHTT) RNA, representing the mtHTT and wtHTT mRNA, respectively (Figures 1D and 1E). 17 Five cleavage sites (a, b, c, d, and e) were identified on mtHTT RNA, whereas only three cleavage sites were detected on wtHTT RNA. ASO CNTR has a T:G mismatch with wtHTT RNA corresponding to the SNP site in the HTT mRNA, which ablates cleavage sites a and b on wtHTT RNA. The loss of RNase H1 cleavage sites reduces degradation of the wtHTT mRNA and provides a rationale for the modest selectivity observed with this ASO in patient-derived fibroblast cells and in mice bearing the human HTT transgene. 26 Previous work by Nowotny et al. 27 had shown that the catalytic domain of human RNase H1 has a unique 7-nt footprint (Figures 1D and 1E, gray shaded region on ASO) on the DNA-RNA heteroduplex for a given cleavage site on the RNA strand. It was anticipated that introducing modifications at every position in the gap region could have a unique and differential impact on the individual cleavage sites because the modification would be located at a different position in the footprint for each cleavage site.
Potency toward Complementary mtHTT and G Mismatched wtHTT RNA in Cell Culture
ASOs were tested in GM04022 fibroblast cells heterozygous at SNP rs7685686 (A-to-G mismatch). ASOs were transfected using electroporation, and RNA knockdown was evaluated 24 hr post-treatment using an allele-selective quantitative real-time polymerase chain reaction (rtPCR) approach that allows simultaneous monitoring of mtHTT and wtHTT RNA reductions. 17 Chemical modification and position in the gap have a pronounced effect on potency against mtHTT (Figure 2). Moving FRNA 1 or 2 nt into the gap results in slightly improved potency relative to the control ASO (Figure 2B, positions A4 and T5); however, further walking 1 and 2 nt into the gap led to significant reductions in potency (positions T6 and G7). Interestingly, positioning an FRNA modification in the middle of the gap (position T8) restores the potency relative to the control ASO. Further moving FRNA through the gap had only minor effects on potency except at the last position (C12), where potency is improved. Potency against T:G mismatched wtHTT is also very position dependent, but the trends are not similar to the on-target potencies. FRNA allele selectivity is improved (i.e., wtHTT potency is reduced) upon moving the modification from the 5′-gap junction toward the center (Figure 2, top, positions 4-7); however, positioning FRNA across from the mismatch (T8) reduces selectivity as compared with the control ASO. Further moving FRNA to the 3′ part of the gap leads to high selectivity at position A10 and low selectivity at positions C9 and C12.
Moving FHNA through the gap has similar but more pronounced effects relative to FRNA (Figure 2). Potencies against mtHTT are in most cases reduced as compared with FRNA at the same position, and only at a few positions similar to FRNA (at positions A4, T8, and C12). Inhibitor concentrations at which the target is reduced by 50% (IC50) for wtHTT are generally higher for FHNA relative to FRNA, which in most cases results in better allele selectivity. It is important to note that the general trends when moving FRNA and FHNA through the gap are similar against mtHTT as well as wtHTT (Figures 2B and 2C). The similar properties but more pronounced effects when comparing FHNA with FRNA can be rationalized by comparing their conformations (Figure 1C). Although FHNA features a six-membered sugar ring, it positions the nucleobase and the O3′- and O5′-substituents very similarly to a 3′-endo furanose nucleotide, and because of the rigid cyclohexane chair conformation, FHNA behaves similarly to 2′,4′-constrained nucleotides like LNA. 23 FANA exhibits properties that are markedly different relative to the RNA-like modifications. Potency against complementary mtHTT RNA is similar to or better than the control ASO at every position in the gap (Figure 2A). This is in line with previous reports showing that FANA is one of a handful of chemically modified nucleotides that increase the potency of ASOs when positioned in the gap. 28 Generally, potency against mismatched wtHTT is also increased relative to control, but there are two notable exceptions: FANA at position A10 exhibits excellent selectivity; interestingly, FRNA and FHNA are also very selective at this position. Also, positioning FANA across from the SNP (at T8) results in improved allele selectivity, which is especially interesting because a large number of modifications were previously evaluated at this position, but FANA is the only one that significantly improves allele selectivity. 17 It is also worth noting that although FANA generally improves potency for both mtHTT and wtHTT, in most cases the allele selectivity is reduced or similar to the control ASO (Figure 2A).
Human RNase H1 RNA Cleavage and Comparison with In Vitro Data
The very position-dependent effects on cell culture RNA reduction likely arise from differential interactions with RNase H1. Therefore, to better understand the cell culture data, we measured human RNase H1-mediated RNA cleavage for ASOs duplexed with either 19-mer complementary RNA (mtHTT) or G mismatched RNA (wtHTT).
There is a very general trend when walking FRNA and FHNA through the gap with complementary RNA: as the modification is walked toward the center, less total RNA is cleaved, with the least amount cleaved when modifying position T8 (Figure 2C). This observation is consistent with previous reports that most chemically modified nucleotides are disruptive toward RNase H-mediated cleavage. 11,12 At most positions, however, the reduction in total RNA cleavage is small, and it is possible that small changes will have little, if any, effect in biological systems where ASOs can behave in a catalytic manner to degrade RNA. 29,30 FANA again behaves differently and exhibits similar or increased RNA cleavage at all positions except when positioned at T8.
RNase H1-mediated RNA cleavage has very distinct trends for each modified nucleotide against mtHTT RNA, whereas it is more complicated against RNA with a centrally positioned G mismatch (wtRNA). FRNA and FHNA have similar trends, but the magnitude is different, and FHNA has the most pronounced effects. When moving the modification from the 5′ end of the gap toward the center, less wtHTT RNA is cleaved (from A4 to G7; Figure 2D); however, modifying position T8 results in wtHTT RNA cleavage similar to the control ASO. Further walking the modification toward the 3′ end of the gap has minor effects on RNA cleavage. FANA substitution results in similar or increased wtHTT RNA cleavage, which is especially notable at positions T5, T6, and G7. The only position where FANA substitution results in considerably less wtHTT RNA cleavage is across from the SNP (position T8); unfortunately, this position also has reduced mtHTT RNA cleavage, although cell culture potency is similar to the control ASO.
Good correlation between cell culture potency and human RNase H1 cleavage will make it much easier to develop more mismatch-selective ASO designs because biochemical RNase H-mediated RNA cleavage assays are much simpler and faster and give better signal-to-noise ratios relative to cell culture and animal experiments. For FHNA there is good linear correlation for wtHTT reduction, whereas mtHTT reduction shows little correlation. In general, there is good correlation between cell culture potency and human RNase H1 cleavage when comparing only the 5′ end of the gap (A4, T5, T6, and G7), but little overall linear correlation when modifying the 3′ end of the gap. The 5′ end of the gap is where the catalytic domain of human RNase H1 binds the heteroduplex. 27 It is possible that other factors in the cell are important for human RNase H1 interactions with the 3′ end of the gap. 31 Notably, the RNase H biochemical assay does not explain the good selectivity of any of the modifications when substituted at position A10 (Figure 2A) or the good selectivity exhibited by FANA when inserted across from the SNP site. ASO potency, however, is affected by many factors in addition to RNase H. ASO binding to intracellular proteins has been shown in some cases to alter potency 32 and, in particular, oligonucleotides incorporating fluorinated nucleotides occasionally exhibit enhanced binding to certain proteins. 33 Furthermore, it cannot be ruled out that chemical modification patterns can affect ASO uptake pathways. 34
Structural Analysis of the RNase H1 Cleavage Footprint for Gap-Modified ASOs
Human RNase H1 is a ubiquitously expressed enzyme that comprises three domains: catalytic, RNA-binding, and linker domains. The crystal structures of the catalytic and RNA-binding domains of human RNase H1 bound to DNA-RNA heteroduplexes have been described by Nowotny et al. 27 and were used to better understand the cleavage patterns observed in our experiments. The catalytic domain of human RNase H1 has a unique 7-nt footprint on the DNA-RNA heteroduplex for every individual cleavage site (Figures 3A and 3B). As a result, each nucleotide in the DNA strand will be located at a different position within the footprint based on the register in which the enzyme binds the heteroduplex (Figure 3C).
The catalytic domain of human RNase H1 makes several key contacts with the DNA and RNA strands of the heteroduplex (Figures 3A and 3B). The side chains of amino acids Asn151, Asn182, and Gln183 make several key H-bonding interactions with the nucleobases in the major groove. As a result, introducing nucleobase modifications such as 2-thio dT can disrupt specific cleavage sites depending on their position of incorporation in the ASO gap region. 13 The enzyme also makes key interactions with a phosphodiester linkage of the DNA strand within the phosphate binding pocket formed by amino acids Arg179, Thr181, and Asn240. Another key site of interaction is within the hydrophobic DNA-binding channel formed by the aromatic amino acids Phe213, Trp221, and Trp225. The aromatic side chains of these amino acids make close hydrophobic contacts with the bottom face of the DNA sugars. This provides specificity for DNA over RNA as the 2 0 -hydroxyl groups in RNA disrupt the hydrophobic contacts. Lastly, the enzyme makes a weak contact (Ser233) with the non-Watson-Crick face of the nucleobase at the last position of the footprint. This contact can be disrupted using bulky C5-modified pyrimidine nucleobases. 13 The enzyme also produces numerous structural distortions in the DNA strand, which provide insights into why certain modifications are tolerated at specific positions in the gap region, but not at others ( Figure 3B). For example, the DNA sugar of the first nucleotide of the footprint for a given cleavage site is in an RNA-like C3 0 -endo conformation. The DNA sugars of the second and third nucleotides are in the canonical DNA-like C2 0 -endo sugar pucker, whereas the DNA sugar of the fourth nucleotide is in a conformation that resembles O4 0 -endo. The conformation of the DNA sugars of nt 5 and 6 resemble an RNA-like C3 0 -endo sugar pucker, whereas the DNA sugar of nt 7 appears to be somewhat flexible. The paradoxical requirement for the DNA sugars at positions 4, 5, and 6 to be in an RNA-like conformation is better understood from a structural perspective. The RNA-like conformation positions the 3 0 -phosphodiester linkages in a pseudo-equatorial orientation that facilitates closer hydrophobic contacts with the bottom face of the DNA sugars. Lastly, the complex structural requirements are further complicated as each nucleotide in the gap region is located at a different position within the 7-nt footprint for any given cleavage site ( Figure 3C).
Analysis of Cleavage Patterns for FHNA Gap-Modified ASOs in Matched and Mismatched Duplexes
Introduction of FHNA at the 3′ edge of the gap region (C12) has no noticeable impact on the cleavage pattern relative to the control ASO CNTR with a 9-base DNA gap (Figures 4A and 4B). FHNA at C12 corresponds to position 6 for cleavage site "a" and position 7 for site "b," but is not part of the footprint for sites "c," "d," and "e." FHNA at T11 corresponds to position 5 for cleavage site "a," position 6 for "b," and position 7 for "c" (Figure 4D). FHNA at T11 ablates cleavage site "a," but not sites "b" and "c." This suggests that FHNA is tolerated at positions 6 and 7, but not at position 5 of the footprint. This is surprising because the DNA sugar at position 5 is in the RNA-like C3′-endo conformation that is mimicked by FHNA (Figure 4C). This suggests that the larger six-membered ring of FHNA is not accommodated within the hydrophobic DNA-binding channel or in the vicinity of the phosphate-binding pocket region of the catalytic domain of RNase H1.
Similarly, FHNA at A10 ablates "a" and "b"; C9 ablates "a," "b," and "c"; and T8 ablates "a," "b," "c," and "d" because it is located at positions 2, 3, 4, and 5 of the footprint for sites "a," "b," "c," and "d." Interestingly, FHNA at T8 does not ablate site "e" because it is now located at position 6 of the footprint for this cleavage site. Interestingly, cleavage site "e" is located at the 5′ edge of the gap region, where it is flanked with a cEt modification, which is "locked" in the RNA-like conformation. This tolerance for cEt at position 1 can be rationalized because the DNA sugar at position 1 in the crystal structure is in the RNA-like conformation that is mimicked by cEt.
This becomes apparent again when FHNA is incorporated at G7, which restores cleavage site "a." In this position, FHNA is at position 1 and cEt at position 7 of the footprint, where both modifications are tolerated. This ASO now effectively has a 5-base DNA gap that was previously shown to be the minimum length required for catalysis by RNase H1. 7 Along these lines, FHNA at T6 restores cleavage at sites "a" and "b"; FHNA at T5 restores "a," "b," and "c"; and FHNA at A4 restores "a," "b," "c," and "d," but ablates "e" because it is not tolerated at position 2 of the footprint.
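The footprint bookkeeping used in the preceding paragraphs can be mechanized. The toy Python helper below is our own illustration (not code from this study); it encodes the mapping implied by the positions quoted above, with gap positions numbered 4-12 as in Figure 1 and cleavage sites "a" through "e" indexed 0 to 4, and reproduces those quoted positions.

```python
from typing import Optional

SITES = "abcde"  # cleavage sites on mtHTT RNA, indexed 0-4

def footprint_position(p: int, site: str) -> Optional[int]:
    """Footprint position (1-7) occupied by the nucleotide at gap position p
    for a given cleavage site, or None if it lies outside the 7-nt footprint.
    The offset is inferred from the examples quoted in the text above."""
    pos = (p - 6) + SITES.index(site)
    return pos if 1 <= pos <= 7 else None

# Reproduce the cases discussed in the text:
assert footprint_position(12, "a") == 6      # FHNA at C12: position 6 for site a
assert footprint_position(12, "c") is None   # C12 outside the footprint of site c
assert footprint_position(11, "a") == 5      # FHNA at T11 ablates site a (pos. 5)
assert footprint_position(8, "e") == 6       # FHNA at T8 leaves site e intact
assert footprint_position(7, "a") == 1       # FHNA at G7 restores site a (pos. 1)
```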
The FHNA-modified ASOs versus the mismatched wtHTT RNA follow the same rules, except that the T:G mismatch ablates cleavage sites "a" and "b" because RNase H1 does not tolerate mismatches in the vicinity of the cleavage site. FHNA at positions T5, T6, and G7 ablates cleavage sites "c," "d," and "e," which translates to excellent allele selectivity in HD patient fibroblasts. However, FHNA at T6 and G7 also reduces overall cleavage of mtHTT RNA (Figure 2), which reduces potency, thus making the improved selectivity less interesting. Overall, all the observations for FHNA also apply to FRNA, given that FHNA generally mimics FRNA in its conformational properties. 18 However, the furanose ring in FRNA is more flexible, resulting in less intense ablation of certain cleavage sites for both matched and mismatched duplexes (supporting Figure S5).
Structural Analysis of the RNase H1 Cleavage Footprint for FANA Gap-Modified ASOs
Introducing FANA at C12, T11, and A10 had no noticeable effect on the RNase H cleavage patterns for mtHTT RNA relative to control ASO CNTR (Figures 5A and 5B). This suggests that FANA is tolerated at positions 4, 5, and 6 of the footprint. Slight ablation of "a" was seen with FANA at C12, suggesting that position 6 may not be a preferred position for FANA. Interestingly, FANA at C9 almost completely focused RNase H1 cleavage at site "a," suggesting that position 3 in the footprint is a highly preferred site for FANA. This preference is also seen with FANA at T8, G7, T6, and T5, which correspond to position 3 in the footprint for sites "b," "c," "d," and "e," respectively.
The strong preference for FANA at position 3 of the footprint is interesting because the DNA sugar at this position in the crystal structure is in the DNA-like C2′-endo conformation, which FANA can adopt but does not prefer. 20 This region of the crystal structure is where RNase H1 makes several intimate contacts with the nucleobases in the major groove, and the 2′-fluorine in the "ara" configuration in FANA might be able to modulate these interactions. Alternately, incorporation of a fluorine atom at the 2′ position of the furanose ring facilitates CH···O type interactions between the 2′-hydrogen atom and the 4′-oxygen of the adjacent nucleotide. 35 This could potentially help pre-organize the DNA strand for more efficient cleavage.
The FANA-modified ASOs versus the mismatched wtHTT RNA follow the same rules, except that the T:G mismatch ablates cleavage sites "a" and "b." However, because FANA is essentially tolerated at all positions of the footprint, complete ablation of RNase H1 cleavage on wtHTT RNA does not take place with FANA at any position in the gap. The improved selectivity seen with FANA at certain positions is more a result of enhanced potency versus the mutant allele as opposed to reduced activity versus the wild-type allele (Figure 2). However, the improved potency and selectivity with FANA (or FRNA) at A10 cannot be rationalized by analysis of cleavage patterns and suggest that additional factors may be involved in determining activity and selectivity for some ASOs.
F-CeNA Substitution
F-CeNA-modified ASOs generally exhibit mtHTT reductions that are very similar to the control ASO at the positions tested (Figure 6). wtHTT reduction and allele selectivity are in most cases similar to FRNA and FHNA; however, because of the good mtHTT potency at position T6, F-CeNA exhibits the best properties of all the modifications examined when placed at this position. F-CeNA has an RNase H cleavage pattern that at most positions resembles FRNA; position T6, though, is a notable exception (Figure 6C) because mtHTT RNA cleavage bands at the main cleavage sites a and c are at similar intensities to the CNTR ASO, whereas for FRNA they are significantly reduced. Thus, F-CeNA behaves similarly to the RNA-like modifications FRNA and FHNA, although it reduces RNase H-mediated RNA cleavage less than FRNA and FHNA.
Comparing North-Methanocarba with F-North-Methanocarba
To further investigate the effect fluorine substitution can have on ASO properties, we investigated 2′-F-North-methanocarba (F-N-MC) 36 in the huntingtin SNP system. Because the non-fluorinated nucleotide North-methanocarba (N-MC) is conformationally restricted in the 3′-endo configuration, 37,38 fluorine substitution will have minimal influence on the sugar pucker (Figure 7A). Previous biophysical examination, however, showed that F-N-MC improved thermal stabilization with cRNA relative to the parent nucleotide N-MC. 36 This stabilization was attributed to increased polarization of the nucleobase, likely resulting in improved Watson-Crick base-pairing and base stacking. 39 F-N-MC-substituted ASOs are generally slightly more potent than the parent N-MC ASOs in cell culture, whereas allele selectivity is similar (Figure 7C). The potency can be explained by the improved thermal stability of F-N-MC-containing oligonucleotides relative to N-MC, but favorable interactions with RNase H cannot be ruled out. At one position, though, the trend is reversed. When modifying position T11, the N-MC-substituted ASO is approximately 2-fold more potent than the F-N-MC-containing ASO (Figure 7C). This peculiar effect, though, can be explained by examining RNase H cleavage patterns (Figure 7B). Although the cleavage patterns are very similar for the two modifications, it is striking that at position T11, the F-N-MC-containing ASO has reduced cleavage at the normally main cleavage site "a." It is possible that the fluorine at position T11 has a negative impact on the RNase H footprint.
Conclusions
This work has systematically investigated the incorporation of fluorinated nucleotides in the gap region and finds a specific RNase H cleavage footprint for each nucleotide. Accordingly, it is possible to modulate RNase H cleavage when introducing fluorinated nucleotide modifications in the gap region, using rational design principles to generate more mismatch-selective ASOs. Fluorination of nucleotides is a straightforward method to alter the properties of nucleotides without adding much steric bulk. By incorporating such nucleotides systematically into the gap of an ASO targeting a huntingtin SNP, we have found multiple modifications and positions in the gap that significantly improve allele selectivity without affecting potency. Previous work has shown that at this SNP the ASO allele selectivity can be increased simply by decreasing the gap size from 9 to 7 nt. 17 Our work provides additional chemical design options to modulate the activity-selectivity profile of SNP-targeting ASOs for the treatment of autosomal dominant disorders.
Cell Culture mRNA Knockdown

GM04022 fibroblast cells were trypsinized and resuspended to a density of 400,000 cells/mL in growth medium prior to transfection with varying concentrations of ASO using electroporation (115 V, 6 ms). Cells were maintained at 37 °C and 5% CO2 in minimal essential medium containing 15% fetal bovine serum, non-essential amino acids, and penicillin-streptomycin. After 24 hr the cells were washed with Dulbecco's PBS and lysed. RNA was extracted using the QIAGEN RNeasy 96 kit, and human HTT mRNA alleles were quantitated using the qPCR assay C_2231945_10 at SNP rs362331 (Life Technologies). Mutant huntingtin (muHTT) and wtHTT mRNA levels were determined simultaneously using two different fluorophores: 6-carboxyfluorescein for muHTT and a fluorophore whose structure is not given by the commercial vendor (VIC) for wtHTT mRNA. Quantitative rtPCRs were performed on an ABI 7900 HT instrument using the QuantiTect Probe RT-PCR kit following the manufacturer's instructions. HTT mRNA levels were normalized relative to total RNA content measured using RiboGreen. All experiments were performed in duplicate, and data are expressed as means ± SD. Dose-response curves were analyzed using non-linear regression with normalized response and variable slope, and IC50 values were calculated using GraphPad Prism version 5. All dose-response curves are shown in the Supplemental Information (Figures S1-S4). Allele selectivity was calculated by dividing the IC50 for inhibiting wtHTT by the IC50 for inhibiting muHTT.

Human RNase H1 Cleavage Pattern and Cleavage Intensity

RNA was 5′ end labeled with 32P using 20 U of T4 polynucleotide kinase, 120 pmol (7,000 Ci/mmol) of [γ-32P]ATP, 40 pmol of RNA, 70 mM Tris-HCl, 10 mM MgCl2, and 50 mM dithiothreitol at pH 7.6. The labeling reaction was incubated at 37 °C for 30 min followed by heating at 90 °C for 1 min. Labeled RNA was purified using a 12% denaturing polyacrylamide gel. ASO (200 nM), unlabeled 19-mer RNA (100 nM), and a small amount of 32P-labeled RNA were mixed in hybridization buffer (20 mM Tris-HCl, 20 mM KCl [pH 7.5]) and heated to 90 °C for 2 min. To the hybridization mixture were added 0.1 mM tris(2-carboxyethyl)phosphine (TCEP), 1 mM MgCl2, and 40 U of RNaseOUT, and it was incubated at 37 °C for 1 hr. The human RNase H1 enzyme was incubated in dilution buffer (20 mM Tris-HCl, 50 mM KCl, 2 mM TCEP [pH 7.5]) for 1 hr at room temperature. Enzyme solution (4% volume relative to duplex solution) was added to the duplex solution and incubated at 37 °C. After 6 min, the reaction was quenched by addition of loading buffer and snap-frozen on dry ice. Cleavage products were separated using 12% denaturing polyacrylamide gel electrophoresis and quantitated on a PhosphorImager. The RNA sequences used herein were: 19-mer muHTT RNA, 5′-CUGGUGAUGACAAUUUAUU-3′; 19-mer wtHTT RNA, 5′-CUGGUGAUGGCAAUUUAUU-3′. The identity of RNA cleavage products was determined by separating cleavage products on ion-pairing reverse-phase HPLC and determining cleavage products using electrospray ionization mass spectrometry as previously described. 17
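As a concrete illustration of the dose-response analysis described above (non-linear regression with normalized response and variable slope, with selectivity taken as the ratio of IC50 values), here is a minimal Python sketch. It is our own reimplementation rather than the GraphPad Prism workflow used in the study, and the concentrations and responses are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(conc, ic50, hill):
    """Normalized response (100% remaining -> 0%) with variable slope."""
    return 100.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical dose-response data: % target mRNA remaining vs ASO concentration.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])            # uM, made-up
resp_mt = np.array([95, 85, 60, 30, 12, 5], dtype=float)      # % mtHTT remaining
resp_wt = np.array([99, 96, 88, 70, 45, 25], dtype=float)     # % wtHTT remaining

(ic50_mt, _), _ = curve_fit(logistic, conc, resp_mt, p0=[1.0, 1.0])
(ic50_wt, _), _ = curve_fit(logistic, conc, resp_wt, p0=[1.0, 1.0])

# Allele selectivity = IC50(wtHTT) / IC50(mtHTT), as defined in the text.
print(f"IC50 mt = {ic50_mt:.2f} uM, wt = {ic50_wt:.2f} uM, "
      f"selectivity = {ic50_wt / ic50_mt:.1f}-fold")
```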
ACKNOWLEDGMENTS
We acknowledge Prof. Marcin Nowotny for critical reading of the manuscript and Dr. Stanley Crooke for many useful discussions.
Does efficiency of the Nordic pension system evolve after crisis?
This study examines the efficiency of the pension systems in the Nordic countries in comparison to other European counterparts, using efficiency indicators proposed by Chybalski (2016). Principal Component Analysis (PCA) is used to check whether the Nordic countries form a cluster, that is, whether the Nordic countries can be distinguished from other European countries by their pension system efficiency before and after the 2008 crisis. PCA was applied to the years 2008 and 2013 to investigate whether the 2008 financial crisis changed pension system efficiency. According to our analysis, the pension systems in Iceland, Norway and Sweden are very efficient in terms of the labour market and form a cluster both before and after the financial crisis. Denmark and Finland do not differ significantly from the rest of the analysed European countries.
INTRODUCTION
The Three Worlds of Welfare Capitalism by Esping-Andersen (1990) paid attention to the various pension systems in the context of different welfare systems. Denmark, Finland, the Netherlands, Norway and Sweden had the highest number of socialist traits. However, their pension systems were assigned to different groups: Norway, Sweden and 'possibly' Denmark were tied to the universal system, whereas Finland was 'possibly' included in the corporatist group. The distinction here was based mainly on the public and private shares of the system. After Esping-Andersen's (1990) famous work, many investigated welfare regimes (e.g., Hicks & Kenworthy, 2003), but few inquired into pension system regimes (Soede & Vrooman, 2008). Soede and Vrooman (2008) used CatPCA analysis on 34 traits to categorize pension systems in EU and OECD countries. Pension system regimes were analyzed in two dimensions: Private/Funded and Pension Level Wealth. They also concluded that pension regimes do not fit the Esping-Andersen classification into corporatist, liberal and universal. Denmark and Sweden were assigned to the mandatory private regime, Finland to the corporatist and Norway to the moderate regime. On the other hand, Ebbinghaus (2012) argues that the pension systems in the Nordic countries represent different variations of the Beveridge system, as they provide a basic income and different private/public solutions through different multi-pillar systems: mandatory public pension (Sweden), mandated occupational pension (Finland) and negotiated occupational pension in Denmark.
In the famous World Bank (1994) report, introducing a three-pillar pension system to replace the state-run dominant scheme is suggested. Actuarial methods instead of defined benefit are also recommended. Some of the Nordic countries had pension systems constructed on a multi-pillar basis many years before the famous World Bank report. In Denmark, earnings-related pensions through agreements in the labor market were approved (the process was not fully completed until around 1990). In Sweden, the earnings-related ATP pension started in the early 1960s (NOSOSCO, 2008). Since the 1990s, the Nordic countries have reformed their pension systems, and the differences among them have deepened (Andersen et al., 2014). Sweden introduced notional defined contribution with an automatic balancing mechanism to maintain sustainability. Denmark increased the retirement age in 2006 and introduced incentives to stay longer in the labor market. The Norwegian system was widely reformed in 2011, when a flexible retirement age was introduced. In Finland, the public pension system, both means-tested and earnings-related, dominates, and recent reforms also introduced a flexible retirement age. The Icelandic pension system is characterized by a flexible retirement age as well. According to Soede et al. (2004), the Nordic welfare system is characterized by a large-scale overall security system and a moderate pension system, which means that the uniqueness of a Nordic pension cluster is rejected. Timonen and Kautto (2014) ask whether recent pension reforms have changed the ideals of the Nordic model. All residents are still covered by at least one pension scheme; however, provision is tied to retirement age and career history. This is evidence of adapting the Nordic model to rising longevity; nevertheless, the model loses some universalism.
Another approach to assessing pension system welfare regimes applies to efficiency. The Open Method of Coordination goals (adequacy, financial stability and modernization of pension systems) can be considered a measure of efficiency (Chybalski, 2012). Chybalski (2015) investigates efficiency through comparison of pension system functions, such as poverty alleviation, consumption smoothing and employment in age-specific groups, to pension expenditure as a share of GDP. This indicates how a particular aspect of the pension system influences the economy; for example, we assume that high pension expenditure relative to GDP discourages the elderly from working. As high employment is desirable, the higher the elderly employment in terms of pension expenditure to GDP, the more efficient the system. As previously stated, the Nordic countries pay attention to high participation in the labor market, and sustainability of the system is achieved by a universal minimum pension additional to the other pillars. Nilsson et al. (2016) discovered that in Sweden, participation of older employees rose after the recent financial crisis, including among those in low-skilled occupations. Larsen and Pedersen (2015) noted a small decline in labor force participation among 60-64 year olds at the onset of the 2008 crisis in Sweden and Norway, but in 2013 labor force participation was higher than in 2008. This indicates that, at least in some of the Nordic countries, the pension system reforms aimed at raising employment through postponing retirement were efficient. So variables reflecting efficiency in terms of the labor market should be considered.
The aim of this study is to show how the pension systems in the Nordic countries were immune to the 2008 financial crisis in terms of efficiency. In the late 1980s and early 1990s, financial crises took over the Swedish and Finnish economies. Norway had a banking problem in the mid-1980s, and the recent 2007-09 financial crisis hit the Icelandic economy the hardest among the Nordic countries. Over the 1990s, the Nordic countries took lessons from their economic downturns and reformed their welfare regimes, including the pension system. Many countries followed their solutions, so it is now harder to find their uniqueness when compared to other OECD and EU countries. While the Nordic countries can be found in the same welfare state regime cluster, their pension system regimes differ. It is worth checking whether they are similar in terms of efficiency. To do this task, Principal Component Analysis is performed.
LITERATURE REVIEW
In the early 1990s, a debate on restructuring pension funds rolled through the United States and Europe. Many countries reformed their pension systems by converting them into three-pillar systems, with at least one pillar funded. As the reforms were introduced, new problems arose. Funded pension funds were said to yield higher returns for all retirees, but at the same time the current generation must pay taxes for the past generation and save money for its own retirement (Geanakoplos, Mitchell & Zeldes, 1998; Kalyugina et al., 2015). This example shows that the outcome of reforming a pension system depends on its current state. Orszag and Stiglitz (1999) present three categories of myths on privatizing pension funds: macro, micro, and political economy, proving that a funded system is no better than an unfunded one.
'Pension system efficiency' is a rather new term, in contrast to 'pension fund efficiency'. The latter concerns pension funds' investment returns, administration costs, etc. (Zamuee, 2015). Chybalski (2016) states that pension system efficiency consists of pension system adequacy and costs. Pension system adequacy refers to many of the system's functions: consumption smoothing, poverty alleviation, and replacement ratios (Grech, 2013). Pension fund efficiency is considered as the maximum return from invested assets. Witkowska and Kompa (2017) compare pension fund investments after the new regulation in Poland, concluding that savings will be higher when invested by the Open Pension Funds than under Social Insurance Institution indexation. In such a case we assume that a higher rate of return on average means a pension system advantage. Contrasting that with 'pension system efficiency', we would ask more questions, such as administration costs and macroeconomic effects, i.e., the real value of savings for retirees and the influence of compulsory savings on financial bubbles. A notable example can be found in Chovancova and Arendas (2015). They compared long-term passive investment on the money and stock markets. After adjusting for inflation, the rates of return differed between the analyzed countries (Germany, USA & Japan): in some countries the rate of return was higher, but not when adjusted for inflation. Nepp et al. (2018) look for pension reform optimization by effective retirement age and investment return from pension funds. In sum, in the pension fund efficiency approach, various elements of investment return are analyzed, but other macroeconomic issues are omitted. Čábelková and Strielkowski (2013) examined the welfare state concept and taxation as a product of culture. Barr and Diamond (2006) defined and summarized the pension system functions: consumption smoothing, insurance, poverty relief, redistribution and labor market incentives. The many possibilities for reforming a pension system (multi-pillar design, retirement age, notional solutions for different social groups) matter for assessing a pension system through a variety of indicators. Let us look at a simple example: some pension system is efficient in labour market incentives, but not in consumption smoothing. In such a case, income is consumed at a young age, so it is not saved for maturity. The idea of multidimensional efficiency is based on building appropriate indicators and cross-country analysis, so as to group pension systems by trait (Chybalski, 2016). Chybalski (2016) proposed four static dimensions of pension system efficiency: GDP distribution, adequacy, labor market and cost efficiency. The data were collected in the same manner (but private and public pensions were taken together), then standardized, and destimulants were changed to stimulants (the higher the value, the more efficient the pension system), as suggested in Chybalski (2012). The data interpretations are listed in Table 1.
To analyze whether the Nordic countries form a cluster, Principal Component Analysis (PCA) is used. The task is done for the years 2008 and 2013 to capture whether the Nordic countries changed their relative position to other European countries. As of 2017, data collected from the Eurostat and OECD databases are in many cases not available for recent years. Compromising between a sufficient number of observations and contemporaneity, I chose the 2013 data as up-to-date.
In Principal Component Analysis there are no ready criteria to test a solution (Tabachnick & Fidell, 2007, p. 607) or to decide how many components to use (Ledesma, 2015), so interpretation criteria are the most important; however, some attempts have been made. For example, the Kaiser-Meyer-Olkin (KMO) test measures sampling adequacy for the overall data set (Kaiser, 1974).
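A minimal Python sketch of this workflow, using scikit-learn, is shown below. It is a hypothetical reimplementation: the data frame, file name and indicator layout are assumptions for illustration, not artifacts of the study.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_loadings(df: pd.DataFrame, n_components: int = 2) -> pd.DataFrame:
    """PCA on a countries-by-indicators table (destimulants assumed already
    inverted into stimulants), returning normalized component loadings."""
    X = StandardScaler().fit_transform(df.values)   # standardize indicators
    pca = PCA(n_components=n_components).fit(X)
    # Normalized loadings: eigenvectors scaled by sqrt of explained variance.
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    return pd.DataFrame(loadings, index=df.columns,
                        columns=[f"PC{i + 1}" for i in range(n_components)])

# Usage (hypothetical file): run separately for 2008 and 2013, then compare
# where the Nordic countries fall in the space of component scores.
# loadings_2008 = pca_loadings(pd.read_csv("efficiency_2008.csv", index_col=0))
```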
EMPIRICAL RESULTS AND DISCUSSION
In Figure 1, component loadings for all variables are plotted; those concerning the labor market form a cluster and are also highly correlated with component 1. Variables reflecting consumption smoothing and poverty alleviation are highly correlated with component 2. We can interpret component 1 as pension system efficiency in terms of the labor market, and component 2 as consumption smoothing and poverty alleviation, respectively.
In both 2008 and 2013 (Table 2), the first component can be interpreted as labour market efficiency: higher employment just before and after retirement, and a higher average retirement age. The second component reflects pension system adequacy: consumption smoothing and poverty alleviation.

Source: Author's results.
CONCLUSION
The question "Is there a Nordic pension system regime?" is still open. Many answer no, especially those who look from the standard welfare state research perspective, for example at the different sizes of private and public schemes.
Around 2008, Sweden, Iceland and Norway undertook resolute reforms to keep workers in the labour market, introducing a flexible retirement age and other strong incentives to stay longer in the workforce. This, in consequence, makes these countries different from the rest of Europe.
Our analysis implies that the Nordic countries are not all in one cluster, but Norway, Sweden and Iceland have similar pension system efficiency. Before the 2008 crisis, they were very efficient in the labour market and compromised between poverty alleviation and consumption smoothing. In 2013, those three countries remained very strong in the labour market, but differed in poverty alleviation.
The applied method, Principal Component Analysis, has some limitations: there are few tests verifying significance, and its correctness rests mostly on interpretability. Future research on the interaction between the labour market and the pension system in the Nordic countries should therefore be undertaken.
Table 2. Principal component loadings (normalized). Source: Author's results.
Nrf2 participates in mechanisms for reducing the toxicity and enhancing the antitumour effect of Radix Tripterygium wilfordii to S180-bearing mice by herbal-processing technology
Abstract Context: Radix Tripterygium wilfordii Hook. f. (Celastraceae) (LGT) has outstanding curative efficacy; however, side effects include high toxicity, particularly hepatotoxicity and nephrotoxicity. Objective: To investigate detoxification mechanisms of LGT through processing separately with each of these medicinal herbs including Flower Lonicera japonica Thunb. (Caprifoliaceae) (JYH), Radix Paeonia lactiflora Pall. (Ranunculaceae) (BS), Herba Lysimachia christinae Hance (Primulaceae) (JQC), Radix et Rhizoma Glycyrrhiza uralensis Fisch. (Fabaceae) (GC) and Seed Phaseolus radiatus L. (Fabaceae) (LD) in S180-bearing mice by involving nuclear factor (erythroid-derived 2)-like 2 (Nrf2). Materials and methods: LGT raw and processed products were orally administered at 60 mg/kg to KM male mice inoculated with S180 tumour cells for 14 consecutive days, and blood, tumour, liver and kidney were taken to observe the detoxifying effects and biological mechanisms. Results: Herbal-processing technology significantly weakened hepatotoxicity and nephrotoxicity evoked by LGT with ED50 of the converted triptolide in each processed-herb product for serum alanine transaminase, aspartate transaminase, creatinine and urea nitrogen of 9.3, 16.6, 2.5 and 4.2 μg/kg, for liver glutathione, glutathione S-transferase, catalase, tumour necrosis factor-α and interleukin-10 of 114.9, 67.8, 134.1, 7.7, 4171.6 μg/kg, and for kidney 21.9, 20.5, 145.0, 529.7, 19.4 μg/kg, respectively. Moreover, herbal-processing technology promoted the accumulation of Nrf2 into the nucleus, and upregulated mRNA expression of Nrf2 and heme oxygenase-1. Additionally, herbal-processing technology enhanced the tumour inhibition rate with ED50 12.2 μg/kg. Discussion and conclusions: Herbal-processing technology improves the safety and effectiveness of LGT in cancer treatment, and future research may be focused on the Nrf2-related molecules.
Introduction
Processing ('Paozhi' in Chinese Pinyin), as an ancient and classic pharmaceutical technology of traditional Chinese medicine (TCM), is one of the characteristics and advantages of TCM (Wang et al. 2012a; Cai et al. 2017). According to TCM theory, proper processing can reduce the toxicity and change the curative efficacy of Chinese herbal medicines (CHMs) (Wang et al. 2012a; Cai et al. 2017). Processed-herb technology, one of the most common and traditional concoction techniques, uses some CHMs to concoct other CHMs in order to promote efficacy, attenuate toxicity, prevent bias or influence medicinal properties (Wang et al. 2012a; Cai et al. 2017); examples abound, such as 'the toxicity of Radix Aconitum carmichaelii Debx. (Ranunculaceae) (Chuanwu) is alleviated by processing with the medicinal material Apis cerana Fabricius (Apidae) (Fengmi)', and so on. However, the principle and essence of processed-herb technology remain almost unknown, which restricts its reasonable application. In recent years, research on the processed detoxification of toxic CHMs, such as Radix Euphorbia kansui T.N. Liou ex T.P. Wang, has been reported (Wu et al. 2012; Chen et al. 2013; Gong et al. 2013; Zhao et al. 2014; Cao et al. 2015b; Yun et al. 2015). Radix Tripterygium wilfordii Hook. f. (Celastraceae) (Leigongteng, LGT) is first recorded in one of the oldest books on the foundation of TCM, 'Shen Nong's Herbal Classic', which was published over two thousand years ago.
LGT possesses the common characteristics of a bitter/pungent flavour and a cold nature and has been used as a traditional oriental CHM for centuries to treat a variety of cancers (Wang et al. 2016a; Liu et al. 2009), diabetic nephropathy (Ge et al. 2013), rheumatoid arthritis (Bao and Dai 2011) and so on. However, it can also often cause multiple organ toxicities (Wang et al. 2012b, 2012c; Cao et al. 2015a; Li et al. 2015; Liu et al. 2017), hepatotoxicity and nephrotoxicity in particular, due to overdose or prolonged exposure in the clinic. A significant increase in serum alanine transaminase (ALT) and aspartate transaminase (AST) is usually indicative of hepatotoxicity, and increases in serum creatinine (Cr) and urea nitrogen (BUN) are usually indicative of nephrotoxicity.
At present, research on the processed-herb detoxification of LGT is rare. Only a few studies (Zhao et al. 2017) have focused mainly on the GC-processed detoxification of LGT in vivo and in vitro, and these studies did not elucidate the detoxification mechanisms. In this context, we tried to select Chinese medicinal herbs to concoct LGT under the guidance of TCM theory, thereby reducing the toxicity of LGT without reducing, or even while enhancing, its curative efficacy. In TCM theory, sweetness among the five flavours of Chinese medicine has the functions of relieving toxicity, relieving food poisoning and alleviating drastic drug properties. GC, Flower Lonicera japonica Thunb. (Caprifoliaceae) (Jinyinhua, JYH), Seed Phaseolus radiatus L. (Fabaceae) (Lvdou, LD), Herba Lysimachia christinae Hance (Primulaceae) (Jinqiancao, JQC) and Radix Paeonia lactiflora Pall. (Baishao, BS) have sweet properties among the five flavours of Chinese medicine. Among them, GC, LD and JYH are commonly used to alleviate or prevent poisoning from drugs or foods (Song 1991; Yang 2016; Fei et al. 2017), JQC is often administered to alleviate poisoning from LGT (Wang et al. 2016b; Liu et al. 2017), and BS is commonly used to antagonize LGT-induced toxicity (Li et al. 2009). In addition, studies have shown that JYH, BS, JQC, GC and LD, as well as the main active extracts and compounds they contain, have hepatoprotective and (or) kidney-protective effects (Sohn et al. 2003; Sun et al. 2008, 2010; Wang et al. 2012d; Liu et al. 2015; Yagmurca et al. 2015; Jung et al. 2016; Xin et al. 2016; Wang et al. 2016c; Ye et al. 2017; Xie et al. 2018). Therefore, considering all of this, we chose the above five medicinal herbs, JYH, BS, JQC, GC and LD, for processing LGT in order to evaluate its detoxification effects. Our previous study evaluated the chemical basis of the detoxification effects of processed-herb LGT and confirmed that processing could significantly reduce the contents of the main toxic components contained in LGT, triptolide (TP) and CEL, and that the total chemical score of 11 different characteristic components, including TP and CEL, was also significantly reduced. As mentioned earlier, the toxic target organs of LGT are mainly the liver and kidneys. Therefore, another of our studies also observed and confirmed the detoxification effects of LGT processed with medicinal herbs under physiological conditions. Further, considering that LGT can be used in many kinds of cancers (Liu et al. 2009; Cao et al. 2015a; Wang et al. 2016a), and that in the clinic the drug is administered to patients rather than healthy people, our intent was to observe the detoxification effects of LGT processed with medicinal herbs including JYH, GC, JQC, LD and BS under the pathological state of a tumour.
In addition, it has been reported that LGT-induced toxicity is related to oxidative stress and inflammatory damage (Wang et al. 2015), while JYH, BS, JQC, GC and LD have all been reported to have antioxidant and anti-inflammatory properties (Gu et al. 1988; Lee et al. 2005; He and Dai 2011; Wu et al. 2011; Chen et al. 2012; Guo et al. 2014; Luo et al. 2016; Tóth et al. 2016). Therefore, we speculated that these five herbs may detoxify LGT through antioxidant and anti-inflammatory processes. Moreover, considering that nuclear factor (erythroid-derived 2)-like 2 (Nrf2), a key antioxidant transcription factor, plays an important role in both antioxidant and anti-inflammatory processes, we investigated the detoxification mechanisms of LGT via processing with medicinal herbs on the basis of Nrf2-mediated antioxidant and anti-inflammatory defences.
Experimental animals
Considering that there is a gender difference in the toxicity of LGT, and that oestrogens in female animals may interfere with the effects of drugs, male mice were used as the research animals in this study. Kunming (KM) male mice (18-22 g) were obtained from the Experimental Animal Center of Henan Province (Zhengzhou, China). Animals were given rodent laboratory chow and water ad libitum and maintained under controlled conditions with a temperature of 22 ± 1 °C, relative humidity of 60% ± 10% and a 12-h light/dark cycle (lights on at 7:00 AM). All the procedures were in strict accordance with the P.R. China legislation on the use and care of laboratory animals and guidelines formulated by the Institute for Experimental Animals of Henan University of Chinese Medicine and were approved by the university committee for animal experiments.
Cell lines
Mouse S180 tumour cells were collected from S180 tumour-bearing mice which were purchased from the Institute of Chinese Materia Medica, China Academy of Chinese Medical Sciences (Beijing, China). Mouse S180 tumour cells were maintained in the peritoneal cavities of male KM mice in the Laboratory of Experimental Animals of Henan University of Chinese Medicine (Zhengzhou, China).
Plant material and preparations of LGT-processed products
LGT was obtained from Taining County of Fujian Province (Taining, China). JYH, BS, JQC and GC were all purchased from a traditional drug store, Henan Materia Medica Chain Co., Ltd. (Zhengzhou, Henan province, P.R. China). LD was purchased from Zhengzhou Wal-Mart Supermarket (Manhattan Store).
The method of concocting LGT is detailed in our recent study and is briefly described below. Separately weigh appropriate amounts (such as 10 g) of the five processing auxiliary materials (JYH, BS, JQC, GC and LD), place them in different stainless steel pots, add 20 times the amount of water (w/v = 1:20) for 0.5 h, and then boil twice, 15-20 min each time. Subsequently, the decoctions were combined and concentrated into processing auxiliary liquids, each with a concentration of 0.05 g/mL. Weigh several portions of LGT raw material (such as 60 g) separately, add the 0.05 g/mL processing auxiliary liquids (such as 200 mL each), place them in a stainless steel pot, cook them over a slow fire until the medicine is thoroughly saturated and the liquid has been absorbed, remove them and dry in a far-infrared constant-temperature oven (60 °C). Finally, we obtained the LGT-processed products, namely JYH-processed LGT (JYHLGT), BS-processed LGT (BSLGT), JQC-processed LGT (JQCLGT), GC-processed LGT (GCLGT) and LD-processed LGT (LDLGT). Note: for each 60 g of LGT decoction pieces, 10 g each of the auxiliary materials JYH, BS, JQC, GC and LD was used.
We further determined the differences in the chemical composition of LGT before and after processing by HPLC and combined principal component analysis and grey correlation analysis to evaluate the chemical basis of the detoxification effects of LGT. The contents of TP and CEL in LGT and its processed products JYHLGT, BSLGT, JQCLGT, GCLGT and LDLGT were determined by HPLC as 0.345, 0.038, 0.062, 0.030, 0.118 and 0.052 mg/g (for TP), and 6.399, 0.973, 0.652, 0.235, 0.362 and 0.834 mg/g (for CEL), respectively. In addition, we also found 11 different characteristic components (including TP and CEL) that changed before and after processing, reduced the dimensionality of these 11 features by principal component analysis, and calculated a single chemical score that can basically represent the composition of the components. The chemical scores of the processed products were reduced by 170.2, 88.6, 128.0, 59.4 and 153.8%, respectively, compared with the LGT raw product.
Animal experiment-related treatment protocol
Considering that ethyl acetate can dissolve the major bioactive compounds contained in LGT, such as TP and CEL, and that ethyl acetate is often used as an extraction solvent for LGT in a variety of studies (Tao et al. 2001; Bai et al. 2003; Tao et al. 2006), we selected ethyl acetate as the extraction solvent to prepare all the extracts by conventional reflux extraction. The preparation of the ethyl acetate extracts from the LGT raw product and its processed-herb products with JYH, BS, JQC, GC and LD was as follows. Ethyl acetate extracts (EAEs) from LGT and the processed-herb products, including JYHLGT, BSLGT, JQCLGT, GCLGT and LDLGT, were obtained by reflux extraction for 2 h each time, repeated three times. The ethyl acetate was recovered under reduced pressure to give a thick paste, and the thick paste was placed on a water bath at 85 °C (a temperature slightly above the boiling point of ethyl acetate) to drive off the odour of ethyl acetate, and then dried under vacuum at 60 °C to obtain a dry extract without ethyl acetate. The contents of TP and CEL in the extracts from LGT and its processed products JYHLGT, BSLGT, JQCLGT, GCLGT and LDLGT were determined by HPLC as 1.306, 0.609, 0.581, 0.366, 1.029 and 0.454 mg/g (for TP), and 27.060, 7.804, 5.378, 2.084, 3.574 and 5.766 mg/g (for CEL), respectively.
According to the literature method, S180 ascites tumour cells were inoculated into the right armpit of mice to prepare the S180 solid tumour model. One day after inoculation, mice, except for the normal (non-tumour-inoculated) animals, were randomly divided into eight groups of 10 mice each. The control (tumour-inoculated) groups of mice received daily oral administration of 0.5% (5 g/L) sodium carboxymethyl cellulose as a suspending agent (Meler and Wendt 1990) (CMC-Na; 0.2 mL/10 g). The other six groups received the LGT raw product and the processed-herb products (JYHLGT, BSLGT, JQCLGT, GCLGT and LDLGT) by intragastric administration (ig) for 14 d starting from 24 h after tumour inoculation. Our previous study had confirmed that the LGT raw product could cause toxicity in S180 tumour-bearing mice when administered at a dose of 60 mg/kg (equivalent to about 2 g/kg of LGT crude drug). Therefore, all doses of LGT raw and processed products in this study were set at 60 mg/kg, ensuring that all doses were comparable at the same level. After treatment, mice were sacrificed by cervical dislocation, and peripheral blood samples, livers, kidneys and tumours were collected at 24 h after the last administration. Serum samples were collected for the analysis of biochemical indicators including ALT, AST, Cr and BUN, and liver and kidney tissues were used for histological observation, analysis of GSH, TNF-α and IL-10 levels, and determination of glutathione-related and antioxidant enzymes. The tumours were weighed, arrayed in line on paper and photographed. The tumour inhibition rate was calculated by the formula IR = [(C − T)/C] × 100%, where C and T are the mean tumour weights of the control group (CMC-Na) and the treated group, respectively.
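For reference, the tumour inhibition rate formula can be written out directly as a small function; the tumour weights below are hypothetical values used only to illustrate the calculation, not data from this study.

```python
def tumour_inhibition_rate(control_weights, treated_weights):
    """IR = [(C - T) / C] * 100%, where C and T are the mean tumour weights
    of the control (CMC-Na) and treated groups, respectively."""
    c = sum(control_weights) / len(control_weights)
    t = sum(treated_weights) / len(treated_weights)
    return (c - t) / c * 100

# Hypothetical tumour weights (g) for illustration only.
control = [1.9, 2.1, 2.0, 1.8, 2.2]
treated = [1.5, 1.7, 1.6, 1.4, 1.8]
print(f"Inhibition rate: {tumour_inhibition_rate(control, treated):.1f}%")
```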
In addition, it is worth noting that, in order to reflect overall the attenuating and synergistic effects of the herbal-processing technology on LGT through the half-effective dose (ED50) value, we calculated the dose of TP according to the TP content in each processed product and obtained the ED50 value from the biological activity and the converted TP dose of each processed product.
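The conversion from the fixed 60 mg/kg extract dose to an equivalent TP dose is simple arithmetic, sketched below using the TP contents of the ethyl acetate extracts reported above. Whether the authors based the conversion on the extract or on the crude product, and how the dose-response curve was subsequently fitted to obtain the ED50, is not detailed here, so the sketch covers only the conversion step.

```python
# Hedged sketch: convert the fixed 60 mg/kg extract dose into an equivalent
# triptolide (TP) dose, using the TP contents of the ethyl acetate extracts
# (mg TP per g extract) reported above. Basing the conversion on the extract
# rather than the crude product is an assumption made for illustration.
tp_content_mg_per_g = {
    "LGT": 1.306, "JYHLGT": 0.609, "BSLGT": 0.581,
    "JQCLGT": 0.366, "GCLGT": 1.029, "LDLGT": 0.454,
}
extract_dose_mg_per_kg = 60

for product, content in tp_content_mg_per_g.items():
    # (60 mg extract/kg) x (content mg TP/g extract) / 1000 = mg TP/kg;
    # expressed in ug/kg this is numerically 60 * content.
    tp_dose_ug_per_kg = extract_dose_mg_per_kg * content
    print(f"{product}: converted TP dose ~ {tp_dose_ug_per_kg:.1f} ug/kg")
```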
Assay for serum ALT, AST, Cr and BUN
Blood samples were obtained from mice of all groups (ten mice per group) for the determination of serum biochemical biomarkers. Serum ALT, AST, Cr and BUN were assayed by the commercial kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) according to the manufacturer's protocols.
Histological observation
After fixation in 10% formalin, the livers and kidneys were examined for size, colour changes and haemorrhage. Slices of liver and kidney were cut into small pieces, and histological sections were stained with haematoxylin and eosin (H&E) for observation under light microscopy at 200× magnification.
Assay for levels of protein, antioxidants, IL-10 and TNF-α

Liver and kidney tissues were homogenized in cold physiological saline, and their total protein concentrations were measured by the commercial Bradford Assay Kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) according to the manufacturer's protocol. Hepatic and kidney tissue levels of the antioxidants GSH, GST, GPx, SOD and CAT and of the inflammatory mediators IL-10 and TNF-α were assayed by commercial kits (antioxidants: Nanjing Jiancheng Bioengineering Institute, Nanjing, China; inflammatory mediators: Boster Biological Technology, Wuhan, China) according to the manufacturers' protocols, and the results were all expressed relative to the tissue protein concentrations measured by the Bradford protein assay.
Western blot analysis
Liver and kidney proteins were separated as described in the Nuclear Extraction Reagent Kit (Pierce, USA). The protein concentrations were measured, and all the samples in the same experiment were normalized to the equal protein concentration. Protein samples were isolated by SDS-PAGE gel electrophoresis and transferred onto a PVDF membrane, and then incubated with the appropriate combination of primary and secondary antibodies, followed by ECL detection and quantification using an image analysis program. The grey densities of the protein bands were normalized by using Lamin B density as internal control, and the results were further normalized to normal control.
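Because the double normalization can be easy to misread, the arithmetic is spelled out in the short sketch below; the density values are hypothetical readings, not data from this study.

```python
def normalized_band_density(target, lamin_b, normal_target, normal_lamin_b):
    """Normalize a protein band first to the Lamin B loading control and then
    to the normal-control group, as described above. Inputs are hypothetical
    grey-density readings from the image analysis program."""
    sample_ratio = target / lamin_b
    normal_ratio = normal_target / normal_lamin_b
    return sample_ratio / normal_ratio

# Example: a treated sample with Nrf2 density 1800 and Lamin B 1500,
# against a normal control with Nrf2 900 and Lamin B 1400.
print(round(normalized_band_density(1800, 1500, 900, 1400), 2))  # ~1.87
```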
Statistical analysis
The results are presented as mean ± standard deviation (SD). The differences among experimental groups were compared by one-way ANOVA (analysis of variance), followed by the least significant difference (LSD) test when the variances were equal or by the Dunnett T3 test when the variances were not uniform, using the SPSS (Statistical Package for the Social Sciences) program, version 17.0. p < 0.05 indicated a statistically significant difference.
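A minimal sketch of this pipeline, a one-way ANOVA with a homogeneity-of-variance check to decide between LSD and Dunnett T3 post-hoc comparisons, might look as follows; the group values are hypothetical, and the post-hoc tests themselves would still need a dedicated implementation.

```python
import numpy as np
from scipy import stats

# Hypothetical serum ALT values (U/L) for three of the groups.
normal  = np.array([28, 31, 27, 30, 29, 32, 28, 30, 31, 29])
control = np.array([41, 44, 39, 46, 42, 45, 40, 43, 44, 41])
lgt_raw = np.array([55, 60, 57, 62, 58, 61, 56, 59, 63, 57])

groups = [normal, control, lgt_raw]

# One-way ANOVA across groups.
f_stat, p_anova = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Levene's test for homogeneity of variance decides the post-hoc procedure:
# equal variances -> LSD; unequal variances -> Dunnett T3 (as in the paper).
w_stat, p_levene = stats.levene(*groups)
post_hoc = "LSD" if p_levene >= 0.05 else "Dunnett T3"
print(f"Levene: p = {p_levene:.4f} -> use {post_hoc} for pairwise comparisons")
```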
Results
Processing with herbs reversed LGT-induced hepatotoxicity and nephrotoxicity

In this study, compared with the normal (Nor) group, inoculation of the S180 tumour in the control (Con) group significantly elevated serum ALT, AST, Cr and BUN levels (all p < 0.01) (Figure 1(A-D)), while administration of the LGT raw product (unprocessed) further raised the levels of the above indicators in S180 tumour-bearing mice, among which three indicators, ALT (58.4 U/L), Cr (44.8 μmol/L) and BUN (8.7 mmol/L), were statistically different (all p < 0.01) (Figure 1(A,C,D)), suggesting that treatment with the LGT raw product for 14 d evoked subacute hepatotoxicity and nephrotoxicity in S180 tumour-bearing mice. In contrast, treatment with the processed products of LGT (processed with JYH, BS, JQC, GC and LD, respectively) for 14 d effectively weakened LGT-induced hepatotoxicity and nephrotoxicity, with ED50 values, according to the converted TP in the above processed products, for serum ALT, AST, Cr and BUN of 9.3, 16.6, 2.5 and 4.2 μg/kg, respectively. Further, histological evaluation of the livers and kidneys removed from S180-bearing mice administered the LGT raw product demonstrated swelling-like degeneration and (or) inflammation of hepatocytes and nephrocytes (Figure 2), with black solid arrows indicating swelling-like degeneration of hepatocytes or nephrocytes and black dotted arrows indicating inflammation of nephrocytes. After treatment with the processed-herb products, including JYHLGT, BSLGT, JQCLGT, GCLGT and LDLGT, these abnormal changes conspicuously decreased or even disappeared (Figure 2). These results suggested that processing with the herbs JYH, BS, JQC, GC and LD all reversed the hepatotoxicity and nephrotoxicity induced by LGT in S180 tumour-bearing mice.
Processing with herbs reversed LGT-decreased antioxidant levels in liver and kidney
In this study, compared with the Nor group, inoculation of the S180 tumour in the Con group significantly reduced GSH and GST levels in mouse liver (p < 0.01 and p < 0.05, respectively) (Figure 3(A,B)) and kidney (both p < 0.01) (Figure 3(D,E)), without obvious effects on GPx (Figure 3(C,F)), SOD (Figure 4(A,C)) and CAT (Figure 4(B,D)) in either the liver or the kidney. Compared with the Con group, administration of the LGT raw product further significantly reduced the levels of GSH, GST, GPx, SOD and CAT in both liver and kidney of the S180-bearing mice (all p < 0.01) (Figures 3(A-F) and 4(A-D)). Compared with the LGT raw product group, processing with the herbs JYH, BS, JQC, GC and LD significantly reversed the excessively low levels of all the above indicators (all p < 0.01) induced by LGT in liver and kidney (Figures 3(A-F) and 4(A-D)) of S180 tumour-bearing mice. The findings shown in Figures 3 and 4 suggest that treatment with raw LGT decreases the amounts of antioxidants and antioxidant enzymes, which results in cellular damage, while treatment with processed LGT reverses the high toxicity associated with LGT treatment to some degree.
Processing with herbs reversed LGT-induced abnormal levels of TNF-α and IL-10 in liver and kidney
We next examined the effects of processed LGT treatment on TNF-α (an inflammatory cytokine) and IL-10 (an anti-inflammatory cytokine) levels in the liver and kidney of LGT-exposed S180 tumour-bearing mice, in an attempt to link anti-inflammatory reactions to the apparent detoxification effect of treatment with processed LGT. In this study, inoculation of the S180 tumour significantly increased the kidney pro-inflammatory cytokine TNF-α (p < 0.01) (Figure 5(C)) and reduced the kidney anti-inflammatory cytokine IL-10 (p < 0.05) (Figure 5(D)), without significant effects on liver TNF-α (Figure 5(A)) and IL-10 (Figure 5(B)) in mice. Administration of the LGT raw product significantly increased TNF-α levels and reduced IL-10 levels (all p < 0.01) in the liver and kidney of mice (Figure 5(A-D)), while treatment with its processed-herb products, including JYHLGT, BSLGT, JQCLGT, GCLGT and LDLGT, significantly reversed the excessively elevated TNF-α and decreased IL-10 levels (all p < 0.01) induced by LGT in S180 tumour-bearing mice (Figure 5). These results indicated that anti-inflammatory reactions could be involved in the detoxification effects of LGT processed with the herbs JYH, BS, JQC, GC and LD.
Processing with herbs upregulated the expression of Nrf2 and HO-1

In this study, inoculation of the S180 tumour had no significant effects on the protein expression of Nrf2 (Figure 6(A-C)) or on the mRNA expression of Nrf2 (Figure 6(D,E)) and HO-1 (Figure 6(F,G)) in liver and kidney of mice. Administration of raw LGT significantly downregulated the protein expression of Nrf2 (Figure 6(A,C)) (p < 0.05) in kidney of S180-bearing mice, without obvious effects on the protein expression of Nrf2 (Figure 6(A,B)) in liver or on the mRNA expression of Nrf2 and HO-1 in liver and kidney (Figure 6(D-G)). Processing with the herbs (JYH, BS, JQC, GC and LD) significantly upregulated the protein expression of Nrf2 (Figure 6(A,B)) (all p < 0.01), and the mRNA expression of Nrf2 (Figure 6(D)) (p < 0.01, p < 0.01, p < 0.01, p < 0.01 and p < 0.05, respectively) and HO-1 (Figure 6(F)) (p < 0.01, p < 0.01, p < 0.01, p < 0.01 and p < 0.05, respectively), in liver of S180-bearing mice compared with the LGT raw product. In addition, processing with the herbs JQC and JYH also significantly upregulated the protein expression of Nrf2 (Figure 6(A,C)) (p < 0.01 and p < 0.05, respectively), and the mRNA expression of Nrf2 (Figure 6(E)) (p < 0.01 and p < 0.05, respectively) and HO-1 (Figure 6(G)) (p < 0.01 and p < 0.05, respectively), in kidney of S180-bearing mice compared with the LGT raw product, while processing with the herbs BS, GC and LD had no significant effects on them (Figure 6(A,C,E,G)). These results suggested that Nrf2 could probably participate in the detoxification mechanisms of LGT processed with the medicinal herbs JYH, BS, JQC, GC and LD.

Figure 1. LGT processed with JYH, BS, JQC, GC and LD was used to treat LGT-exposed S180 tumour-bearing mice; the serum ALT (A), AST (B), Cr (C) and BUN (D) levels were subsequently examined. Data are presented as mean ± SD (n = 10).
Processing with herbs promoted LGT-produced antitumour activity
We further observed the effect of processing with herbs on the antitumour activity of LGT. The results showed that administration of the LGT raw product reduced the tumour weight with a 17.2% inhibition rate, while treatment with its processed-herb products, including JYHLGT, BSLGT, JQCLGT, GCLGT and LDLGT, further reduced the LGT-decreased tumour weight (Figure 7), with higher inhibition rates of 27.6, 29.2, 40.4, 20.9 and 35.2%, respectively. Among the above five processed products, JQCLGT significantly reduced the LGT-decreased tumour weight (p < 0.05) (Figure 7) and increased the tumour inhibition rate by 23.2 percentage points.
Discussion
Some potently toxic medicinal herbs, such as LGT (Liu et al. 2009; Bao and Dai 2011; Ge et al. 2013; Wang et al. 2016a), Strychni Semen (Maqianzi) (Guo et al. 2018), arsenic trioxide (Pishuang) (Song et al. 2018) and so on, have excellent efficacy in the treatment of difficult diseases. How to reduce the toxicity of such potently toxic yet highly effective herbs without reducing their curative effects is one of the major issues of academic interest.
As for LGT, it has excellent curative effects, including antitumour activity, but it is potently toxic, which severely limits its clinical application. To this end, under the guidance of TCM theory, we conducted research on the detoxification mechanisms of LGT by processing it with medicinal herbs with sweet properties (JYH, BS, JQC, GC and LD) and confirmed that processing with each of these herbs exhibits detoxification effects on LGT-induced hepatotoxicity and/or nephrotoxicity in S180 tumour-bearing mice, and that the mechanisms probably involve, at least in part, Nrf2-mediated antioxidant and anti-inflammatory processes.

Figure 2. Effects of the processed LGT on liver (A-H) and kidney (I-P) pathology by H&E × 200 in LGT-exposed S180 tumour-bearing mice. Black solid arrows indicate swelling-like degeneration of hepatocytes or nephrocytes, and black dotted arrows indicate inflammation of nephrocytes.
A significant increase in serum ALT and (or) AST is indicative of liver injury, and elevated serum Cr and (or) BUN are indicative of renal injury. A recent study showed that LGT induced hepatotoxicity and nephrotoxicity in normal mice, evidenced by marked elevation of serum ALT, AST, BUN and Cr levels. In this study, administration of the LGT raw product further significantly increased the levels of ALT, BUN and Cr in the serum of S180 tumour-bearing mice, while there was no statistically significant increase in the serum AST level. As to why there was no significant effect on the serum AST level in tumour-bearing mice after LGT administration, beyond the possibility that this is simply the case, we speculate that the cause may be related to the animals' strain, batch and individual differences, the dose and time of administration, and so on. Nonetheless, our results suggested that administration of the LGT raw product significantly aggravated the hepatotoxicity and nephrotoxicity of S180 tumour-bearing mice to some extent. Fortunately, the LGT-induced hepatotoxicity and nephrotoxicity in S180 tumour-bearing mice were reversed after treatment with the processed products prepared with JYH, BS, JQC, GC and LD, evidenced by the significant reduction of the above four serum biochemical markers and the improvement of liver and kidney pathological lesions.
In fact, studies have shown that JYH, BS, JQC, GC and LD, as well as the main active extracts and compounds they contain, have hepatoprotective and (or) kidney-protective effects (Sohn et al. 2003; Sun et al. 2008, 2010; Wang et al. 2012d; Liu et al. 2015; Yagmurca et al. 2015; Jung et al. 2016; Wang et al. 2016c; Xin et al. 2016; Xie et al. 2018; Ye et al. 2017). Therefore, the processed detoxification of the above five medicinal herbs against LGT-induced hepatotoxicity and nephrotoxicity is related not only to the TCM theoretical principles behind them, such as sweet and slow detoxification, antidoting poisons through the sweetness of the five flavours, mutual detoxification of the seven emotions, and so on, but may also be related to their bioactive hepatoprotective and (or) renoprotective properties. In addition, the TP content in the above five processed products decreased by 89.1, 81.9, 91.3, 65.7 and 85.0%, respectively, and the CEL content decreased by 84.8, 89.8, 96.3, 94.3 and 87.0%, respectively. These results suggested that the contents of TP and CEL after processing were obviously reduced, which could be another important factor in reducing toxicity. Moreover, both the TP and CEL contents in the JQC-processed product decreased more than in the other four processed products, which can probably explain, to some extent, why the JQC-processed product group had relatively lower serum ALT, AST, Cr and BUN levels than the other four processed groups in this study, and may further explain why JQC had a relatively better detoxification effect on the hepatotoxicity and nephrotoxicity caused by LGT in this study.

Figure 3. Effects of processed LGT treatment on GSH (A,D), GST (B,E) and GPx (C,F) levels in liver and kidney of LGT-exposed S180 tumour-bearing mice.
Considering that the toxicity caused by LGT is related to abnormal antioxidant capacity and inflammatory responses (Wang et al. 2015), and that at the same time these medicinal herbs (JYH, BS, JQC, GC and LD) have been reported to have antioxidant and anti-inflammatory properties (Gu et al. 1988; Lee et al. 2005; He and Dai 2011; Wu et al. 2011; Chen et al. 2012; Guo et al. 2014; Luo et al. 2016; Tóth et al. 2016), we next tried to explore the potential mechanism of detoxification by analyzing some antioxidant- and inflammation-related indicators. In this study, inoculation of the S180 tumour significantly reduced the levels of the antioxidants GSH and GST in the liver and kidney and, at the same time, caused excessive levels of the inflammatory mediator TNF-α and too low levels of the anti-inflammatory mediator IL-10. To a large extent, this suggested that inoculation of the tumour caused oxidative damage and/or inflammatory damage in the liver and kidney.
LGT raw product administration significantly reduced the levels of the antioxidants GSH, GST, GPx, SOD and CAT in the liver and kidney of S180 tumour-inoculated mice and, meanwhile, significantly increased the inflammatory mediator TNF-α level as well as decreasing the anti-inflammatory mediator IL-10 level. These results suggested that LGT caused or even aggravated oxidative damage and inflammatory damage in the liver and kidney of tumour-bearing mice. Fortunately, after administration of the LGT-processed products (JYHLGT, BSLGT, JQCLGT, GCLGT and LDLGT), compared with the LGT raw product group, the above abnormal indicators caused by LGT were all significantly reversed in the liver and kidney of tumour-bearing mice, suggesting that antioxidant and anti-inflammatory processes could probably be involved in the detoxification mechanism of LGT processed with the medicinal herbs JYH, BS, JQC, GC and LD. Actually, it has been reported that these five herbs have good antioxidant and anti-inflammatory properties (Gu et al. 1988; Lee et al. 2005; He and Dai 2011; Wu et al. 2011; Chen et al. 2012; Guo et al. 2014; Luo et al. 2016; Tóth et al. 2016). Therefore, to some extent, the detoxification effects of these five herbs on LGT via processing could be partially attributed to their own antioxidant and anti-inflammatory properties. Besides, the effects of these five processed products on these antioxidant- and inflammation-related indicators are generally the same, but to varying degrees.

Figure 4. Effects of processed LGT treatment on the primary antioxidant enzymes SOD (A,C) and CAT (B,D) in liver and kidney of LGT-exposed S180 tumour-bearing mice.
Moreover, considering that Nrf2, a key antioxidant transcription factor, plays an important role in both antioxidant and anti-inflammatory processes, we further explored the detoxification mechanism of LGT by analyses of Nrf2 protein and mRNA expression in the liver and kidney of tumour-bearing mice. In this study, processing with the medicinal herbs (JYH, BS, JQC, GC and LD) promoted Nrf2 nuclear accumulation in LGT-exposed tumour-bearing mice, evidenced by significantly upregulated Nrf2 protein levels and by increased mRNA levels of Nrf2 and its downstream target gene HO-1, suggesting that Nrf2 activation in the liver could probably participate in the above processed-herb detoxification mechanism of LGT. As for the expression analysis in the kidney, only JQC- and JYH-processed LGT promoted Nrf2 nuclear accumulation in tumour-bearing mice, evidenced by significantly upregulated Nrf2 protein levels and Nrf2 and HO-1 mRNA levels, suggesting that Nrf2 activation in the kidney could probably participate in the JQC- and JYH-processed detoxification mechanism of LGT. However, in this study, BS-, GC- and LD-processed LGT did not cause activation of Nrf2 in the kidney, suggesting that the detoxification of LGT by processing with these medicinal herbs (BS, GC and LD) may not be regulated by Nrf2 in the kidney. In addition, the lack of detection of cytosolic Nrf2 protein expression, as well as the lack of measurement of the protein expression of other related signalling molecules such as HO-1 in the Nrf2 signalling pathway, is one of the limitations of this study.
After confirming that processing with medicinal herbs can have a detoxifying effect on LGT, we further examined the effect of concocting on the antitumour activity of LGT. In this study, administration of the LGT raw product reduced the tumour weight with a 17.2% inhibition rate, while processing with the medicinal herbs (JYH, BS, JQC, GC and LD) further decreased the LGT-decreased tumour weight, concomitant with increases in the tumour inhibition rate of 10.4, 12, 23.2, 3.7 and 18 percentage points, respectively. That is to say, processing with the medicinal herbs (JYH, BS, JQC, GC and LD) not only produced a detoxification effect on LGT but also increased the antitumour activity of LGT, the synergistic effect of JQC-processed LGT being the strongest, followed by LD, BS, JYH and GC. Actually, as reported, JQC-contained quercetin and rutin, LD-contained peptides, JYH-contained luteolin and chlorogenic acid, GC-contained glycyrrhizin and isoangustone A, and BS-contained paeoniflorin all exert antitumour properties (Thirugnanam et al. 2008; Seon et al. 2012; Alonso-Castro et al. 2013; Chen et al. 2017; Yan et al. 2017; Li et al. 2018a, 2018b; Ouyang et al. 2018). Therefore, we speculate that the synergistic effect of LGT processed with the medicinal herbs (JQC, LD, JYH, GC and BS) may be attributed in part to the interaction of the antitumour active ingredients contained in these five herbs with the active ingredients in LGT. In addition, Nrf2-based antioxidant and anti-inflammatory defence processes may also contribute to their synergistic effects. However, the diameter of the tumour was not measured, so the evaluation of antitumour efficacy was not comprehensive enough; this is another limitation of our research. In addition, we did not detect tumour markers to explore the antitumour mechanisms, which may be a third limitation of our study.

Figure 5. Effects of processed LGT treatment on TNF-α (A,C) and IL-10 (B,D) levels in the liver and kidney of LGT-exposed S180 tumour-bearing mice.
Together, through processing with the medicinal herbs (JQC, LD, JYH, GC and BS), a detoxification effect on the hepatotoxicity and nephrotoxicity caused by LGT was demonstrated, and the mechanisms could be at least partly attributed to upregulation of Nrf2 and its downstream HO-1 signal, thereby enhancing antioxidant defences and inhibiting inflammation. In addition, the antitumour activity of each processed product of LGT increased to a different degree compared with the raw product, among which the JQC-processed product had the strongest synergistic effect. According to the results we have obtained, we can offer some tips or suggestions for the clinical use of LGT. First, clinically, the use of the LGT-processed products (JYHLGT, BSLGT, JQCLGT, GCLGT and LDLGT) could be considered to reduce its toxicity while its activity is not reduced, or is even enhanced, although this requires more evidence to support it. In addition, these processed-herb products could also be considered as a way of reducing the intake of LGT, thereby reducing the risk of poisoning without attenuating the curative effect, although this consideration also requires more supporting evidence. Finally, because of the strong toxicity of LGT, and although the toxicity of the LGT-processed products in the present study was reduced, safety risks of medication still remain.

Figure 6. Effects of processed LGT treatment on the protein expression of Nrf2 (A-C), and the mRNA expression of Nrf2 (D,E) and HO-1 (F,G), in liver and kidney of LGT-exposed S180-bearing mice.
Disclosure statement
There is no conflict of interest to disclose.
Funding
This work was financially supported by the Program for Science & Technology Innovation Talents in Universities of Henan Province (No. 16HASTIT032), and the Science and Technology Innovation Talent Fund of Henan Chinese Medicine (No. 2015XCXRC01).
PRIVATE SCHOOLS AND THE PUBLIC GOOD: THE EFFECT OF PRIVATE EDUCATION ON POLITICAL PARTICIPATION AND TOLERANCE IN THE TEXAS POLL
The development of the public school system was prompted in part by the fear that private education would not adequately socialize students to the values required to function in a democratic system. Private, especially Catholic, schools were thought not to be well suited to instilling norms of participation and tolerance in the waves of immigrants arriving from Ireland and Italy. Much of this anxiety was nothing more than thinly disguised anti-Catholicism and xenophobia. Nevertheless, the belief that public schools are better at imparting desired civic values persists despite conscious efforts on the part of Catholic and other private schools to provide a quality civic education and despite a lack of an empirical basis for this belief. It is still widely held that public goals in civic education are best served by public schools, while private schools operate for the benefit of parochial interests. In this paper, the researchers test these hypotheses by examining the effect of public and private education on political participation and tolerance. Specifically, survey data are drawn from a representative sample of Texas residents to determine whether type of education influences political attitudes and behaviors.
The development of the public school system was prompted in part by the fear that private education would not adequately socialize students to the values required to function in a democratic system. Private, especially Catholic, schools were thought not to be well suited to instilling norms of participation and tolerance in the waves of immigrants arriving from Ireland and Italy. Much of this anxiety was nothing more than thinly disguised anti-Catholicism and xenophobia. Nevertheless, the belief that public schools are better at imparting desired civic values persists despite conscious efforts on the part of Catholic and other private schools to provide a quality civic education and despite a lack of an empirical basis for this belief. It is still widely held that public goals in civic education are best served by public schools, while private schools operate for the benefit of parochial interests.
In this paper, the researchers test these hypotheses by examining the effect of public and private education on political participation and tolerance. Specifically, survey data are drawn from a representative sample of Texas residents to determine whether type of education influences political attitudes and behaviors. The Texas Poll, a state-wide survey administered annually to 1,000 Texas residents, contains a compendium of questions submitted by state agencies and academic researchers on a wide range of subjects. The survey results presented in this study were gathered in the fall of 1997. This analysis is limited to one state; therefore, caution should be exercised when interpreting the results. Nonetheless, the findings are generally consistent with other research done in this area (Coleman, Hoffer, & Kilgore, 1982a; Glenn, 1988; Greene, 1998). In comparison to students who attended public school for all 12 years of primary education, those who spent some time in private school demonstrate greater levels of political participation and political tolerance, even after controlling for other factors.
Interestingly, attending private school for all 12 years of education does not seem to produce the same effect as attending for only some of the time. This leads us to hypothesize that there is a substantive difference, one we have not controlled for, between families who choose to send their children exclusively to private schools and those who choose to switch their children from public to private school. Further research is necessary to explain these differences more fully. This evidence, therefore, suggests that promoting civic values is of central concern for private school educators who are often more successful than their public school counterparts. The attention that Catholic and other private schools have paid to serving public ends appears to have produced successful results.
PROMOTING DEMOCRATIC VALUES: PUBLIC VERSUS PRIVATE SCHOOLS
The notion that government should be concerned with education has existed in this country since Thomas Jefferson and Noah Webster (Glenn, 1988). It was Horace Mann, however, who first articulated the idea of the "common school," a public school where children of different cultural and ethnic backgrounds would be taught the basic political values and virtues of U.S. citizenship. Mann's common school was in part a response to the growing numbers of Irish and Italian immigrants. Fearing that the new immigrants' Catholicism would undermine their loyalty to this country and its social and political establishments, many embraced the common school as an important socializing agent, one necessary for inculcating proper allegiances and respect for the values of participation and tolerance. Writing in the impassioned tone of the time, Boston officials in 1850 stressed the consequences for the state should they fail to educate children of foreign-born parents: "...In our Schools they must receive moral and religious teaching, powerful enough if possible to keep them in the right path amid the moral darkness which is their daily and domestic walk.... Unless we can reclaim this population in their childhood by moral means, we must control them by force, or support them as paupers, at a maturer period of life." (Glenn, 1988, p. 84) Those who promoted the concept of the common school maintained that if loyalty to the state and acceptance of civic norms were paramount, then the government should naturally be responsible for operating the institutions of value transmission. This idea continues to flourish. As Secretary of Education Richard Riley has argued,
The "common school"-the concept upon which our public school system was built-teaches children important lessons about both the commonality and diversity of American culture.These lessons are conveyed not only through what is taught in the classroom, but by the very experience of attending school with a diverse mix of students.(1997, p. 1) What is significant in many of the arguments about the superiority of public schools for promoting democratic values is that the source of their advantage cannot be pinpointed with precision.In Riley s words, it is the "v^/T experience of attending school with a diverse mix of students" [italics added].
Implicit in Riley's statement is the suggestion that public schools are more integrated, thus offering a better replication of the polity and teaching, through exposure, the values of diversity and tolerance necessary for democratic life. While it is certainly true that for democracy to flourish in a heterogeneous society the integration of different groups of students is to be desired and promoted, there is very little evidence that public schools are successfully performing this function. In fact, research suggests that, nearly 50 years after the landmark Brown v. Board of Education decision, public schools are still highly segregated, and in fact increasingly so (Orfield, 1996).
Perhaps even more surprising, a growing body of research is finding that private schools might be better integrated and more egalitarian than their public school counterparts (Coleman et al., 1982a; Coleman, Hoffer, & Kilgore, 1982b; Greene, 1998; Greene & Mellow, 1998). Public school student bodies are drawn primarily from neighborhood attendance zones which typically replicate existing segregated housing patterns. Public schools, therefore, often simply mirror the distinct racial and class makeup of the area in which they are located. In their landmark comparison of public and private schools, Coleman, Hoffer, and Kilgore (1982b) describe the results of postwar segregated housing patterns by noting that: "This stratification has in effect produced a 'public' school system which not only no longer integrates the various segments of the population of students, but appears no more egalitarian than private education, and considerably less egalitarian in outcome than the major portion of the private sector in America - the Catholic schools." (p. 196) Since private school attendance is a function of voluntary association, housing is a constraint on attendance only to the extent that transportation becomes a difficulty. For this reason private schools are likely to be equally if not better able to bring together diverse groups of students for the purpose of learning the values of mutual tolerance and respect for difference.
Some have argued, however, that public schools are better conduits for promoting democratic values because they are democratically governed. From this perspective, public schools teach students through example the desirability and efficacy of democracy (Gutmann, 1987). Collective goals and interests are articulated through the democratic process which governs schools, with positive results. First, it ensures that schools adhere to the preferences and policies voiced by the majority and so most closely approximate the ideal of common interests in a democratic society. Second, it prepares students to embrace and practice democracy as they mature into tomorrow's citizens. Conversely, private schools are thought to promote only the narrow interests of their sponsoring group (Mann, 1957). Following this line of reasoning, and since these schools typically are not governed by democratic processes, the parochial interests imparted by the school go unchallenged. Moreover, because they govern according to authoritarian principles, private schools shut off even the demonstration of democracy, and this, it is feared, teaches students the values associated with authoritarian rule.
These arguments are theoretical and have yet to be supported by empirical evidence documenting the link between governance and students' political values. What evidence exists in this area suggests that the reverse may be true. Democratic governance has been shown to make public schools particularly unwieldy bureaucracies, a reality that stymies their teaching effectiveness (Chubb & Moe, 1990). It is possible that students who are exposed to the cumbersome nature of bureaucratic action are just as negatively impressed by this aspect of democratic politics as they may be positively impressed by the democracy that produced it.
Similarly, a growing amount of empirical research demonstrates that private schools may be better equipped to impart the values of democracy. For example, the ability to select the school that one's children will attend has been positively correlated with increased levels of parental involvement (Schneider, Teske, Marschall, Mintrom, & Roch, 1997). Since private schools are voluntary associations that tend to be much less encumbered by large bureaucracies, it is likely that they more closely represent a true polis, with active community and parental involvement in the life of the school. In this regard, just as the typically smaller, more autonomous structure of private schools aids in their teaching effectiveness, it may also facilitate an informal democratic process.
Finally, there is evidence to refute directly the concern that private schools promote only parochial interests. Catholic schools, which constitute the majority of all private schools, have been found to devote significant attention to the teaching of political values of inclusiveness, individual responsibility, and tolerance for the purposes of encouraging a just and harmonious democracy (Bryk, Lee, & Holland, 1993; Greeley, 1982), while public schools have recently been criticized for their failure to teach adequately the desired civic values (Final Report of the National Commission on Civic Renewal, 1998). It is reasonable to speculate that because private schools offer an alternative education to the public school system, they make extra efforts to impart the types of political values that public schools are expected to teach.
Clearly, the assumption that private schools cannot or will not promote the necessary public virtues is a matter of theoretical dispute that lacks empirical support. Nor is there evidence for the proposition that public schools successfully convey democratic values. Nonetheless, public schools have been widely extolled as "institutions where we learn what it means to be a public and start down the road to common national and civic identity" (Barber, 1997, p. 1). As this quote demonstrates, public schools are sometimes credited with a near-mythical ability to forge one from many, yet the mechanisms by which they are able to do so are not clearly indicated. The results of our empirical study contribute to this growing body of contradictory evidence about the advantages of public over private education at promoting civic values.
THE TEXAS POLL: VARIABLES IN THE ANALYSIS
The Texas Poll is an annual survey conducted by the University of Texas. In 1997, the year in which the data were collected, respondents were asked a host of demographic questions as well as a wide range of questions about their political knowledge, interests, attitudes, and behaviors. One thousand people from across the state of Texas participated in the survey. Table 1 presents descriptive statistics: the mean, standard deviation, minimum and maximum value, and N for all of the variables used.
The first dependent variable examined was political participation. Subjects were asked whether they had registered to vote. While this question did not indicate whether a respondent had participated directly in the democratic process (through voting in an election, working for a campaign, etc.), this was a good indicator of a general interest in participation. First, it is a necessary prerequisite to voting, and those who register to vote have indicated some interest and intention to participate in democratic governance. Second, it is a broadly inclusive indicator of a respondent's attitude about participation.
The second dependent variable considered was political tolerance. To construct this measure, the researchers combined survey responses from several different questions that are commonly used to measure this concept (Stouffer, 1955; Sullivan, Piereson, & Marcus, 1979). Respondents were first asked to identify their least liked group from a list that was provided. Groups included the Nazi party, the Ku Klux Klan, gay and lesbian groups, atheist organizations, and environmentalists. They were then asked whether their least liked group should be allowed to hold a public rally in their city. Answers ranged from 1 to 5, with a 1 indicating that they strongly disagreed and a 5 indicating that they strongly agreed that the group should be allowed to rally. Finally, respondents were read one of several statements designed to make them reconsider their original answer. For example, if they indicated that they felt the group should not be allowed to hold a rally, they were read the following statement: "Suppose someone said it would not be fair to allow some groups to demonstrate while denying the right to others. Would you still oppose the rally or would you change your opinion and support the rally being allowed to take place?" A similar type of statement designed to produce reconsideration was read to those originally in favor of allowing the group to hold a rally.
From these questions, we constructed a variable with a nine-point scale. On this scale, a 1 indicated that the respondent strongly disagreed that the group should be allowed to hold a rally and refused to change his or her mind, while a 9 indicated that the respondent strongly agreed that the group should be allowed to hold a rally and refused to change his or her mind. The more a person either is willing to be persuaded out of opposing the rally or is staunch in his or her continued support of the group's right to hold a rally, the more tolerant that person is considered.
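One plausible way to code such a nine-point scale is sketched below in Python; the exact rule for respondents who changed their minds is not spelled out here, so the mapping of the intermediate categories is an assumption, while the two anchor points (1 and 9) follow the description above.

```python
def tolerance_score(initial: int, changed_mind: bool) -> int:
    """Map the 5-point rally question plus the follow-up probe onto a 9-point
    tolerance scale: 1 = strongly opposed the rally and refused to change,
    9 = strongly supported the rally and refused to change.

    This is one plausible coding, not necessarily the authors' exact rule:
    the initial answer is stretched onto the odd points 1, 3, 5, 7, 9, and a
    respondent who yields to the counter-argument is shifted one point toward
    the middle of the scale.
    """
    base = 2 * initial - 1  # 1..5 -> 1, 3, 5, 7, 9
    if not changed_mind:
        return base
    if initial <= 2:        # opponent persuaded to allow the rally: more tolerant
        return base + 1
    if initial >= 4:        # supporter persuaded to oppose the rally: less tolerant
        return base - 1
    return base             # neutral respondents stay in the middle

# A staunch opponent scores 1; a persuaded opponent, 2; a staunch supporter, 9.
print(tolerance_score(1, changed_mind=False),
      tolerance_score(1, changed_mind=True),
      tolerance_score(5, changed_mind=False))
```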
The independent variables collected in the Texas Poll allow us to differentiate among three types of educational experience. Respondents who attended public school for all of their primary and secondary education were compared to two groups of private school attendees: those who attended only private school for their entire primary and secondary education and those who attended private school for part of their primary and secondary education.
Because it could be argued that the effect of private education on respondents' political values is attributable to other factors, we controlled for a variety of background characteristics that might be associated with private school attendance and also with the two dependent variables of political participation and tolerance. For example, we controlled for the respondent's gender and age. Gender is important because men and women do not necessarily have the same educational opportunities, nor do they necessarily have similar political behaviors. Because older people tend to be more politically active and have been out of school for a longer time, controlling for the possible effect of age is also important.
Similarly, because groups can have different educational opportunities and different political experiences, we controlled for respondents' race and ethnicity with dichotomized variables for African-American, Asian, Hispanic, and other ethnicity. In our analysis, White was the default category against which the other groups were compared. Place of residence affects people's access to private education, and it also shapes political influences. For this reason, we controlled for non-urban versus urban dwelling, and we also controlled for whether the respondent lived in one of the two major metropolitan areas in Texas, Dallas-Fort Worth and Houston.
We also controlled for respondents' family income and the number of years they had lived at their place of residence. Both are measures of socioeconomic status in that low-income families tend to move at higher rates than high-income families. These variables are potentially problematic in that higher income and lower mobility may partially be the product of private education if it is true that private schools tend to cause better educational outcomes (Chubb & Moe, 1990; Coleman et al., 1982a, 1982b; Greene, Peterson, & Du, 1998; Hoxby, 1998; Neal, 1997). Whether private schools do, in fact, tend to produce better educational outcomes and therefore higher socioeconomic status later in life, however, is a matter of dispute (Cookson, 1994; Levin, 1998; Smith & Meier, 1995). If private education does contribute to academic and then financial success, then controlling for these two variables may partially control for, and depress, the estimated effect of private education.
The final control variables used in analyses of both dependent variables were a series of dichotomous measures of religion; i.e., whether the respondent identified himself or herself as Catholic, Baptist, traditional Protestant (e.g., Episcopalian, Congregationalist), or other Protestant. These categories represented the primary religious affiliations of all respondents in the survey. Controlling for the effects of religious identification is important because people of different religious groups may have different attitudes about politics at the outset. Isolating the effect of the type of education from the attitudes that subjects' religious affiliation may have predisposed them to hold measures the results produced directly and independently by private schooling.
Standard political behavior models often include a number of additional controls for items such as respondents' ideological leaning or party identification. We chose not to control for these because, while they may influence tolerance and participation, they are also likely to be outcomes of educational experience. Controlling for these items would therefore potentially bias our results. At the same time, these items are not likely to have affected the type of education the respondent received as a child; since their effect does not predate the effect of the primary independent variable with which we are concerned, omitting these types of controls should not pose a problem. We believe that we have controlled for the most important factors which are related both to type of education (our primary independent variable) and to participation and tolerance (our dependent variables). Thus we can be reasonably confident of the estimated effects produced by our analyses.
While all of the independent variables were included in both analyses, two additional variables were used in the tolerance model. Respondents were asked to rate, on a scale of 0 to 100, how threatening they felt their least-liked group was, with a 0 representing not threatening and 100 representing maximally threatening. This variable is important because a respondent who is willing to let an opposed group hold a rally yet believes that the group is harmless is not necessarily exhibiting the same level of tolerance as someone who is willing to let an opposed group hold a rally and finds that group to be extremely threatening. (Whether we, as a society, desire that degree of tolerance is a different question.) The second variable added to the tolerance model is a feeling "thermometer" which measures how strongly respondents feel about their least liked group, again on a scale of 0 (maximally opposed) to 100 (maximally favorable). Similar to the threat variable, the thermometer is designed to capture intensity of feeling, which is important because people who feel neutrally about other groups may not have the same political response as people who feel strongly opposed to other groups.
RESULTS: POLITICAL PARTICIPATION
People who received some of their education in a private school setting are more likely to be registered to vote than those who received all of their primary and secondary education in public schools (see Table 2).Even controlling for other factors, the effect is significant.To illustrate the significance of this relationship, we generated predicted percentages that are registered to vote from our logit model of those who went to some private school and those who went only to public school.These predicted percentages are computed from the logit results by setting the value of all independent variables (besides the primary independent variables) to their means.
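A minimal sketch of how such predicted percentages are formed is shown below; the coefficients and covariate means are hypothetical placeholders rather than the estimates reported in Table 2, since only the mechanics of holding controls at their means and toggling the schooling indicator matter here.

```python
import numpy as np

def predicted_prob(intercept: float, coefs: np.ndarray, x: np.ndarray) -> float:
    """Logit prediction: p = 1 / (1 + exp(-(intercept + x.b)))."""
    return 1.0 / (1.0 + np.exp(-(intercept + coefs @ x)))

# Hypothetical placeholder coefficients (not the Table 2 estimates):
# order = [some-private-school dummy, age, family income, years at residence]
intercept = -0.5
coefs = np.array([0.9, 0.03, 0.15, 0.02])
control_means = np.array([45.0, 5.0, 10.0])   # hypothetical sample means of the controls

# Hold the controls at their means and toggle only the schooling indicator.
p_public = predicted_prob(intercept, coefs, np.concatenate(([0.0], control_means)))
p_private = predicted_prob(intercept, coefs, np.concatenate(([1.0], control_means)))
print(f"All public school:   {p_public:.1%} predicted to register")
print(f"Some private school: {p_private:.1%} predicted to register")
```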
When compared to all public school attendance, some private schooling increases the likelihood of participation by 9%.Specifically, we would expect 84.8% of those who attended only public schools to register to vote.In comparison, 93.8% of those who had some years of private schooling are expected to register to vote.Both voter registration percentages appear high, which suggests that there is some degree of over-reporting among respondents (Shaw, de la Garza, & Lee, 1998).However, there is no reason to believe that this over-reporting is systematically biased toward one type of education over the other, and thus it is safe to assume that the difference observed between those who attend public school and those who attend some private school still holds.In other words, even taking into account some degree of over-reporting, private schooling still has a positive and significant effect on political participation.
Interestingly, however, the beneficial effects do not hold for people who received all of their education in private schools. These people are not significantly different from those who spent all of their primary and secondary years in public schools. In fact, while it is not a significant difference, attending only private schools has a negative estimated effect on voting registration. Again, this effect is not significantly different from zero impact, and so the negative effect observed should not be interpreted as anything more than the result of chance. While additional research (including a larger sample size) is necessary to firmly establish the causal linkages, it is clear that there is some substantive factor that differentiates those who attend only private schools and those who attend a mix of public and private schools. We will return to this point in greater detail in the conclusion of this essay. The effects of the other independent variables were consistent with what we would generally expect. Older citizens are more likely to participate than younger citizens. People with higher socioeconomic backgrounds, those whose families had higher incomes, and those who have stayed in one residence for longer periods of time are more likely to be registered to vote. These effects are positive and significant and are completely consistent with general political behavior research, which finds a high degree of correlation between socioeconomic status and participation and between age and participation (Verba, Schlozman, & Brady, 1995). Also consistent with the literature in this area, Latinos and Asians are significantly less likely to register to vote when compared to White citizens. African-Americans are politically indistinguishable from Whites in this regard once income is controlled.
With regard to participation, there appear to be no differences between urbanites and non-urbanites.It is possible that some of these effects are absorbed by other variables such as race and income; Dallas and Houston (and Texas's other urban centers in general) tend to have greater concentrations of poor people and of African-Americans and Hispanics, and so controlling for these items may negate the independent effects of urbanicity.Gender also does not seem to matter in terms of voting registration.
Perhaps most interesting, religion appears to have no independent effect on participation.None of the categories in our analysis (Catholic, Baptist, traditional Protestant, and other Protestant) appear significantly different from each other or from our default category which includes non-religious and other religions (Jews, Muslims, etc.).This is important in that it suggests that the effect observed among those who attended some private school is not an artifact of their religious training.Catholic schools, for example, may encourage political participation beyond the level found in Catholics educated in public schools.
RESULTS: TOLERANCE
As with participation, attending some private school has a strongly positive effect on tolerance. In an Ordinary Least Squares (OLS) analysis of the nine-point tolerance scale, receiving some private school education increases tolerance by .604 when compared to attending only public school (see Table 3). This represents an increase of .23 standard deviations on the tolerance scale. To put the magnitude of this benefit in perspective, we would expect that someone who started out more tolerant than 50% of the population would become more tolerant than 59% of the population if they had attended some private school. The strength of this effect is all the more compelling given that a broad range of factors was controlled. However, as with participation, attending only private school does not seem to have a significantly different effect on tolerance than attending exclusively public school.
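The percentile interpretation follows from expressing the coefficient in standard-deviation units and applying the normal distribution; the implied scale standard deviation (about 2.6) is backed out from the reported figures rather than taken from the paper:

```python
from scipy.stats import norm

effect = 0.604                  # OLS effect of some private schooling on the 9-point scale
sd_tolerance = 0.604 / 0.23     # implied scale SD (~2.6), inferred from the reported numbers
effect_in_sd = effect / sd_tolerance

# Someone starting at the 50th percentile moves to the percentile of +0.23 SD under normality.
new_percentile = norm.cdf(effect_in_sd)
print(f"Effect in SD units: {effect_in_sd:.2f}")         # ~0.23
print(f"Expected new percentile: {new_percentile:.0%}")  # ~59%
```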
Only a few of our control variables appear to have a significant effect on tolerance.Higher income is associated with a greater degree of tolerance, as is being male.Other Protestant groups are also more likely to be tolerant than other religious and non-religious groups.Because the data do not specify which denominational affiliations fall into this category, it is difficult to interpret this result.Neither age nor years in residence has a significant effect on tolerance.
Finally, the thermometer and threat ratings are both significantly correlated with degree of tolerance. Not surprisingly, the more favorably (or the less threatening) respondents rate their least liked group, the more tolerant they are of that group.
CONCLUSION
The evidence presented suggests that private school education can contribute to the development of such key democratic values as participation and tolerance.Moreover, because we were able to control for a number of factors which are often associated with type of education and political behaviors or attitudes, we can be reasonably confident that the observed effects of private education are not spurious findings.Controlling for religion, for example, allows us to differentiate between the effect produced by schooling which may take place in a religiously affiliated school and that produced by religious affiliation in general.This is also true for the income and socioeconomic measures; by controlling for these items, we can be fairly confident that we are not misinterpreting the effect of any possible self-selection by higher income families into private education for that of private education itself.These findings fly in the face of many conventional attitudes about public school's superiority and add to the growing body of research on the beneficial effects of private schooling.What is not clear from our research, however, is which attributes of private schools are responsible for their greater degree of effectiveness in promoting desired political values.Moreover, this study applies only to the state of Texas and therefore raises questions of external validity.This shortcoming notwithstanding, it is reassuring that the relationships between schooling and political values identified are consistent with the results of research conducted on a nationwide basis and among specific subsets of the population (Greene, 1998;Greene, Peterson, & Du, 1998).Further research is necessary in order to extend the results to other areas of the country and to gain a more precise sense of the mechanisms by which private schools better promote democratic values.
More research is also necessary to understand what differentiates those who attend a mix of public and private schools from those who attend private schools exclusively.For now, the researchers can only hypothesize about what clearly is a significant substantive difference.It is likely that those whose families chose to send them to private school for all 12 years did so for clear and purposeful reasons.They may have an ideological opposition to the type of educational experience they believe that the government-run public schools provide, for example.An entire private school education may sometimes not be as much an indicator of the educational preferences of the family as an indication of the political rejection of the civic values of participation and tolerance generally attributed to public schooling.In this case, it is possible that families that choose exclusively private over public schooling, as a principle, are interested in exposing their children to a different value system-one at odds perhaps with what is traditionally assumed to be offered by government-run common schools.The irony is that while 12 years of private schooling may represent a rejection of socially desired political values, significantly better civic outcomes are not produced by 12 years of public education.
In contrast, those who went to private school for a number of years, but not exclusively, exhibit a greater commitment to democratic values than either their purely public or purely private school counterparts.One possible interpretation of this is that these individuals grew up in households that were generally in favor of the ideals associated with public schools, but spent some years in private schools in response to specific circumstances.This type of private school experience likely does not represent an outright rejection of the public school system and the political values typically associated with government-provided education.Rather, it may be an isolated response to a perceived problem.Whatever the circumstances prompting the change between public and private schools, the partial exposure to private schooling appears to be associated with promoting democratic values.
While additional research is necessary to understand more fully the differences observed between all and some private schooling, this paper has presented important new evidence to contradict the assumption that private schools are unconcerned with civic education.Along with a growing body of literature in this area, we have found that private schooling can significantly increase public commitment to such democratic values as political participation and tolerance.Understanding how private schools are able to be more effective in this than public schools is the next step.More importantly, however, it is time to stop assuming the superiority of public schools for promoting civic values and begin to test these assumptions with systematic empirical research.
MECHANISM AND TOXIC EFFECTS OF SOME HEAVY METALS ON HUMAN HEALTH
1. Associate Professor, Department of Chemistry, Meerut College, Meerut. 2. Lecturer, Department of Chemistry, Eicher School, Faridabad.
Manuscript History: Received: 30 June 2020; Final Accepted: 31 July 2020; Published: August 2020
Abstract:
One of the most pressing challenges posed by developing industries is the release of heavy metals into the environment. Although developed countries have made considerable efforts over the past decade to limit the release of these metals, the risk continues to rise because heavy metals can neither be destroyed nor be recycled easily. Moreover, new therapies for many carcinogenic diseases and neurogenetic disorders also require the use of heavy metals in one form or another. Because of their high toxicity, heavy metals such as arsenic, mercury, lead, chromium and aluminium are of primary concern for public health. The toxic effects caused by these metals depend mainly on the dose consumed, the route of exposure and the length of exposure, i.e., whether acute or chronic. Toxicity caused by these metals can lead to oxidative stress through the generation of free radicals and may result in serious health hazards. This review article discusses the sources of exposure, mechanisms of toxicity and health effects, along with some common diseases caused in humans by these heavy metals.
Introduction:-
Metals may be defined as substances that are good conductors of heat and electricity and possess metallic lustre. Metals also possess characteristic mechanical properties such as high tensile strength, malleability and ductility. The distribution of metals in the environment is controlled by the characteristic properties of each metal and by a number of environmental components (1). Metals may be broadly classified into light and heavy metals on the basis of density: light metals such as Li, K, Na and Ca have densities below 5 g/cm³, whereas heavy metals such as Sn, Pb, As and Hg have densities above 5 g/cm³. Among the known metals, Li (density 0.53 g/cm³) is the lightest and Os (density 22.5 g/cm³) is the heaviest.
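The light/heavy distinction above is simply a density threshold, which can be expressed directly; the densities below are approximate handbook values for the metals named in the text:

```python
# Approximate densities in g/cm^3; the 5 g/cm^3 cut-off follows the classification above.
densities = {
    "Li": 0.53, "Na": 0.97, "K": 0.86, "Ca": 1.55,            # light metals
    "Sn": 7.3, "Pb": 11.3, "As": 5.7, "Hg": 13.5, "Os": 22.5,  # heavy metals
}

heavy = {m: d for m, d in densities.items() if d > 5}
light = {m: d for m, d in densities.items() if d <= 5}
print("Heavy metals:", sorted(heavy))   # ['As', 'Hg', 'Os', 'Pb', 'Sn']
print("Light metals:", sorted(light))   # ['Ca', 'K', 'Li', 'Na']
```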
Heavy metals are naturally present in the earth's crust and are used for various industrial purposes. Heavy metals such as iron, lead, mercury and copper are indicators of human progress and are considered mainstays of all major civilizations. One cannot imagine travelling, computing or performing many important tasks of daily life without the support of these metals.
Table:- Classification of heavy metals based on toxicity (Source: U.S. Geological Survey 1133, 1995). The table groups metals including barium, boron, zinc, iron, germanium, actinium, zirconium, magnesium, gold, cadmium, tungsten, manganese, erbium, chromium, radium, lithium, gallium, hafnium, ruthenium, sodium, holmium, copper, thorium, rubidium, neodymium, indium, thallium, strontium, terbium, lead, titanium, potassium, thulium, mercury, silver, molybdenum, tin, nickel, polonium, ytterbium, platinum, samarium and palladium by toxicity class.
On the other hand, heavy metals are also environmental pollutants of major concern because, unlike most organic pollutants, they are neither biodegradable nor perishable. Among the 35 naturally occurring metals, 23 have a high specific density (more than 5 g/cm³) and an atomic weight greater than 40.04, and they possess similar physical and chemical properties (2,3). A few heavy metals have functional roles necessary for a number of physiological and biochemical activities of the human body when taken in adequate amounts. At the same time, these metals exert harmful effects when taken in large amounts, whereas others are deleterious to the human body even in small doses and may result in acute and lifelong toxicity. Of these 23 metals, lead (Pb), mercury (Hg) and cadmium (Cd) have no biological importance to the human body and are known for their extremely toxic effects. Other toxic metals are chromium, copper, manganese, nickel, tin and zinc.
Once absorbed, these heavy metals remain in the human body indefinitely and do not get degraded. After accumulating at high concentrations, they form complexes within various tissues and cells, thereby causing a number of diseases (4).
Sources of exposure and mechanism of toxicity of heavy metals
Toxic heavy metals are released into the environment through municipal and organic wastes; mining; the chemical, electrical and leather industries; smelting and refining of metals; burning of fossil fuels; vehicular exhaust; thermal power plants; and fertilizers and agricultural wastes. These metals can be transported from place to place by wind, depending on whether they are present in the gaseous phase or as particulate matter, and by erosion or acid rain to various locations in soils and water bodies. Heavy metals are present everywhere: in the air we breathe, the food we consume and the water we drink. Although these metals are present throughout the ecosystem, human exposure largely results from man-made activities. Unlike organic pollutants, heavy metals are indestructible poisons; even a small concentration can disrupt the body's normal metabolism. Exposure to heavy metals can also repress the immune system by increasing the toxicant load in the body (5).
Human beings are exposed to the toxic effects of heavy metals present in the environment when they inhale air adulterated with metal dust, smoke and small particles produced by combustion, consume contaminated food, or eat in polluted places without washing their hands. On intake, heavy metals become incorporated into body tissues such as bone, kidney, liver and brain, and they accumulate with half-lives of many years (6). The sources and mechanisms of toxicity of some of the most harmful heavy metals are discussed below.
Arsenic
Arsenic is a metalloid that does not exist in nature in free form; it is present in compounds with oxygen, chlorine, hydrogen, lead or mercury. The two soluble inorganic forms of arsenic are arsenite (trivalent state, As³⁺) and arsenate (pentavalent state, As⁵⁺), whose compounds are lethal to humans and other organisms; arsenite is about a hundred times more toxic than arsenate. Human exposure to arsenic occurs either through industrial waste or through drinking contaminated water (sources of contamination include pesticides, herbicides, fungicides and paints) (7). Absorbed arsenic accumulates at higher concentrations in the kidney, heart, liver and lungs, and at lower concentrations in muscles and neurons (8). Accumulation of arsenic in these organs causes many disorders, including diabetes, cancer, hepatotoxicity, cardiac dysfunction and neurotoxicity. According to the WHO, the permissible limit of arsenic in drinking water is 10 μg/L and the utmost permitted limit is 50 μg/L (9).
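Because toxicity is dose-dependent, a measured water concentration can be screened against the WHO figures quoted above with simple unit arithmetic; a minimal sketch (the sample concentration is hypothetical):

```python
WHO_AS_GUIDELINE_UG_L = 10.0   # WHO permissible limit for arsenic in drinking water (ug/L)
WHO_AS_MAX_UG_L = 50.0         # utmost permitted limit quoted above (ug/L)

def classify_arsenic(concentration_mg_per_l: float) -> str:
    """Compare a measured arsenic level (mg/L) against the WHO limits quoted in the text."""
    ug_per_l = concentration_mg_per_l * 1000.0   # convert mg/L to ug/L
    if ug_per_l <= WHO_AS_GUIDELINE_UG_L:
        return "within the WHO permissible limit"
    if ug_per_l <= WHO_AS_MAX_UG_L:
        return "above the permissible limit but below the maximum permitted limit"
    return "above the maximum permitted limit"

print(classify_arsenic(0.025))   # hypothetical sample: 25 ug/L -> above the permissible limit
```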
Mechanism of toxicity of arsenic:
Arsenic can interact with the sulfhydryl groups present in enzymes and proteins and can replace phosphorus (P) in many biochemical reactions (10). In vitro, arsenic reacts with sulfhydryl (R-SH) groups of proteins to deactivate enzymes such as dihydrolipoyl dehydrogenase and thiolase, thereby inhibiting the oxidation of pyruvate and the beta-oxidation of fatty acids (11). Arsenic is metabolized by methylation as well as by reduction. Methylation of harmful inorganic arsenic compounds is carried out by microbes such as algae, bacteria and fungi, as well as by humans, to form monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA) (12). Through biotransformation, inorganic arsenic species (iAs) are enzymatically converted into methylated arsenicals, which are considered the final metabolic products and serve as biological markers of acute arsenic exposure. Biomethylation is regarded as a detoxification mechanism, and its end products, such as MMA(V) and DMA(V), are excreted from the body through urine; this serves as a biological indicator of chronic arsenic exposure. However, the MMA(III) formed during this process is not excreted and persists inside the body as an intermediate. MMA(III) is extremely toxic and is responsible for arsenic-induced carcinogenesis (13).
Mercury
Mercury is present in hair dyes, cosmetics, dental amalgams and lighting; other sources include coal-fired plants and the chlor-alkali industry. Mercury is one of the most harmful metals and is the only metal that exists in the liquid state at room temperature. Its oxidation states include elemental mercury (Hg⁰), mercurous (monovalent, Hg⁺) and mercuric (divalent, Hg²⁺). Mercury mainly exists in three forms, namely the metallic element (Hg⁰), inorganic salts (mercurous and mercuric) and organic compounds (aryl mercury compounds, short-chain alkyl compounds and long-chain alkyl compounds), each of which has different toxicity and bioavailability. Every form of mercury is toxic; the forms differ only in the way they are absorbed and biotransformed into other states. Since mercury easily vaporizes at room temperature, the main route of absorption is often inhalation into the lungs. Humans can be exposed to mercury through anthropogenic activities such as mining, wastewater discharge, agriculture and industrial waste (14). Methylmercury is a neurotoxic compound that is responsible for microtubule destruction, mitochondrial damage, lipid peroxidation and the accumulation of neurotoxic molecules such as glutamate, serotonin and aspartate (15). The main target organ of mercury is the brain, yet it can damage any body part, breaking down muscles, nerves and kidneys as well. It can disrupt membrane potential and interfere with intracellular calcium homeostasis. Inorganic mercury salts are nephrotoxic (16). Mercury vapours can cause acute bronchitis, asthma and temporary respiratory problems. According to the WHO, the level of mercury in water should not exceed one microgram per litre.
Mechanism of toxicity of mercury:
Mercury ions may cause toxicity by precipitating proteins, inhibiting enzymes and exerting a specific corrosive activity. Mercury binds to numerous biological structures and blocks their activity. It has a high affinity for the sulfhydryl groups (-SH) of amino acids, proteins, enzymes, and sulfur-containing antioxidants such as N-acetylcysteine (NAC), α-lipoic acid (ALA) and glutathione (GSH). Glutathione is the most potent intracellular and mitochondrial antioxidant protecting against oxidative stress, inflammation and cardiovascular disease (17). Proteins (including enzymes) with phosphoryl (-PO₃²⁻), carboxyl (-COOH), amide (-CONH₂) and amine (-NH₂) groups are readily available and highly vulnerable to reaction with mercury compounds; once bound to mercury, most such proteins become inactive. Mercury can bind to metallothioneins, displacing zinc (Zn), copper (Cu) and other trace metals, and it competes for selenium, decreasing the efficacy of metalloenzymes. The mercury-selenium (Hg-Se) complex formed reduces the availability of selenium (Se) for the formation of glutathione peroxidase, the enzyme responsible for the breakdown of hydrogen peroxide (H₂O₂) and other toxicants. Elemental mercury vapour is highly lipid-soluble, which allows it to cross cell membranes very easily, and elemental mercury can also be readily oxidized to the mercuric state (Hg²⁺). Divalent mercuric salts are more soluble than monovalent mercurous salts; therefore, ingested mercuric compounds are absorbed more rapidly than mercurous compounds and cause greater toxic effects. Only about ten percent of an inorganic salt (whatever its oxidation state) is absorbed, compared with about ninety percent absorption of the organic forms via the gastrointestinal (GI) tract. The inorganic forms therefore remain readily available within the gastrointestinal tract to exert destructive effects on the gastrointestinal mucosa. Organomercurial compounds may be present as long-chain aryl mercury compounds or short-chain alkyl mercury compounds; of these, the short-chain alkyl compounds such as methylmercury are the more hazardous. These compounds are easily and almost completely absorbed from the gastrointestinal tract and then diffuse to the brain, kidneys, liver and other vital organs. Excretion mainly takes place in the faeces, and aryl mercury compounds are excreted in the form of mercuric ions.
Lead
Lead is widely used in paints as a pigment and also to enhance their consistency and durability. Its non-biodegradable nature makes lead persistent in the environment for a long time, and lead toxicity results in irreversible health hazards. Humans are mainly exposed to lead through cigarette smoke, drinking water, food, contaminated industrial waste and domestic sources. Industrial sources of lead include fuel, house paint, plumbing pipes, lead bullets, faucets, storage batteries, pewter pitchers and toys (18). Lead toxicity affects the central nervous, hematopoietic, hepatic and renal systems, resulting in serious disorders (19). According to the WHO, the acceptable limit of lead in drinking water is 10 micrograms per litre (µg/L). A person suffers from chronic lead toxicity when the blood lead level reaches about 40-60 µg/dL; chronic toxicity is indicated by continuous vomiting, lethargy, delirium, convulsions, encephalopathy and coma (20).
Mechanism of toxicity of lead :
One of the most significant mechanisms by which lead exerts its toxic effect is through biochemical processes that involve lead's ability to inhibit or mimic the actions of calcium and to interact with proteins (21). Lead can interfere with the normal functioning of biological molecules through a number of mechanisms by binding to them. It can bind to the sulfhydryl (-SH) and amide (-CONH₂) groups of enzymes, decreasing their activity by changing their composition and structure. Lead may also compete with essential metal cations for their binding sites, restraining enzyme activity or altering the transport of essential cations such as calcium (22). Living cells suffer lead toxicity through both an ionic mechanism and oxidative stress. Oxidative stress in living cells is chiefly the result of an imbalance between the generation of free radicals and the formation of antioxidants (under normal circumstances there is a balance between free radicals and antioxidants) that detoxify reactive intermediates or repair the resulting injury.
Fig:- Balance between free radicals and antioxidants; any deviation from it can cause oxidative stress leading to cell death.
Antioxidants such as glutathione, present in the cell, protect it from free radicals such as H₂O₂ and •OH. Under the influence of lead, the level of reactive oxygen species (ROS) increases and that of antioxidants decreases (23). Glutathione exists in both a reduced form (GSH) and an oxidized, disulfide form (GSSG). The reduced form donates reducing equivalents (H⁺ + e⁻) from the thiol (-SH) group of its cysteine residue to ROS in order to stabilize them. After donating the electron, GSH readily combines with another molecule of glutathione in the presence of the enzyme glutathione peroxidase (GPx) to form glutathione disulfide (GSSG). Under normal circumstances, GSH accounts for about ninety percent of the total glutathione content and GSSG for about ten percent; under oxidative stress, however, the concentration of GSSG exceeds that of GSH. An additional biological marker of oxidative stress is lipid peroxidation, because free radicals capture electrons from the lipid molecules in the cell membrane, ultimately causing lipid peroxidation (24). At very high concentrations, ROS may cause structural damage to cells, nucleic acids, proteins, cell membranes and lipids, resulting in a stressed condition at the cellular level (25).
Chromium
The heavy metal chromium generally exists in nature in the trivalent form; however, a certain proportion of the hexavalent form is also present. The hexavalent state is highly unstable and hence a powerful oxidizing agent, and it can therefore cause serious health effects. Chromium is widely used in chromium steel, fertilizers, electroplating, metallurgy, the paint and pigment industry, and the petroleum and paper industries. Industrial disposal, the use of fertilizers and other anthropogenic activities may release Cr into the atmosphere (26). Chromium does not cause air pollution directly, but it can be carried on suspended particulate matter (SPM) in the environment. In recent years, hexavalent chromium has been observed to be a main cause of environmental pollution (27). According to the WHO, the level of hexavalent chromium in potable water should not exceed 0.05 mg/L.
Mechanism of toxicity of chromium :
The cell membrane permeability of trivalent chromium is low, and it is therefore less harmful, whereas hexavalent chromium readily passes through the cell membrane via channels for isoelectric and isostructural anions such as SO₄²⁻ and HPO₄²⁻, and these chromates are also taken up through phagocytosis. Chromium can transform to the mobile hexavalent state if it is strongly heated with a corrosive oxidizing agent such as soda ash. Because of its strong oxidizing nature, Cr(VI) is easily reduced to the transitory species Cr(V) and Cr(IV), both of which differ markedly from Cr(III). Cr(V), a comparatively long-lived species, is stabilized by glutathione; consequently, the intracellular reduction of Cr(VI) is a detoxification process when it takes place away from the target site, but if it occurs near the target site it may activate chromium. Chemical reactions between Cr(VI) and biological reducing agents such as ascorbate and thiols lead to the production of reactive oxygen species (ROS) such as hydrogen peroxide (H₂O₂), superoxide ion (O₂⁻) and hydroxyl radical (•OH), finally causing oxidative stress in the cell and resulting in damage to deoxyribonucleic acid (DNA) and proteins (28). Hexavalent chromium has proved to be more deadly than the trivalent form because it enters cells more easily and is ultimately reduced to Cr(III). Since hexavalent chromium is mutagenic, it has been categorized by the International Agency for Research on Cancer (IARC) as a Group 1 human carcinogen.
Aluminium
Aluminium is the third most abundant element, and the most abundant metal, in the earth's crust (29). It is poorly absorbed following either oral intake or inhalation. In the environment, aluminium exists only in the tripositive (Al³⁺) oxidation state. The chief sources of aluminium exposure in humans are food additives, beverages, cosmetics, drinking water and aluminium-containing medicines such as buffered aspirin (buffered with magnesium carbonate, calcium carbonate and magnesium oxide) and antacids. Aluminium compounds are absorbed at a poor rate in humans. An excessive amount of aluminium in the body is indicated by common symptoms such as vomiting, skin rashes, nausea, diarrhoea, mouth ulcers and arthritic pain. Aluminium binds to various ligands present in the blood and is transported to every organ, with the highest concentrations found in lung tissue and bone. In healthy individuals, the aluminium level in bone tissue varies from 5 to 10 mg/kg, whereas the serum level ranges from 1 to 3 µg/L.
Mechanism of toxicity of aluminium:
Interaction between Al and the plasma membrane, as well as apoplastic and symplastic targets, may result in aluminium toxicity (30). In humans, Mg²⁺ and Fe³⁺ ions are replaced by Al³⁺, which may result in disturbances of intercellular communication, cell growth and secretory functions. The changes evoked in neurons by aluminium are similar to the degenerative injuries seen in Alzheimer's patients. Neurotoxic effects such as neuronal degeneration (neuronal atrophy) in the locus ceruleus (LC), striatum (primarily the dorsal striatum) and substantia nigra (SN) are complications associated with aluminium toxicity (31).
Some common diseases related to heavy metal exposure
Heavy metals are commonly present in our environment and in our diet. Many of them are necessary in small amounts to maintain good health and normal functioning of the body, but if they accumulate in the body at concentrations above the required level they may cause serious damage. The toxicity of metal ions is caused by their chemical reactivity with enzymes, proteins and cell membrane systems. Accumulation of heavy metals may impair the functioning of vital organs such as the liver, kidneys, lungs and brain. Long-term exposure to these metals may lead to progressive muscular and neurological degenerative processes and result in Parkinson's disease, muscular dystrophy and Alzheimer's disease, and some metals may cause cancer on long-term exposure (32). The accumulation of heavy metals in specific organs depends primarily on the route of exposure and the chemical state of the metal, such as its volatility, valency and lipid solubility.
Arthritis:
Long-term exposure to certain heavy metals such as Fe, Cu, Cd, Pb and Hg causes inflammation of the joints. Osteoarthritis (OA) is a common disease that affects bone and cartilage, and the metal implicated in osteoarthritis is lead (33), whereas rheumatoid arthritis (RA) is related to high blood copper levels. Cu levels in rheumatoid arthritis patients are found to be higher than those in healthy persons and in osteoarthritis patients (34).
Alzheimer's disease (AD):
Alzheimer's disease is a chronic neurodegenerative disease that slowly diminishes memory and thinking capacity and, eventually, the ability to accomplish the normal tasks of daily living, worsening over time. Heavy metals such as Co, Cd and Cu alter gene expression, reduce protein activity, affect signal transduction, generate ROS/RNS, alter cellular proliferation, differentiation and death, damage brain cells and cause DNA damage in brain tissue, leading to neurodegenerative diseases such as Parkinson's disease, amyotrophic lateral sclerosis (ALS) and Alzheimer's disease (35).
Schizophrenia:
Schizophrenia is a mental disorder associated with oxidative stress generated by decreased levels of antioxidant defence enzymes such as superoxide dismutase, catalase and glutathione peroxidase (36). Common features of schizophrenia include irritability and disorganized behaviour, perturbed and agitated thoughts, and oxidative stress, which may be due to changes in the levels of some trace metals. In schizophrenic patients, the levels of heavy metals such as lead, chromium and cadmium are increased, whereas the levels of nutritionally essential trace metals such as iron and selenium are reduced (37).
Epilepsy:
Epilepsy is one of the most prevalent non-communicable neurological conditions and a major cause of disability and mortality, affecting individuals of all ages (38). High rates of head injury and of infections and infestations of the central nervous system (CNS), such as neurocysticercosis, malaria and invasive bacterial infections, may be important causes. Autoimmune neurological disorders induced by mercury can result in epilepsy. One of the major factors responsible for epilepsy is deficiency of, or variation in, the levels of essential metals such as zinc (Zn), calcium (Ca) and magnesium (Mg).
Kidney disease:
The common heavy metals with toxic effects on the renal system are Pb, Hg and Cd (39). About fifty percent of the accumulated Cd is stored in the kidneys (40). Exposure to Cd is associated with chronic kidney disease (CKD), as it causes glomerular damage that results in a progressive decline in the glomerular filtration rate (GFR, the volume of fluid filtered by the glomeruli per minute), eventually causing end-stage renal failure, especially in adults with hypertension or diabetes. Zinc taken orally is essentially non-toxic; however, in large quantities it may cause systemic dysfunction that can impair growth and reproduction, and severe zinc toxicity can result in kidney failure (41).
Multiple sclerosis:
Overexposure to mercury (Hg) and lead (Pb) ions has been shown to be neurotoxic, especially to motor neurons (42). Low to medium levels of exposure to lead can result in increased hypersensitivity and can alter cytokine production, which increases the risk of inflammation-associated tissue damage (43). Research has shown that exposure to even a small amount of mercury in genetically sensitive animals stimulates autoimmune disease and disturbs cytokine production.
Arsenicosis:
Arsenicosis, also known as arsenic poisoning, is a chronic illness that results from drinking water with a high amount of arsenic over a long period of time (for example, five to twenty years) (44). It may result in various detrimental health effects, including cancers of the kidney, gallbladder and lungs, severe skin problems, skin cancer (45), diseases of the blood vessels of the legs, high blood pressure and reproductive system disorders.
Parkinson's disease:
Parkinson's disease is a progressive neurological disorder that severely affects movement. Common symptoms include tremor, slow movement, rigid muscles, loss of automatic movement and changes in speech. Certain neurons in the brain break down or die and consequently stop producing dopamine (a chemical messenger); the resulting decrease in dopamine levels causes abnormal brain function, which manifests as Parkinson's disease. Oxidative stress is one of the major contributors to Parkinson's disease pathogenesis (46), and oxidative stress generated by free iron can lead to neuronal damage and neurotoxicity. Several epidemiological studies have shown a connection between Parkinson's disease and exposure to heavy metals such as mercury (Hg), manganese (Mn), copper (Cu), lead (Pb), iron (Fe), aluminium (Al), thallium (Tl), zinc (Zn) and bismuth (Bi) (47). Occupational exposure to metals such as iron, aluminium and manganese has been found to double the risk of Parkinson's disease (48).
Conclusion:-
In this article we have reviewed the mechanisms and toxic effects of overexposure to some common heavy metals, namely arsenic, mercury, lead, chromium and aluminium, on human beings. These metals are naturally present in the earth's crust, mainly as sulphides and oxides, and because of their many uses in day-to-day life they are released into the environment through several anthropogenic activities. Overexposure to these heavy metals may result in serious health consequences, including neurological disorders, liver and kidney damage, immunological disorders, endocrine disruption and various types of cancer. Because of their toxic nature and possible bioaccumulation in various body parts, the release of these metals by industrial activities should be mandatorily monitored. Effective strategies are needed at the national and international levels to identify areas with high levels of heavy metal pollution. More modern engineering techniques may also be used to avoid occupational exposure to these metals. Failure to manage exposure to these metals will lead to severe health consequences in the future.
World Journal of Gastroenterology
Common bile duct stones are among the most common conditions encountered by endoscopists. The topic is therefore well researched; however, some items, such as the indications for endoscopic papillary balloon dilatation (EPBD), the safety of EPBD and endoscopic sphincterotomy in patients receiving dual antiplatelet therapy or direct oral anticoagulants, and the selection strategy for retrieval balloons and baskets, lack adequate evidence. Accordingly, some guideline recommendations have been updated with new research, while others remain unchanged because of weak evidence. In this review, we comprehensively summarize the standard methods in the guidelines and new findings from recent studies on papillary dilation, stone retrieval devices, difficult-to-treat cases, troubleshooting during the procedure, and complicated cases of cholangitis, cholecystolithiasis, or distal biliary stricture.
INTRODUCTION
Cholangitis is the second or third most common cause of community-acquired bacteremia, and common bile duct (CBD) stones are its most common cause [1,2]. Recurrence of CBD stones is also common: 111 (11.3%) of 983 patients who underwent endoscopic sphincterotomy (EST) experienced recurrence during a median follow-up of 7.5 years, and the cumulative recurrence rates at 5, 10, 15, and 20 years were 8.5%, 12.5%, 19.1%, and 24.2%, respectively [3]. The condition is frequently encountered by endoscopists, and it is important to improve short-term outcomes and prevent the long-term recurrence of cholelithiasis. This review focuses on small CBD stones. Although an international definition of small CBD stones has not been established, we have followed the standard of approximately 10 mm used in some studies [4,5]. We describe papillary dilation, stone extraction, difficult cases, troubleshooting during stone extraction for small CBD stones, and complicated cases of cholangitis, cholecystolithiasis, or distal biliary stricture, and summarize the European, American, and Japanese guidelines. Moreover, this review addresses the recent literature on endoscopic papillary balloon dilatation (EPBD) dilation times to prevent post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP) [6], the duration of direct oral anticoagulant (DOAC) and dual antiplatelet therapy (DAPT) withdrawal to safely perform EST [7,8], EST with balloon dilation (ESBD), and the comparison of retrieval balloon and basket catheters for small CBD stone extraction [9,10].
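As a rough way to read the cumulative recurrence figures, one can back out an approximate constant annual recurrence hazard using h ≈ -ln(1 - F(t)) / t; this constant-hazard (exponential) assumption is ours, purely for illustration, and is not part of the cited study.

```python
import math

# Cumulative recurrence of CBD stones after EST reported above: F(t) at 5, 10, 15, 20 years.
cumulative = {5: 0.085, 10: 0.125, 15: 0.191, 20: 0.242}

for years, f in cumulative.items():
    # Under a constant-hazard assumption, F(t) = 1 - exp(-h * t), so h = -ln(1 - F(t)) / t.
    hazard = -math.log(1.0 - f) / years
    print(f"{years:>2} yr: cumulative {f:.1%} -> ~{hazard:.2%} recurrence per year")
```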
COMPARISON OF EPBD AND EST
Papillary dilation is divided into EST and EPBD. A nationwide administrative database of 61000 hospitalized patients with CBD stones throughout Japan reported that EST was performed in 89% of patients and EPBD in 11% [11]. Knowledge of the success rate of CBD stone removal and the incidence of short- and long-term complications is important when deciding between EST and EPBD.
Success rates of CBD stone clearance
A meta-analysis reported that EPBD achieves a lower rate of total clearance of CBD stones and requires more frequent use of a lithotripsy basket than EST [12]. However, 11 of the 14 references in this study included cases of CBD stones larger than 10 mm. Conversely, another meta-analysis by Liu et al [13] found no significant differences in total clearance of CBD stones.
Yu et al [6] reported that both EST and EPBD are clearly effective in the treatment of bile duct stones of small diameter (< 10 mm) and small number (< 3). The EPBD balloons used in that study were mostly 8 and 10 mm in diameter, most often 8 mm. Because a typical papillary dilation balloon is 8 mm in diameter, the indication for EPBD may be CBD stones up to 10 mm in diameter, considering the flexibility of the papilla. However, even for CBD stones > 10 mm, EPBD combined with endoscopic mechanical lithotripsy may achieve a stone retrieval success rate comparable to that of EST [13]. Therefore, EPBD may be useful in cases of coagulopathy in which CBD stones are larger than 10 mm.
There is a lack of evidence regarding the possibility of extracting very small stones without EST or EPBD. It has been reported that if extracorporeal shock wave lithotripsy (ESWL) reduces stone fragments to 3 mm or less, the fragments are likely to be discharged spontaneously without EST. Therefore, stone extraction may be possible without EST or EPBD when the stone is smaller than approximately 3 mm [14]; however, no studies have directly examined this issue. Therefore, in principle, EST or EPBD is recommended for the extraction of CBD stones, as advised by the European Society of Gastrointestinal Endoscopy (ESGE) and Japan Gastroenterological Endoscopy Society (JGES) guidelines; nevertheless, it is at the endoscopist's discretion whether to extract very small stones without these procedures [15,16].
Incidence of short-term complications
Compared with EST, EPBD is associated with increased post-ERCP pancreatitis and decreased bleeding, with no significant difference in perforation or post-ERCP cholangitis. PEP and hemorrhage occur in approximately 10% and less than 0.1% of patients in the EPBD group, respectively, and in approximately 3% and 3% of patients in the EST group, respectively [12]. Although patient backgrounds vary across the studies pooled in the meta-analysis, the results are consistent with those of a previous report [17].
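To make the trade-off concrete, the approximate rates quoted above can be scaled to a hypothetical cohort; this is purely illustrative arithmetic, not additional data from the meta-analysis.

```python
# Approximate per-procedure complication rates quoted above (meta-analysis figures).
rates = {
    "EPBD": {"PEP": 0.10, "bleeding": 0.001},   # ~10% PEP, <0.1% bleeding
    "EST":  {"PEP": 0.03, "bleeding": 0.03},    # ~3% PEP, ~3% bleeding
}

cohort = 1000  # hypothetical number of procedures
for method, r in rates.items():
    pep = r["PEP"] * cohort
    bleed = r["bleeding"] * cohort
    print(f"{method}: ~{pep:.0f} PEP and ~{bleed:.0f} bleeding events per {cohort} procedures")
```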
PEP: PEP is a potential short-term complication when selecting EST/EPBD. ESGE describes the following risk factors for PEP. Patient-related definite risk factors include suspected sphincter of Oddi dysfunction, female sex, previous pancreatitis, and previous PEP. Procedure-related definite risk factors include difficult cannulation, more than one pancreatic guidewire passage, and pancreatic injection. Patient-related risk factors also include younger age, non-dilated extrahepatic bile duct, normal serum bilirubin, absence of chronic pancreatitis, and end-stage renal disease, while procedure-related risk factors also include precut sphincterotomy, pancreatic sphincterotomy, failure to clear bile duct stones, intraductal ultrasound, and biliary balloon sphincter dilation [18]. ESGE particularly recommends prophylactic pancreatic stenting in selected patients at high risk for PEP (inadvertent guidewire insertion/opacification of the pancreatic duct and double-guidewire cannulation).
In a multicenter randomized controlled study, 117 patients with bile duct stones were treated with EPBD; after treatment, the incidence of pancreatitis reached 15.4%, and two patients died from post-treatment complications [19]. Incomplete dilation of the papilla, intramucosal bleeding, and local edema were considered the main causes of PEP due to EPBD. Conversely, several randomized controlled trials and network meta-analyses have suggested that there is no direct association between EPBD and PEP risk [20,21] and that PEP after EPBD is usually mild or moderate [12]. Recently, a network meta-analysis reported that EPBD of 2 to 5 min could decrease the incidence of PEP compared with short-term (< 2 min) EPBD. It was also reported that extending the duration of balloon dilatation reduced PEP without increasing the occurrence of other early complications [6]. However, the underlying mechanism for this result remains unclear. A possible reason is that dilatation with a small-diameter balloon or of short duration results in inadequate papilla expansion, so that the common discharge channel for bile and pancreatic juice remains narrow after the procedure [6]. That study did not examine EPBD longer than 5 min; however, another study found that 5-min EPBD increases PEP compared with EPBD of 0.5-3 min [22]. Although the latter was a study of EPBD combined with small-incision EST, it may be advisable to avoid EPBD for more than 5 min [22]. Therefore, we use a 2-3 min EPBD.
In recent years, diclofenac or diclofenac and sublingual nitrates have been reported to be useful for the prevention of PEP [23,24].ESGE also recommends routine rectal administration of 100 mg of diclofenac or indomethacin immediately before ERCP in all patients without contraindications to nonsteroidal anti-inflammatory drug administration [18].These methods were not available in 2004 when EPBD was abandoned by many endoscopists, especially in America, and combining such methods may reduce the incidence of PEP due to EPBD.Furthermore, EPBD may be even safer in Asians, as some race-based studies have shown no increase in PEP in Asian populations [12].
Bleeding: ESGE guidelines suggest that patients should be considered at increased risk of post-EST bleeding if at least one of the following factors is present: anticoagulant intake, platelet count < 50,000/mm³, cirrhosis, dialysis for end-stage renal disease, intraprocedural bleeding, and low endoscopist experience [18].
The ESGE, JGES, and American Society of Gastrointestinal Endoscopy (ASGE) guidelines treat antiplatelet medications almost identically for EST/EPBD. DAPT is permitted for EPBD without drug withdrawal, whereas EST requires DAPT withdrawal. Withdrawal regimens are similar across guidelines, with thienopyridines requiring 5-7 days of withdrawal and continuation of aspirin or cilostazol monotherapy [18,25,26]. However, each guideline handles anticoagulants in a slightly different and more complex manner. Although the risks of embolism and procedural bleeding must be weighed when antithrombotic agents are stopped for EPBD, warfarin can be continued if the PT-INR is within the therapeutic range. For EST, warfarin can be continued in Japan and America as long as the PT-INR is within the therapeutic range. However, in Europe and America, it is recommended that warfarin be discontinued 5 d before EST and replaced with heparin 2 d before EST, especially in patients at high risk of embolism from aortic or mitral valve replacement, atrial fibrillation, or any other thromboembolic risk. Once hemostasis is confirmed, antithrombotic agents must be restarted immediately after the procedure in America, the next day in Japan, and within 2 d in Europe. Warfarin should be resumed after the procedure, with heparin used in combination until the PT-INR returns to the therapeutic range [18,25,26]. It is difficult to summarize each country's guidelines accurately and concisely; therefore, please refer to each guideline for details. In addition, for DAPT and DOACs, there is a paucity of evidence that guideline-directed withdrawal periods prevent bleeding [7,8,25,27].
With regard to hemorrhage, Mirjalili and Stringer [28] identified 98 arteries near the major papilla and reported the blood vessel distribution seen on endoscopy. According to their report, blood vessel distribution in the 10 to 11 o'clock region was low, at 10%-11%; thus, cutting in this region carries a low risk of hemorrhage. The ESGE and Japanese EST guidelines have cited this article [16,18]. No trials have compared hemorrhage and perforation according to cutting direction; however, together with reports that bile ducts tend to run in the 11 to 12 o'clock direction in the papillary region, cutting in the 11 to 12 o'clock direction is considered safe and is thus recommended by the Japanese EST guidelines [16].
Others:
The superior sphincter extends to the bile duct on the lateral wall of the duodenum, and cutting beyond this area increases the risk of perforation. In relation to the papilla, the superior margin of the papillary bulge is believed to coincide with the middle sphincter, which is considered the upper cutting limit (Figure 1). However, findings from anatomical specimens may not necessarily correspond to the living body, and depending on the cutting direction, perforation can occur even if the superior margin of the papillary bulge is not reached; thus, due care should be exercised [16]. Moreover, there is no evidence comparing incision size with the incidence of procedural adverse events or with therapeutic outcomes following EST [16].
Short-term cholecystitis after ERCP may be related to resistance to the initial antibiotics administered on admission [29], whereas the incidence of long-term cholecystitis and of recurrent CBD stones may be lower with EPBD than with EST [6,12]. EST causes significant damage to the sphincter of Oddi, and post-EST sphincter dysfunction occurs easily [30]. The resulting reflux of intestinal contents such as digestive juices, food residue, and bacteria may then increase the risk of biliary tract infection and stone recurrence [31,32].
To summarize the characteristics of EST and EPBD (Table 1), EST is superior in terms of PEP reduction and retrieval of large bile duct stones, while EPBD is superior in terms of reduced bleeding, less long-term cholecystitis, and less bile duct stone recurrence. Based on these findings, we consider EPBD in cases of small bile duct stones, bleeding tendency, or young age, and even in surgically altered anatomy in which EST is difficult.
ESBD: Ding et al [4] defined the tunnel from the distal bile duct to the papillary orifice as the stone extraction tunnel (SET). Based on the anatomical structure, the tunnel was divided into two segments: the distal bile duct and the intramural portion of the sphincter of Oddi comprise the proximal segment, including the proximal ring, and the intraduodenal portion of the papilla comprises the distal segment, including the distal ring around the orifice. Conventional EST cuts the distal segment almost completely from the orifice to near the duodenal wall, EPBD extends the entire SET, and EST + EPBD (ESBD) shortens the SET by cutting the distal ring and extends the proximal ring. Therefore, this combination technique is anatomically suited to achieving a wide opening of the SET [4]. In their study, ESBD was reported to reduce the number of treatments for complete stone removal, procedure time, use of mechanical lithotripters, and bleeding rate, while the incidence of PEP was comparable to that of EST. It has been reported that a small incision did not increase the risk of bleeding compared with non-EST, which might be attributed to a lower chance of injury to the major vessel in the papillary roof [20]. ESBD limits EST to a small incision, which may explain the reduced bleeding after ERCP. In a network meta-analysis, ESBD tended to be superior to EST in terms of successful stone removal in the first endoscopic session, the need for mechanical lithotripsy, and the risk of bleeding or perforation; however, none of these variables reached statistical significance [20]. Thus, ESBD may be superior to EST in overall efficacy and in short- and long-term complications, and ESBD may come to be recommended over EST in the future; at present, however, the evidence is insufficient. Therefore, to justify updating the current guidelines, researchers will require more evidence that ESBD is superior to EST in terms of overall efficacy [20] and that ESBD may reduce the long-term recurrence rate of bile duct stones [33]. At this time, it is up to each endoscopist to decide whether to perform ESBD or EST.
COMPARISON BETWEEN BALLOON AND BASKET CATHETER
A recent meta-analysis found that balloon catheters were superior to basket catheters for complete removal of bile duct stones [9]. However, the studies included in this meta-analysis have some limitations. Three of the four included studies concerned small stones (≤ 10-11 mm), and three of these articles used a four-wire retrieval basket catheter. Four-wire retrieval basket catheters are less suited to retrieving small stones than eight-wire retrieval basket catheters and retrieval balloon catheters. Therefore, we cannot conclude that the basket catheter is inferior to the balloon catheter for small CBD stones [5,[34][35][36]. Another meta-analysis included only these three studies, but its conclusions were similar to those of the previous meta-analysis [10]. Ozawa et al [5] reported that small stones (maximum diameter, 6 mm) are an independent risk factor for failed stone removal; in their study, the basket failed to grasp a small stone in eight cases, and in four of these the stones were successfully removed after an exchange for a balloon catheter. Therefore, they suggested that a retrieval balloon catheter may be more appropriate than a basket catheter for removing small stones [5]. However, Ozawa et al [5] also used a four-wire basket. Once a stone is captured in a basket, reliable extraction is usually ensured. The more reliable traction associated with the basket catheter is cited as the main reason for its preferential use in Japan and Europe [5,9]. In the study by Ozawa et al [5], the balloon slipped past the stones and could not provide sufficient traction force for stone extraction within 10 min in four patients in the balloon group, and the stones were successfully captured and withdrawn after exchange for the basket in all cases. However, a basket with a captured stone may occasionally become impacted at the papilla during extraction if the sphincterotomy is insufficient or if the stone is larger than estimated. According to the ESGE guidelines, the difference between balloon and basket catheters is minimal, so endoscopists may use either; according to the ASGE guidelines, however, the balloon catheter is recommended because of safety issues related to basket impaction [18,37].
REMOVAL OF DIFFICULT SMALL BILE DUCT STONES
There are two main maneuvers when retrieving CBD stones with a retrieval balloon or basket. The first is to pull the catheter with the right hand. The other is to apply right rotation and push on the endoscope, using the down angle with dial control if necessary. The difference between the two lies in the direction of the force on the retrieval balloon or basket. In the former, the retrieval balloon or basket is directed toward the forceps channel at the endoscope tip, whereas in the latter it is directed toward the tip of the endoscope being pushed in (Figure 2). The important basic rule is that the direction of the force applied to the catheter should coincide with the long axis of the bile duct, and one can choose whichever of the two methods accomplishes this more easily. However, in cases with pockets in the lower part of the bile duct, stone extraction is difficult. Once a stone is impacted in such a corner pocket, the balloon passes alongside the stone without removing it, and stone removal is often difficult even after repeated attempts. Such cases can be handled by pushing the stone up to the middle of the bile duct and then grasping it with a basket, or by using a basket shaped to extract the stone out of the pocket, such as the disposable NT retrieval basket (VorticCatch V: Olympus Medical Systems, Japan) (Figure 3). Furthermore, stones near the bifurcation of the gallbladder duct are difficult to grasp with a retrieval balloon or basket (Figure 4). Surgery is considered in these cases; however, they can also be addressed with cholangioscopy, for example in conjunction with electrohydraulic lithotripsy (EHL) [15]. When it is difficult to grasp a CBD stone, a basket that directly grasps the stone under cholangioscopy is available [38].
In a multicenter retrospective cohort study involving 98 patients (49 in the EUS-TD group and 49 in the enteroscopy-assisted ERCP (eERCP) group), technical success was achieved in 98% of patients in the EUS-TD group compared with 65.3% in the eERCP group (OR 12.48, P = 0.001). EUS-TD also had a significantly shorter procedural time (55 vs 95 min, P < 0.001). However, more complications of mild/moderate severity occurred in the EUS-TD group (20% vs 4%, P = 0.01), and the length of stay was significantly longer in the EUS-TD group (6.6 vs 2.4 d, P < 0.001) [40]. PTBD is also a useful alternative, with a reported success rate of approximately 97%, but this route of stone removal may cause problems such as drainage tube trouble or an increased number of sessions [41].
TROUBLESHOOTING DURING STONE REMOVAL
A serious drawback of basket catheters is that, during stone extraction, the basket with the captured stone can become impacted in the lower bile duct or at the papilla. When basket impaction occurs, the basket must first be opened and pushed upwards toward the hepatic hilum. An attempt is then made to curl the basket wires back and disengage the stone (Figure 6). If this technique fails, more complicated techniques, such as mechanical lithotripsy and intra- or extracorporeal lithotripsy, are required [5]. To use a lithotripter such as the BML-110A-1 (Olympus Medical Systems, Tokyo) (Figure 7), which can be retrofitted to a basket catheter, the basket catheter is cut outside the body, the endoscope is removed, and the wires of the basket catheter emerging from the mouth are wound onto the lithotripter. However, if the basket cannot be released even with a lithotripter, a cholangioscope can be helpful: the basket and the grasped stone are visualized under the cholangioscope and the stone is crushed by EHL or a YAG laser (Figure 8).
CBD stones complicated with cholangitis
The Tokyo Guidelines 2018 (TG18) and the ASGE suggest that bile duct stone removal following EST in a single session may be considered in patients with mild or moderate acute cholangitis [42,43]. However, given that hemodynamically unstable patients or those with coagulopathy might not tolerate procedural bleeding or adverse events, decompression alone should be considered in this group [42,43]. PEP is not increased even in cases complicated by cholangitis [43]. TG18 suggests that endoscopic nasobiliary drainage (ENBD) or endoscopic biliary stenting (EBS) may be considered for biliary drainage according to the patient's background and preference. It should be borne in mind that patients who experience discomfort from transnasal tube placement are likely to remove the tube themselves, particularly elderly patients. EBS is an internal drainage technique that causes no discomfort and no loss of electrolytes or fluids. In contrast, ENBD is an external drainage technique that allows monitoring or washing of bile via the transnasal tube, which is particularly useful if the bile is purulent [42]. The ESGE does not provide specific recommendations on this point [15]. We present a table summarizing each guideline, focusing on key points (Table 2).
CBD stones complicated with cholecystolithiasis
In the general population, CBD stones complicated with cholecystolithiasis occur commonly. The established gold standard for the treatment of symptomatic cholecystolithiasis is laparoscopic cholecystectomy (LC), but the treatment strategy for the accompanying CBD stones is yet to be clarified. CBD stones complicated with cholecystolithiasis can be treated with either a minimally invasive two-session strategy or a one-session strategy. The former requires pre- or post-LC ERCP, whereas the latter requires LC plus intraoperative laparoscopic CBD exploration (LCBDE) or LC with intraoperative ERCP [44]. In terms of efficacy, morbidity, and mortality, endoscopic and surgical techniques for extracting these stones are equally suitable [45]. However, one-session procedures usually result in a shorter hospital stay [15], and a recent meta-analysis has demonstrated that the one-session procedure has a higher success rate than the two-session procedure [46]. For one-session procedures, many surgeons prefer the less invasive and less complicated transcystic approach; however, bile duct incision is recommended for a dilated CBD, large-diameter or multiple stones, impacted stones, and stones with intrahepatic localization [47,48]. It is recommended to start with the transcystic approach and to move on to exploration via bile duct incision if this proves difficult [44,49]. Laparoscopic stone removal can be performed fluoroscopically or cholangioscopically; the use of a flexible cholangioscope is preferred because of its accuracy and direct visual control. However, the one-session procedure requires advanced laparoscopic techniques, a long learning curve, and specialized equipment, which may not be available at all treatment facilities [50-52]. ESGE recommends transcystic or transductal exploration of the CBD as a safe and effective technique for removal of CBD stones in patients undergoing laparoscopic cholecystectomy, provided that local expertise and resources are adequate [15]. It should be noted that the results of surgical treatment of CBD stones, which are generally excellent in published reports, usually come from laparoscopic centers of excellence, and there are hardly any reports from less experienced surgeons. Therefore, the ESGE does not clearly state whether the one-session or the two-session procedure should be preferred.
There are no recent reports on laparoscopic surgery specifically for small CBD stones; however, Huang et al [53], in their report on laparoscopic surgery in patients with a small-diameter CBD harboring stones, indicated that LCBDE is safe and feasible in such patients.
CBD stones complicated with distal biliary stricture
Few reports have been published on the extraction of CBD stones in the presence of a distal biliary stricture [54,55]; however, plastic stent(s) [56,57], covered self-expandable metallic stent(s) (cSEMS) [56][57][58], balloon dilation [59], and surgery [60] have been used to manage bile duct stenosis. Balloon dilation, however, carries a risk of bile duct injury. Therefore, when endoscopic stone extraction is performed for CBD stones with a benign biliary stricture, it may be advisable to place multiple plastic stents or a cSEMS for several months and perform endoscopic stone extraction after bile duct dilation has been achieved [61]. Combining these approaches with mechanical lithotripsy may also be useful [54]. Ogura et al [55] reported that transluminal stone extraction through the EUS-TD route, without passing through the distal bile duct, might be useful. Reports of CBD stones with a malignant biliary stricture are even scarcer; however, the safety of 6-8 mm balloon dilation for malignant biliary stricture has been reported [62]. In malignant biliary stricture with a limited prognosis, stenting alone may be sufficient and stone extraction may not be necessary; nevertheless, balloon dilation for stone extraction may be considered in cases of short-term stent obstruction.
CONCLUSION
While EST is the standard treatment, papillary dilation with EPBD is also a viable option for younger patients who wish to reduce the risk of long-term recurrence and for patients with coagulopathy, as EPBD is considered to carry a lower risk of bleeding and perforation than EST. Several methods have recently been proposed to reduce PEP, the greatest weakness of EPBD. We would also like to draw attention to ESBD, which should be the subject of future research.
For small stones in the CBD, it is not necessary to strictly distinguish between the retrieval balloon and the basket; however, if one device cannot remove the stone, it is recommended to switch to the other. In cases with pockets in the lower bile duct, the VorticCatch V is also useful. It is also important to gain experience with EUS-TD, the lithotripter, and cholangioscopy in order to deal with problems such as stones stuck in the basket and other difficult cases of stone retrieval.
In cases complicated by cholangitis, stone retrieval can be performed in a single session when the cholangitis is mild or moderate. In severe cases, decompression alone should be considered, and EBS is generally recommended. Cases of CBD stones complicated with cholecystolithiasis that are scheduled for one-session surgical treatment, and CBD stones complicated with distal biliary stricture, should be treated in facilities with adequate experience and equipment.
Figure 1
Figure 1 The oral protrusion.Endoscopic sphincterotomy incision size.The risk of perforation increases when the incision exceeds the superior margin of oral protrusion.
Figure 2
Figure 2 Basket/balloon catheter operations.A: Direction of force on the retrieval balloon or basket when pulling the catheter with the right hand; B: Direction of the force on the retrieval balloon or basket when applying right rotation and pushing the endoscope.
Figure 3
Figure 3 Stone in the lower common bile duct pocket.Red arrows indicate a stone in the lower common bile duct pocket.A: A case with stone in the lower common bile duct pocket; B: Disposable NT retrieval basket (VorticCatch V: Olympus Medical Systems, Japan).
Figure 4
Figure 4 Stone stuck in the bifurcation of the gallbladder duct.Red arrows indicate a stone stuck in the bifurcation of the gallbladder duct.A: The stone got stuck in the bifurcation of the gallbladder duct, as seen by endoscopic retrograde cholangiopancreatography.The guidewire could not be inserted into the gallbladder duct because of the obstruction by a stone; B: A stone stuck in the bifurcation of the gallbladder duct as observed by cholangioscopy.
Figure 5
Figure 5 Case of total gastrectomy with RY reconstruction.Enteroscopy-assisted endoscopic retrograde cholangiopancreatography was unsuccessful; therefore, an endoscopic ultrasound-guided hepaticogastrostomy was performed.The red arrow indicates bile duct stones.
Figure 6
Figure 6 Release of grasped stones.A: Push the basket catheter up into the hepatic hilum; B: Push further to invert the grasped stone; C: Push and deflect the basket wire; D: Close the basket while pushing the catheter.
Figure 7
Figure 7 BML-110A-1 (Olympus Medical Systems, Tokyo). The authors obtained permission from Olympus to use this figure (Supplementary material).
Figure 8
Figure 8 A case in which the basket could not be released even with the lithotripter. The basket and grasped stone were visualized under cholangioscopy and the stone was crushed using electrohydraulic lithotripsy (EHL). Red arrows indicate common bile duct (CBD) stones, the white arrow the basket catheter, and the orange arrow the EHL probe. A: CBD stone; B: CBD stone grasped by the basket; C: CBD stone grasped by the basket as seen by cholangioscopy; D: CBD stone crushed with EHL.
|
2023-04-09T05:39:47.306Z
|
0001-01-01T00:00:00.000
|
{
"year": 2023,
"sha1": "756df9eda94e29709d61da75557cb5083b53c6c7",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v29.i12.1863",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "220a42977a72ad95978d3a7e625136caf8769e13",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
}
|
5632046
|
pes2o/s2orc
|
v3-fos-license
|
FOOD INTOLERANCES AND ASSOCIATED SYMPTOMS IN PATIENTS UNDERGOING FOBI-CAPELLA TECHNIQUE WITHOUT GASTRIC RING
Background Bariatric surgery is considered the only effective method to treat refractory obesity, especially in patients for whom clinical treatment was not successful. However, the appearance of food intolerances and clinical manifestations is quite common. Aim To identify food intolerances and associate them with symptoms in patients undergoing the Fobi-Capella technique without gastric ring. Methods This was a cross-sectional study of adult patients who were up to one year past surgery. Demographic and anthropometric data and preoperative weight and height were investigated. Nutritional status was classified according to the criteria established by the World Health Organization. Food intolerance was defined as the presence of nausea, vomiting, diarrhea or bloating after eating a particular food. Results The sample consisted of 61 patients who attended the nutritional consultation, of whom 26 (42.6%) had food intolerance, mostly related to red meat (n=12; 34.3%) during the first six months after the operation; there was a significant difference between the 0-6-month and 7-12-month periods (p=0.02). Among the symptoms reported by patients, nausea was the most recurrent until the 6th month, but without a significant difference between the two periods (p=0.06). Conclusions The Fobi-Capella procedure without gastric ring promoted a high frequency of intolerance to meat in general, especially red meat, chicken and fish, in that order; nausea was the most frequent symptom. These data suggest the need for adequate nutritional monitoring throughout the postoperative period.
INTRODUCTION
Given the significant global rise in morbid obesity, bariatric surgery has become frequent in many countries. Of all the techniques, the Roux-en-Y gastric bypass 4 stands out because it is effective and has low morbidity, being considered the gold standard for the treatment of the disease 5 .
Despite the successful weight loss and improvement of obesity-related comorbidities, the onset of postoperative food intolerances and clinical manifestations are quite common. They are caused by many factors, such as changes in the gastrointestinal system and the slow adaptation of the body to all the changes made by surgery 8 . Intolerances may appear at any time. However, their intensity subsides and varies between individuals 17 . Salviano et al. 19 found that roughly 53% of the patients submitted to this type of surgery have postoperative food intolerances, most of them due to red meat (44%), pasta/sweets (24%), and milk (20%). Food intolerances may be accompanied by nausea, vomiting, and dumping syndrome 20 .
To minimize possible complications, nutritional follow-up is necessary before and after surgery. First, this care is very important to prepare the patients for the upcoming necessary changes in food habits, chewing, serving size, and meal duration 22 .
Postoperative nutritional follow-up is critical to avoid food intolerances, nutritional deficiencies stemming from inadequate food intake, and excessive weight loss, in addition to the much required multidisciplinary followup 17 .
Hence, the objective of the present study is to identify food intolerances and associated symptoms in patients submitted to Roux-en-Y gastric bypass.
METHODS
This study was submitted to the CEP/CONEP system of the Brazilian Platform and approved by the local Human Research Ethics Committee under protocol number 06578412.0.0000.5193/2012, as required by Resolution nº 466/2012 issued by the National Health Council. A signed letter of consent provided the permission to conduct the study at a health facility. All patients signed an informed consent form before entering the study.
This cross-sectional study was conducted at a private clinic in the city of Recife, PE, Brazil. The sample consisted of male and female patients aged 20 to 58 years with or without obesity-related comorbidities submitted to Roux-en-Y gastric bypass no more than one year before the interview.
The exclusion criteria were: lactose intolerance, kidney disease, celiac disease, pregnancy, banded bypass, and refusal to join the study.
Data were collected from patients who visited the nutrition outpatient clinic between August and November 2012 either to start or continue the postoperative followup. Preoperative demographic and anthropometric data were collected. Weight and height had been measured in the last nutritional visit before surgery, allowing the calculation of preoperative body mass index (BMI). The participants' nutritional status was classified as recommended by the World Health Organization (WHO) 24 .
A validated self-administered, easy-to-understand and -fill out food intolerance questionnaire was used for collecting dietary data. The questionnaire consisted of objective and subjective questions for the patients to report their current eating habits. Food intolerance was defined as the presence of nausea, vomiting, diarrhea, and/or abdominal bloating after the intake of a particular food 12 .
The data were saved in the software Microsoft Excel 2007®. The statistical analyses were performed by the programs SPSS (Statistical Package for Social Sciences) version 13.0 and Epi-info version 6.04. The Kolmogorov-Smirnov test investigated whether the data was normally distributed. All continuous variables presented a Gaussian distribution and were expressed as means and standard deviations (SD) or percentages, with additional calculation of the 95% confidence intervals (95%CI).
The groups were categorized according to the time since surgery (0-6 months and 7-12 months) and compared. The categorical variables were expressed as simple frequencies and compared by Pearson's chi-square test or Fisher's exact test when necessary. The significance level was set at 5%.
Food intolerance-related symptoms subsided in the 7-12-months period, but the difference was not significant ( Table 2).
DISCUSSION
Obesity is a chronic non-communicable disease characterized by an excessive accumulation of body fat. Today it is considered a severe public health problem, reaching epidemic proportions both in developed and developing countries 14 . Its cause is related to complex endocrine-metabolic, genetic, socioeconomic, environmental, behavioral, and psychological interactions 7 .
Many diseases can be associated with obesity because of excess body fat, such as diabetes mellitus, high blood pressure, dyslipidemia, metabolic syndrome, and cardiovascular disease. All these factors can worsen health and cause premature death 11 .
The mean postoperative age (37.8 years) and BMI (44.1 kg/m²) of the study sample were similar to those reported by Bregion et al. 1 .
The study sample included considerably more women than men, similar to Quadros et al. 17 , who studied 165 patients of whom 128 (77.6%) were women. A possible justification is the unforgiving beauty standards imposed by society and the higher incidence of obesity among women than among men in the city of Recife, a figure reported by the Ministry of Health, which found that 17.1% of women and 12.2% of men living in the city were obese 10 .
According to the Brazilian Family Budget Survey (POF 2008-2009), the rates of obesity increased in the adult Brazilian population in the last 35 years. The increase was significantly higher in males, going from 2.8% to 12.4%, while in females it went from 8% to 16.9%. Despite the high increase among men, obesity continues to prevail in women 7 .
Most of the study sample was aged 35 to 59 years. Brazilian studies show that ageing is an important determinant of obesity, especially in women. Women gain approximately 6% of their body weight per decade. Thus, roughly 6.9% of women aged 18-24 years are obese; this percentage almost doubles at 25-34 years (12.4%) and almost triples at 35-44 years (17.1%). After age 45 years, obesity in women reaches an even higher incidence, approximately 25% 11 .
Twenty-six individuals (42.6%) experienced food intolerances in the postoperative period, a slightly smaller proportion than those found by Soares & Falcão 21 (46.7%) and Cruz & Marimoto 2 (46.5%), but higher than that found by Silva et al. 20 (37.7%). High-protein foods were the least tolerated, especially red meat in the first six months after surgery (n=12; 34.28%). On the other hand, intolerance to grains and flours was higher in the 7-12-months period (n=4; 57.1%); but the difference was not significant (p=0.40).
These data corroborate a study done with 37 obese patients followed at a university hospital: the frequency of intolerance to high-protein foods, especially red meat (35.3%) and chicken (11.8%), increased in the first three months after surgery. Intolerance to grains and flours, such as rice (11.8%) and cornmeal (14.7%), began three or more months after surgery 13 .
Meat intolerance may stem from the significant gastric resection promoted by surgery, changing the amount of pepsin produced, an enzyme responsible for protein digestion 9 . On the other hand, rice intolerance may stem from impaired amylase activity due to rice hydration and gelatinization, which occur during cooking 23 .
The present study found very few women with legume and tuber intolerances in the two study groups, which may be justified by the low intake of high-fiber foods in the first year after surgery 17 .
Some patients do not tolerate lactose well since the intestinal rearrangement promoted by surgery reduces lactase production, resulting in poor lactose digestion 3 . Since the changes promoted by surgery make the first month after surgery the most critical period, more food intolerances occurred in the 0-6-months period.
The frequency of intolerance to sugar and sweets was the same in both study periods, and the frequency of intolerance to deep-fried foods was slightly higher in the first six months after surgery, but the difference was not significant. Gomes et al. 6 found that patients begin to consume higher amounts of foods high in simple sugars and fats six months or more after surgery, so intolerances are more likely to occur then.
The food intolerance-related symptoms reported by the patients were diarrhea, nausea, vomiting, gastroesophageal reflux, postprandial gastric distension, abdominal pain, and dumping syndrome, all of which were more frequent in the first six months after surgery, but the difference was not significant.
Abdominal pain may be more frequent in the first months after surgery because of high food intake and inadequate chewing, impairing digestion 18 .
Pessina, Andreoli, & Vassallo 16 found more frequent complaints of nausea and vomiting in the first six months after surgery. Mottin et al. 15 reported that 48.9% of their patients experienced vomiting in the second month after surgery, coinciding with the introduction of normal-consistency foods, especially rice and meat. Another study of 69 patients found that 37.7% presented food intolerances, and the most common symptoms were vomiting (69%) and diarrhea (12%) 20 .
Regarding dumping syndrome, Deitel 3 states that these symptoms may affect 70% of the patients, especially in the first months after surgery, corroborating the present finding that the incidence of this syndrome was higher in the 0-6-months period (60%), but the difference in relation to the 7-12-months period was not significant.
CONCLUSION
Roux-en-Y gastric bypass caused a high frequency of intolerance to meats in general, especially red meat, chicken, and fish, in this order. Nausea was the most frequent symptom. These data suggest the need of proper nutritional follow-up during the entire postoperative period.
|
2017-07-07T04:23:03.318Z
|
2015-01-01T00:00:00.000
|
{
"year": 2015,
"sha1": "f10f47dd74d1058f17fa2d30ed6131e0472350cb",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/abcd/v28n1/0102-6720-abcd-28-01-00036.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f10f47dd74d1058f17fa2d30ed6131e0472350cb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
268713859
|
pes2o/s2orc
|
v3-fos-license
|
Identifying Guarantors of War Veterans Using Robust-SEAL: A Case of the Korean War
Most countries provide veterans with various benefits to reward their sacrifice. Unfortunately, many veterans have failed to prove their status due to the loss of military records. Thus, some governments allow such veterans to be verified through "buddy statements" obtained from people who can vouch for the buddy's participation in the war. However, it is still challenging for veterans to find guarantors on their own. With this background, we suggest utilizing historical war records of combined operations to increase the pool of potential guarantors for buddy statements. However, a combined-operation network among troops can have missing edges and perturbed troop attributes due to inaccurate information. In this study, we learn from recorded interactions, which may be incomplete and noisy, and predict missing linkages among troops that might have interacted in the war, by proposing Robust-SEAL (learning from Subgraphs, Embeddings, and Attributes for Link prediction). It combines two Graph Neural Network (GNN) architectures: robust Graph Convolutional Network, which accounts for the uncertainty of node attributes with a probabilistic approach, and SEAL, which improves the expressive power of the GNN with a labeling trick. Our proposed approach was applied to Korean War data with perturbations. For experimentation, we hid some actual interactions and found that Robust-SEAL restores missing interactions better than other GNN-based baselines.
Introduction
In the 20th century, numerous wars broke out, including the Korean War and the South African Border War, and wars are currently taking place in many areas. Many citizens are compulsorily conscripted into the military and must participate in these wars. Even after a war ends, veterans suffer from aftereffects such as post-traumatic stress disorder (PTSD) and chronic fatigue syndrome (CFS). Although some countries offer welfare benefits honoring veterans' commitments (Casler, Fosmire, and Klein, 2019), it is a big hurdle for veterans to obtain proof of their status. One of the reasons is that many military records are missing: in the past, many records were written manually and stored physically, so they could easily be lost. For example, some records of the U.S. Army were lost in a fire (Stender and Walker 1974). This kind of loss makes it difficult to prove an applicant's participation in the war.
Although some countries allow applicants to submit alternative documents, such as medical reports, prescriptions related to the veteran's injury or illness, or photographs and letters from the veteran's time in service, there are still many veterans who do not have such documents. Those who cannot submit alternative documents may still have a chance to be registered if they obtain buddy statements from comrades in the war. However, it is also challenging for applicants to find comrades on their own after the war, even if they remember some of them: some comrades might have died during or after the war. While the search can be accelerated by government organizations that hold military records, military records at the individual level are often not available.
Unfortunately, however, little has been done to address such difficulties of war veterans in proving their identification. Most previous studies have dealt with the aftereffects of war on veterans, such as problems in their social life and mental health (Ra 2017). Only a few studies have mentioned the challenges faced by veterans in proving their participation in the war, especially non-regular soldiers with insufficient military records (Nam 2013).
We propose a framework to recommend suitable guarantors for veterans who are unable to find records or documents proving their identification. Beyond recommending guarantors who were affiliated with the same unit, our framework can recommend guarantors from different units that participated in the same combined operation, or from indirectly connected units, by solving a link prediction problem as shown in Figure 1. In particular, we design a Graph Neural Network (GNN)-based link prediction model, as GNNs have been leading this area (Zhang and Chen 2018; Cai et al. 2021). While many existing link prediction models are vulnerable to noisy data, which are especially common in war records such as per-unit death counts in battles, we address this issue by employing a probabilistic approach that assumes node attributes have stochastic components. Furthermore, this approach is applied to the representative link prediction model SEAL (learning from Subgraphs, Embeddings, and Attributes for Link prediction). This mitigates the limited expressive power of most GNN-based link predictors, which are unaware of global contexts in a graph, such as the distance between target nodes, because such information cannot be captured within the k-hop neighborhood information, where k is the number of graph neural layers (Zhang and Chen 2018; Zhang et al. 2021). Our framework, Robust-SEAL, is applied to the Korean War and evaluated by comparing its recommendation performance with that of other GNN-based baseline models.
Veterans Affairs
There have been several efforts to solve war veterans' difficulties.Most studies have focused on what kinds of aftereffects the veterans in their physical, mental and social health have experienced, along with the corresponding compensations or benefits.For example, Kang et al. (2013) emphasized needs for testing and treating PTSD and CFS for veterans based on their finding that many of the U.S. veterans of the Gulf War suffer from those problems.Williamson et al. (2018) also stressed the need for further investigations on mental health and treatment of veterans especially aged 65 and older.Furthermore, Casler, Fosmire, and Klein (2019) addressed the difficulties with respect to their communities, families, in addition to mental health.Nelson et al. (2015) focused on difficulties that veterans experience in employment due to physical and mental problems, and emphasized the necessity of research taking into account the employment-related demands of veterans to resolve the problem.
However, despite health and wellbeing benefits or programs for veterans, registration as a veteran is a huge challenge for some of them.For example, Nam (2013) focused on difficulties of non-regular soldiers who participated in the Korean War in getting their registration as veteran.The author found that most of the non-regular soldiers do not have sufficient evidence for their participation in the war and they suffer from various kinds of disorders.In particular, they had to sacrifice a lot, such as being deprived of educational opportunities in their young lives (Jeong and Kim 2018).However, little is known about a systematic framework to support the registration process.
Graph Neural Network-based Link prediction
The prediction of missing links, or of potential links to be connected, was proposed in 2007 and has since been actively applied to various real-world problems (Kumar et al. 2020). While earlier research required hand-crafted feature engineering from domain knowledge to obtain useful features for link prediction, recent studies have shown that data-driven features can be learned by neural networks (NN). In particular, GNNs utilize structural information as well as node or edge attributes. Scarselli, Gori, and Tsoi (2009) first proposed a GNN that exploits structural information among nodes within an NN. A few years later, several researchers introduced Graph Convolutional Networks, which generalize the concept of a convolution filter to graph-structured data (Bruna et al. 2014; Defferrard, Bresson, and Vandergheynst 2016; Kipf and Welling 2017). Since then, many GNN architectures have been developed (Hamilton, Ying, and Leskovec 2017; Veličković et al. 2018; Vaswani et al. 2017; Wu et al. 2020a). These GNN models have been actively applied to problems in various domains such as recommender systems, natural language processing, and computer vision, to name a few (Wu et al. 2020b; Yuan et al. 2021). Despite such innovations, some limitations have been reported. For example, most GNNs are vulnerable to nodal or structural perturbations, as such perturbations affect not only the perturbed node or link but also its direct and indirect neighbors through connected links. Addressing this is still an open question, but the main approaches include identifying suspicious nodes (Zhu et al. 2019) or links (Zhang and Zitnik 2020) and down-weighting them, or modifying the loss function with a worst-case or robust loss (Geisler et al. 2021).

Figure 1: Guarantor recommendation process: Guarantors are recommended from the directly neighbored units in the order of the number of combined operations with the applicant's unit. If there are insufficient guarantors among the directly neighbored units, further guarantors are recommended from the units with the highest probability of being linked.
Another challenge is that most of the GNNs have limited expressive power, which means that they fail to capture high-order features in graphs such as distance-based ones which could be critical to a link prediction task (Xu et al. 2019;Zhang et al. 2021).For this reason, many researchers have struggled to improve the expressive power of GNNs (Morris et al. 2021), and higher-order GNNs which utilize subgraphs for each node or labeling tricks for incorporating global contexts in the graph are some of the representative GNN methods towards the improved expressivity.
Data and Methodology
In our framework, we recommend guarantors from war records that include information on military units and their interactions via combined operations, along with the number of deaths. However, as those records are not complete, we try to restore missing interactions among units via a link prediction model. We recommend guarantors for an applicant not only from veterans who, according to the war records, participated in the same operation, but also from those who might have participated together, or who might have had indirect or higher-order interactions via directly neighbored units, beyond what the war records show. We expect government bodies such as the Ministry of Patriots and Veterans Affairs to apply the link prediction results to alleviate the difficulty veterans face in finding guarantors themselves. Details of the data and methods in our framework are presented in the following sections.
Data
In this paper, we introduce a case of the Korean War as an example.We collect the war records to construct a combined operation network among military units.The Korean War was fought for about three years, from June 25, 1950 to July 27, 1953.During this entire period, a total of 376 major combined operations between forces were officially executed (Cho et al. 2017), where combined operations mean joint operations among units, such as small-scale combat and alert operations.As most of the combined operations were carried out in the certain period between September 16 and October 31 in 1950 (Cho et al. 2017), the war records in this period were investigated in this study.
The details of the combined operation data were collected through Min ( 2009) to construct the network for guarantor recommendations.By analyzing operational orders and movement paths between Army units included in the data, we identified cases in which combined operations between units such as shift changes and battles were executed.In particular, the movement route, base occupation, and advancement of each unit were converted into daily data.Hierarchical unit structure and the combined operation data of the Eastern front (capital and 3rd divisions) and the Western front (1st, 6th, 7th, and 8th divisions) were extracted as shown in Figure 2.
Some combined operations were conducted between different regiments.For example, the recapture operation of Yongbyon was conducted in the Western Front between the 3rd unit of the 19th regiment in the 6th division, and all units of the 15th regiment in the 1st division, on October 24, 1950.
Next, the time period in this study was classified into four smaller periods based on the dates when the number of soldiers killed per division per day changed significantly.For instance, the full period over which the Capital division operated was split at September 21, October 1, and October 11, as displayed in Figure 2. The 7th division was organized as a reserve division during this period, and due to infrequent combined operations, the period was not split for this division.
As soldiers did not necessarily participate in the war for all the periods, there exist various clusters of soldiers even within the same unit. For instance, some of them participated only in the first two periods while others did only in the last two periods. Thus, we introduce different nodes depending on the periods in which the soldiers participated and on their affiliated unit, defined by division, regiment, and unit. In detail, 15 groups are defined for each unit, since the number of possible non-empty combinations of participation across the four periods is 2^4 − 1 = 15; the units of the 7th division are the exception. In addition, each division includes three regiments, and each regiment includes three units. Therefore, 675 (5 divisions × 3 regiments × 3 units × 15 groups) nodes are created for all divisions except the 7th, for which 9 (3 regiments × 3 units) nodes are created. In total, 684 unit nodes are introduced, where each node represents a group of soldiers who co-participated within a unit during a specific set of periods. An edge indicates whether two groups had interactions, approximated by whether the groups conducted a combined operation together, while its weight denotes the number of combined operations between the groups. The number of edges in our network is 32,967, and the average weight is 2.3526. The average weighted node degree is 98.2. However, there might be some missing interactions that were never recorded, given the characteristics of war data.
Moreover, we utilize nodal features in our analysis, namely the division, the regiment, and the number of deaths in the unit during the periods, collected from the information retrieval system in the War Memorial of South Korea. The division and regiment are treated as nominal variables, while the number of deaths is a continuous variable. In particular, the number of deaths can reflect the complexity or type of battle (local or total war, etc.), which provides useful information for the analysis. However, casualty information was not available at the unit level, so we use daily casualties at the regiment level, which means the nodal information may include some noise. For this reason, accounting for uncertainty in the nodal features is important in this research. Our dataset covers 191,451 deaths in this period, among the 588,000 casualties of the Republic of Korea Armed Forces and the 538,000 casualties of the United Nations Forces in the war.
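To make this data structure concrete, the following minimal sketch (using Python and networkx) builds a toy version of the weighted troop-interaction network; the unit identifiers, period patterns, and counts are hypothetical placeholders, not actual war records.

```python
import networkx as nx

# Toy version of the troop-interaction network: one node per
# (division, regiment, unit, participation-pattern) group, an edge per
# pair of groups that conducted combined operations, weighted by count.
G = nx.Graph()

# Hypothetical nodes; "P1100" means participation in periods 1-2 only.
G.add_node("1div-15reg-u2-P1100", division="1st", regiment="15th", deaths=120)
G.add_node("6div-19reg-u3-P1100", division="6th", regiment="19th", deaths=85)
G.add_node("cap-div-1reg-u1-P0011", division="Capital", regiment="1st", deaths=40)

# Hypothetical edges; weight = number of recorded combined operations.
G.add_edge("1div-15reg-u2-P1100", "6div-19reg-u3-P1100", weight=3)
G.add_edge("6div-19reg-u3-P1100", "cap-div-1reg-u1-P0011", weight=1)

# The "average weighted node degree" statistic reported above corresponds to:
weighted_degrees = dict(G.degree(weight="weight"))
print(sum(weighted_degrees.values()) / G.number_of_nodes())
```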
Robust-SEAL
The goal of this study is to restore the interaction network by predicting missing or potential interactions between unit nodes in order to recommend guarantors for applicants for veteran registration. While GNN-based models have been highly successful in link prediction tasks across domains, challenges remain regarding their vulnerability to perturbations of nodal attributes and the limited expressive power of the GNN. With this background, we propose Robust-SEAL, a hybrid of the robust GCN (Zhu et al. 2019) and SEAL (Zhang and Chen 2018; Zhang et al. 2021). We address uncertainty by employing a probabilistic approach for node representations, motivated by the robust GCN, while the labeling trick from SEAL is applied to improve the expressive power of the GNN by incorporating global contexts of the graph. The overall framework of Robust-SEAL is shown in Figure 3.
We consider the troop interaction network presented in the Data section as $G = (V, E, X, A)$, where $V$ is the set of unit nodes and each node represents a unit during a certain set of periods. $X$ is the node feature matrix, including the number of casualties and categorical variables describing the affiliated regiment and division of the unit. $E$ is the set of edges, where each edge indicates whether a pair of nodes has interactions. $A$ is the weight matrix, whose entries give the number of combined operations on an edge and thus describe the strength of interaction.
With this background, our model aims to predict binary labels for a set of unlinked node pairs $(v_i, v_j)$, where $1 \le i, j \le |V|$ and $i \ne j$. We call each node in such a pair a target node, and the model obtains an embedding of the target node pair. A typical GNN-based link prediction model embeds each node of the pair by considering its neighbors within up to $k$ hops, via extraction of its subgraph. The pair of node representations is then combined and passed through a non-linear function, such as a multi-layer perceptron or a dot-product-based operation, to produce the representation of the node pair.
Our method follows a similar approach, but we additionally utilize the labeling trick (Zhang et al. 2021), which labels each node's distance to the target node pair as an extra node feature before the graph neural layers are applied. In detail, we define the subgraph $G^{x,y}$ for a target node pair $(x, y)$ as the network formed by the union of $x$'s and $y$'s neighbors within up to $k$ hops. Then, we apply Double Radius Node Labeling to the nodes in the subgraph following Zhang and Chen (2018). The target nodes $x$ and $y$ are assigned the label 1, while every other node $c$ of the subgraph is assigned a value $f_l(c) > 1$ given by $f_l(c) = 1 + \min(d_x, d_y) + (d/2)\,[(d/2) + (d\%2) - 1]$, where $d_x = d(c, x)$ and $d_y = d(c, y)$ are the distances from $c$ to the target nodes, $d = d_x + d_y$, and $(d/2)$ and $(d\%2)$ denote the integer quotient and remainder of $d$ divided by 2, respectively. This label injects global contexts of the graph, such as the distance to the target nodes, into the input node features. We then use RGCN (Robust Graph Convolutional Networks) (Zhu et al. 2019) as the GNN architecture to address issues caused by data perturbations, such as noisy node attributes, which can lead to inaccurate predictions. The RGCN allows for uncertainty in node attributes by assuming that node representations are not deterministic and by introducing a down-weighting module that assigns smaller weights to node representations with high variance.
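Before detailing the RGCN layers, a minimal sketch of the Double Radius Node Labeling step just described is given below (plain Python over a networkx subgraph; the function and variable names are ours, not from the authors' repository, and assigning label 0 to nodes unreachable from a target is an assumed convention).

```python
import networkx as nx

def drnl_label(subgraph: nx.Graph, x, y) -> dict:
    """Double Radius Node Labeling for the target pair (x, y).

    Distances are computed on the subgraph with the edge between x and y
    (if present) temporarily removed, following common SEAL practice.
    Nodes unreachable from x or y receive label 0 (an assumed convention).
    """
    g = subgraph.copy()
    if g.has_edge(x, y):
        g.remove_edge(x, y)
    dist_x = nx.single_source_shortest_path_length(g, x)
    dist_y = nx.single_source_shortest_path_length(g, y)

    labels = {x: 1, y: 1}
    for c in g.nodes:
        if c in (x, y):
            continue
        if c not in dist_x or c not in dist_y:
            labels[c] = 0                      # unreachable from a target
            continue
        dx, dy = dist_x[c], dist_y[c]
        d = dx + dy
        labels[c] = 1 + min(dx, dy) + (d // 2) * ((d // 2) + (d % 2) - 1)
    return labels

# Worked example: a node at distances (1, 2) from the two targets has
# d = 3, so its label is 1 + 1 + 1 * (1 + 1 - 1) = 3.
```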
In detail, we assume that each node representation follows a Gaussian distribution. Let $h_i^{(l)} = \mathcal{N}(\mu_i^{(l)}, \mathrm{diag}(\sigma_i^{(l)}))$ be the latent representation of node $i$ at layer $l$, where $\mu_i^{(l)}$ is the mean vector and $\mathrm{diag}(\sigma_i^{(l)})$ is the diagonal variance matrix.
In the first layer, the matrices of means and variances are obtained as $M^{(1)} = \rho(H^{(0)} W_{\mu}^{(0)})$ and $\Sigma^{(1)} = \rho(H^{(0)} W_{\sigma}^{(0)})$, where $\rho(\cdot)$ is a non-linear activation function and $H^{(0)}$ corresponds to the input node features. Moreover, $W_{\mu}^{(0)}$ and $W_{\sigma}^{(0)}$ are learnable parameter matrices. In subsequent layers, the matrices of means and variances at the $(l+1)$-th layer are defined as $M^{(l+1)} = \rho\big(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} (M^{(l)} \odot \alpha^{(l)}) W_{\mu}^{(l)}\big)$ and $\Sigma^{(l+1)} = \rho\big(\tilde{D}^{-1} \tilde{A} \tilde{D}^{-1} (\Sigma^{(l)} \odot \alpha^{(l)} \odot \alpha^{(l)}) W_{\sigma}^{(l)}\big)$, where $\tilde{D} = D + I$, $\tilde{A} = A + I$, $D$ is the diagonal weight (degree) matrix, $I$ is the identity matrix, and $\odot$ denotes the element-wise product. To be specific, $\alpha^{(l)} = \exp(-\gamma \Sigma^{(l)})$ with hyperparameter $\gamma$; this attention term causes the GNN to assign smaller weights to high-variance nodes, which makes the model robust against node perturbations. $W_{\mu}$ and $W_{\sigma}$ are the parameters that determine the means and variances in the hidden layers. These parameters are learned in a data-driven way to detect highly variable nodes while minimizing the training loss. At the last hidden layer $L$, we sample a representation for each node and then compute the score of the target link.
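The PyTorch sketch below illustrates one such Gaussian hidden layer with the variance-based attention. It is our own simplified re-implementation of the equations above rather than the authors' code; it uses dense matrices for brevity and samples the final representation with the reparameterization trick.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RobustGraphConv(nn.Module):
    """One hidden layer of the mean/variance propagation sketched above
    (the first layer would simply apply the two linear maps to the raw
    features without neighborhood aggregation)."""

    def __init__(self, in_dim, out_dim, gamma=1.0):
        super().__init__()
        self.w_mu = nn.Linear(in_dim, out_dim, bias=False)
        self.w_sigma = nn.Linear(in_dim, out_dim, bias=False)
        self.gamma = gamma

    def forward(self, mu, sigma, adj):
        # adj: dense weighted adjacency matrix (N x N); add self-loops.
        a_tilde = adj + torch.eye(adj.size(0), device=adj.device)
        deg = a_tilde.sum(dim=1)
        # D^{-1/2} A D^{-1/2} for the means, D^{-1} A D^{-1} for the variances.
        norm_mu = a_tilde * deg.pow(-0.5).unsqueeze(1) * deg.pow(-0.5).unsqueeze(0)
        norm_sig = a_tilde * deg.pow(-1.0).unsqueeze(1) * deg.pow(-1.0).unsqueeze(0)

        att = torch.exp(-self.gamma * sigma)   # down-weight high-variance nodes
        mu_out = F.elu(norm_mu @ self.w_mu(mu * att))
        sigma_out = F.relu(norm_sig @ self.w_sigma(sigma * att * att))
        return mu_out, sigma_out

def sample_representation(mu, sigma):
    """Reparameterized sample used at the last hidden layer."""
    return mu + sigma.clamp(min=1e-8).sqrt() * torch.randn_like(sigma)
```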
A Real-World Application
The trained Robust-SEAL model is applied to the real-world combined-operation network for link prediction. Guarantors can then be identified from the link prediction results in the following manner (a minimal sketch of this procedure follows the list below).
• Find guarantors from the applicant's affiliated unit; otherwise, find them from the units that co-participated with the applicant's unit, in the order of the number of combined operations;
• If no guarantors are found, recommend guarantors from the units most likely to have had interactions with the applicant's affiliated unit, i.e., those with the highest probabilities from Robust-SEAL.
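The following minimal sketch illustrates this two-step procedure; the `link_scores` interface is a hypothetical placeholder for the probabilities produced by the trained Robust-SEAL model, not the authors' actual API.

```python
import networkx as nx

def recommend_guarantor_units(G: nx.Graph, applicant_unit, link_scores, top_k=5):
    """Return candidate units in priority order for guarantor search."""
    # Step 1: the applicant's own unit, then recorded partner units ordered
    # by the number of combined operations (edge weight).
    recorded = sorted(G[applicant_unit].items(),
                      key=lambda kv: kv[1].get("weight", 1), reverse=True)
    candidates = [applicant_unit] + [unit for unit, _ in recorded]

    # Step 2: fall back to the most probable unrecorded interactions
    # predicted by the link prediction model.
    predicted = sorted(
        ((v if u == applicant_unit else u, p)
         for (u, v), p in link_scores.items() if applicant_unit in (u, v)),
        key=lambda unit_p: unit_p[1], reverse=True)
    candidates += [unit for unit, _ in predicted if unit not in candidates]

    return candidates[:top_k]
```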
Experiments
To evaluate the proposed Robust-SEAL on the Korean War data, we set aside some links in the dataset for various testing scenarios and use the remaining links for training. We therefore train the model on an incomplete dataset and recommend the top-k pairs of military units with the highest predicted probabilities from the trained model. We then investigate how well the model recovers the actual links between military units in the testing data, especially highly weighted links representing units that co-participated many times. Furthermore, we also observe how the model performs under various perturbation strategies on nodal attributes or network structure. The recommendation performance of Robust-SEAL is compared with that of other representative GNN-based link prediction models, namely GCN (Kipf and Welling 2017), GraphSAGE (Hamilton, Ying, and Leskovec 2017), Graph Attention Networks (GAT) (Veličković et al. 2018), SEAL (Zhang and Chen 2018), GraphSAINT (Zeng et al. 2019), robust GCN (Zhu et al. 2019), and Line Graph Link Prediction (LGLP) (Cai et al. 2021). All data and implementation code can be found at https://github.com/jongin915/Robust-SEAL.
Experimental Setup
Although our dataset is restricted to the Korean War, our aim is to build a generalized model that can be applied to unseen military units, for example, unofficial units composed of volunteer soldiers such as students, or even to wars beyond the Korean War. Therefore, we evaluate the model not only in a transductive setting but also in a semi-inductive setting. In transductive link prediction, all nodes are visible in all phases: training, validation, and testing. In contrast, in the semi-inductive setting only a certain set of nodes is visible during training, the model is trained only on links among the visible nodes, and it is then evaluated on links associated with the unseen nodes.
In detail, we randomly divide the entire set of edges into training, validation, and testing sets at a ratio of 85:5:10, a popular setting for transductive link prediction (Zhang and Chen 2018). For each set, a corresponding number of negative samples is randomly selected from the unlinked node pairs. The training set is used for model training, the validation set for selecting the best-performing model, and the selected model is then applied to the test set.
On the other hand, for the semi-inductive setting the dataset is split in terms of nodes, following prior literature (Hao et al. 2020). We randomly select 10% of the nodes for testing and another 10% for validation; the remaining nodes are used for training. Edges between training nodes, together with the same number of negative samples from unlinked node pairs, are used for training, while edges among validation nodes and edges between training and validation nodes are used for validation in the same way to optimize hyperparameters.
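A minimal sketch of the transductive edge split with matched negative sampling is shown below; the semi-inductive split would instead partition the node set and derive the edge sets from it. The helper names and the fixed seed are our own choices.

```python
import random
import networkx as nx

def transductive_split(G: nx.Graph, train=0.85, val=0.05, seed=0):
    """85:5:10 edge split, each set paired with equally many negatives."""
    rng = random.Random(seed)
    edges = list(G.edges())
    rng.shuffle(edges)
    n_tr, n_va = int(train * len(edges)), int(val * len(edges))
    positives = [edges[:n_tr], edges[n_tr:n_tr + n_va], edges[n_tr + n_va:]]

    nodes = list(G.nodes())
    def sample_negatives(k):
        out = set()
        while len(out) < k:
            u, v = rng.sample(nodes, 2)
            if not G.has_edge(u, v):
                out.add((u, v))
        return list(out)

    # Returns [(train_pos, train_neg), (val_pos, val_neg), (test_pos, test_neg)].
    return [(pos, sample_negatives(len(pos))) for pos in positives]
```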
We also perform additional experiments in which more heavily perturbed situations are considered to evaluate robustness. Random perturbations are applied to the nodal attributes or the network structure of the training data. In detail, Gaussian noise is added to one of the nodal attributes, the number of casualties in a unit; alternatively, 5% or 20% of the edges in the training data are randomly selected and removed. We then test how the models perform on the validation set under either type of noise. All experiments are conducted using NVIDIA Tesla T4 computing nodes in the Google Colab environment.
Evaluation Metric
Our link prediction model is used to recommend units with the highest probabilities of linkage; thus, we evaluate the model with the nDCG (normalized Discounted Cumulative Gain) score, one of the representative measures of ranking quality (He et al. 2015). The DCG (Discounted Cumulative Gain) compares the ranked list of top-K recommended nodes from a link prediction model against the actual relevance of those nodes.
For example, the DCG of node $i$ for a top-K recommendation is defined as $\mathrm{DCG}_i@K = \sum_{k=1}^{K} A_{i,\pi_i(k)} / \log_2(k+1)$, where $\pi_i(k)$ denotes the node with the $k$-th highest predicted score for node $i$, so $A_{i,\pi_i(k)}$ is the edge weight between node $i$ and that node.
The $\mathrm{DCG}_i@K$ is normalized by its maximum possible value $\mathrm{IDCG}_i@K$, also called the ideal DCG, resulting in $\mathrm{nDCG}_i@K = \mathrm{DCG}_i@K / \mathrm{IDCG}_i@K$. The average nDCG across the nodes in the testing set is then calculated to compare the performances of the various link prediction models. We consider small values of K, from 1 to 5, so as not to lose the practicality of the recommendations.
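A minimal sketch of this metric is given below; `true_weights` maps each actual neighbor of the query node to its edge weight (the relevance), and `ranked_nodes` is the model's ranking. The function name and example values are ours.

```python
import math

def ndcg_at_k(true_weights, ranked_nodes, k=5):
    """nDCG@K for one query node, using edge weights as relevance."""
    def dcg(nodes):
        return sum(true_weights.get(n, 0.0) / math.log2(i + 2)
                   for i, n in enumerate(nodes[:k]))

    ideal = sorted(true_weights, key=true_weights.get, reverse=True)
    idcg = dcg(ideal)
    return dcg(ranked_nodes) / idcg if idcg > 0 else 0.0

# Example with hypothetical values: true_weights = {"A": 3, "B": 1} and the
# ranking ["B", "A"] give DCG = 1/log2(2) + 3/log2(3) ≈ 2.89,
# IDCG = 3/log2(2) + 1/log2(3) ≈ 3.63, so nDCG ≈ 0.80.
```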
Hyperparameters
For all models in both transductive and semi-inductive learning, we set the number of hidden units as 32, hidden layers as 2, and batch size as 32.We train each model for 300 epochs with Adam optimizer (Kingma and Ba 2015) and early stopping strategy is used on the validation set with a patience of 20 epochs.Learning rate is optimized in {0.05, 0.01, 0.001, 0.0005, 0.0001}.These are common hyperparameters across our proposed model and the baselines.
In addition, the model-specific hyperparameters are optimized via a grid search.For GCN, parameters are used as the optimal value set in the original paper (Kipf and Welling 2017).For GAT, 8 attention heads with 4 features for each head are used.For GraphSAGE, we conduct exploration for 1-hop and 2-hop sample sizes, selected in {5, 10, 15}.As every node in our war dataset has at least 15 neighbors, we set the sample size as less than or equal to 15.The selected 1-hop and 2-hop sample sizes are both 15.Regarding GraphSAINT, we explore sampling methods for training graph in {Node, Edge, Random Walk}, the number of expected edges {50, 100, 150, 200, 250, 300}, and dropout rate {0.1, 0.2, 0.3}.For the random walk sampler, we use the length of each walker to 2 following Zeng et al. (2020).
We experiment with the number of hops h = 1, 2 for SEAL, LGLP and Robust-SEAL. For the loss function in LGLP and Robust-SEAL, we set both loss coefficients to 0.0005 following Zhu et al. (2019).
Results
Table 1 shows the results of all models in transductive and semi-inductive learning, respectively.The experiments were conducted over 10 runs with different random seeds.
In particular, models employing the SEAL framework significantly outperformed the remaining GNN-based models at the 0.05 significance level by Welch's t-test. This implies that enriching nodal features with high-order information helps capture the global context of the graph, yielding more expressive features for link prediction via SEAL. The SEAL framework also remains robust against noise on nodal attributes and random edge removals, as shown in Table 2.
In addition, the variance-based attention used in Robust GCN and Robust-SEAL shows significantly improved performance in most experiments. The improvement is most pronounced in the robustness tests, where the comparison between GCN and Robust GCN in Table 2 shows gains of 11–25%, well above the 9% improvement in Table 1.
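For reference, the significance check mentioned above can be reproduced with a Welch's t-test (unequal variances) over the per-run scores of two models; the per-run nDCG values below are invented for illustration only.

```python
from scipy.stats import ttest_ind

seal_runs = [0.41, 0.43, 0.40, 0.44, 0.42, 0.41, 0.43, 0.42, 0.40, 0.44]  # 10 runs (invented)
gcn_runs  = [0.35, 0.36, 0.34, 0.37, 0.35, 0.36, 0.34, 0.35, 0.36, 0.33]

t_stat, p_value = ttest_ind(seal_runs, gcn_runs, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4g}, significant at 0.05: {p_value < 0.05}")
```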
Conclusion
The aim of this study was to reduce the difficulties veterans face in obtaining a buddy statement by enlarging the pool of guarantors who can be recommended and investigated. Although the combined operation unit network enables the recommendation of guarantors who participated in the same operations or belonged to the same units, this pool is very limited for various reasons. Moreover, some combined operation records may have been lost. In that case, our link prediction model can restore the missing links and also recommend links that may reflect indirect connections, helping to find guarantors.
The experimental results verified the higher accuracy and effectiveness of the proposed framework compared with various state-of-the-art baselines in both transductive and semi-inductive settings. The superior performance in the semi-inductive setting implies that our model can be applied to new nodes, such as those from additionally discovered war records or veterans who served in irregular military units. We expect that, using our approach, veterans who have devoted themselves to the nation can be better recognized for their service.
Our study developed robust AI-based recommender systems against noisy data for social good. However, significant challenges remain for deployment (Dwivedi et al. 2021), for instance the explainability of recommendations, although some of them can be alleviated with modern approaches (Ying et al. 2019). Another challenge is fairness, because current methods can be biased with respect to node degree (Liu, Nguyen, and Fang 2023): predictions for high-degree nodes are usually more accurate than those for low-degree nodes.
Although we could not apply the model to other war datasets due to data availability, it would be interesting to apply it, in a fully inductive setting, to other wars such as the South African Border War or the war between Ukraine and Russia, to name a few. Another direction for further research is the automatic generation of the war network from texts, images, and videos. These are left as areas for further research.
Figure 2: Western and Eastern Front of ROKA (Republic of Korea Army), 1950
Table 2: The mean (standard deviation) of nDCG@3 for perturbation experiments with Gaussian noise on nodal attributes and random edge deletion, respectively.
|
2024-03-27T15:37:48.326Z
|
2024-03-24T00:00:00.000
|
{
"year": 2024,
"sha1": "b1fe958d7a62801146e6dd6f62b206a6fd40c301",
"oa_license": null,
"oa_url": "https://doi.org/10.1609/aaai.v38i20.30201",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4a60479d0c513613826e9defcd03925aac835cef",
"s2fieldsofstudy": [
"Political Science",
"History"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
53869349
|
pes2o/s2orc
|
v3-fos-license
|
Transient Effect of the Herbicide Flumioxazin on Physiology of Vitis vinifera L. cv. Pinot Meunier
Bigot Aurélie1, Clément Christophe2 and Vaillant-Gaveau Nathalie2 1Laboratoire d'Eco-Toxicologie, URVVC-SE, EA 2069, Université de Reims Champagne-Ardenne, UFR Sciences Exactes et Naturelles, Bâtiment 18, Moulin de la Housse – BP 1039, F-51687 REIMS Cedex 2 2Laboratoire de Stress, Défenses et Reproduction des Plantes, URVVC-SE, EA 2069, Université de Reims Champagne-Ardenne, UFR Sciences Exactes et Naturelles, Bâtiment 18, Moulin de la Housse – BP 1039, F-51687 REIMS Cedex 2 France
Introduction
Pesticides are widely used to control pests and diseases in crop production. Flumioxazin (fmx), or 2-[7-fluoro-3,4-dihydro-3-oxo-4-(2-propynyl)-2H-1,4-benzoxazin-6-yl]-4,5,6,7-tetrahydro-1H-isoindole-1,3(2H)-dione, is an N-phenylphthalimide herbicide registered for pre-emergence control of broadleaved weeds in peanut (Arachis hypogaea L.), soybean (Glycine max L.) and sorghum (Grichar, 2005) and as an early pre-plant burndown treatment in cotton (Gossypium hirsutum L.) (Main et al., 2003). Fmx inhibits protoporphyrinogen oxidase (protox) in the chlorophyll biosynthetic pathway, resulting in light-induced membrane lipid peroxidation (Scott et al., 2001). In the presence of protox inhibitors, tetrapyrroles accumulate, especially protoporphyrin IX (proto IX). Protox inhibition leads to the accumulation of its substrate protoporphyrinogen, which is readily oxidized to proto IX by oxidative enzymes. Proto IX is a quite effective photosensitizer that transfers absorbed light energy to molecular oxygen to form singlet oxygen. The singlet oxygen peroxidizes lipids, leading to the destruction of cellular membranes (Moreland, 1999). Fmx is a pre-emergence herbicide applied on soil at the end of winter at a concentration of 5 mM. Fmx inhibits the development of redroot pigweed (Amaranthus retroflexus), lambsquarters (Chenopodium album), jimsonweed (Datura stramonium), morningglory (Ipomoea spp.), nutsedge (Cyperus spp.) and prickly sida (Sida spinosa L.) (Nagano, 1999; Niekamp et al., 1999). Fmx enters plants mainly through the roots, and tolerant crop species avoid injury by rapid detoxification metabolism (Yoshida et al., 1991). Fmx is applied to control adventive plants, but the presence of such molecules in the foliage of non-target crops and in the soil has been reported (Jame et al., 1999). Little information is available on the effect of fmx on crop physiology, especially in grapevine, although it is one of the most frequently used herbicides in vineyards. We showed that fmx dramatically affects grapevine physiology in vitro (Saladin et al., 2003a, b, c, d). Various concentrations of this herbicide have a negative impact on vine plantlet leaf growth, as revealed by tissue dehydration and cell membrane alteration, a decrease in osmotic potential and an accumulation of proline. Moreover, fmx treatment results in a reduction of plantlet growth and photosynthesis and further induces some perturbation in leaf carbohydrate partitioning (Saladin et al., 2003b). Proteomic analysis of grapevine under fmx stress suggested that photosynthesis-related proteins, enzymes involved in photorespiration and enzymes of sugar metabolism were impaired (Castro et al., 2005). However, these results were obtained with juvenile plantlets grown in vitro and thus have to be considered cautiously before being extended to the whole plant cultivated in vineyards. The aim of this study is to further determine the effects of fmx treatment on the photosynthetic characteristics of grapevine cutting leaves. The combined measurement of chlorophyll fluorescence and gas-exchange rates has proved to be a useful approach for distinguishing stomatal versus nonstomatal effects, as well as for estimating the importance of various types of energy use, such as thermal dissipation and photorespiration (Hendrickson et al., 2004).
Plant material, growth conditions
Canes of Vitis vinifera L. cv. Pinot Meunier were collected in winter, treated with cryptonol (2% v/v) to prevent contamination and stored in the dark at 4 °C for a minimum of two weeks. They were then cut into fragments to obtain two consecutive fertile buds and one sterile bud (Mullins, 1966;Mullins & Rajasekaran, 1981). The sterile bud was soaked for three min in a 0.1% (w/v) 3-indol butyric acid aqueous solution in order to stimulate rhizogenesis. Then, the cuttings were placed in 300 ml pots containing perlite: sand (1: 2) at 25 °C and 75% relative humidity in the greenhouse, with a 16 h photoperiod at a photosynthetic photon flux density of 400 μmol m -2 s -1 (Lebon et al., 2005).
Fmx treatments
Plants were irrigated daily with a nutrient solution optimized for grapevine culture (Coïc & Lesaint, 1971). After eight weeks, when the cuttings had eight leaves, the commercial fmx herbicide (Pledge®) was sprayed once on the soil as aqueous fmx solutions of 0.5 mM, 5 mM (the concentration recommended by the manufacturer) or 50 mM. Simultaneously, the soil of the control cuttings was sprayed with water.
Growth measurements
At the end of the experimentation, ten plants per treatment were harvested, separated into shoot and root parts, and their fresh weights were determined.
Measure of gas exchanges
The net photosynthetic rate (Pn), the stomatal conductance (gs), the intercellular CO 2 concentration (Ci) and the transpiration rate (T) were measured using a portable infrared gas analyser (LI-Cor Model 6400, Lincoln, NE, USA). The infrared gas analysis system was equipped with a clamp-on leaf cuvette that exposed 6 cm 2 of leaf area. Light, temperature and humidity were 400 µmol m -2 s -1 , 25±1 °C and 30% respectively.
Photosynthetic light response curves:
Response of Pn to photosynthetic photon flux (PPF) was measured by illuminating the leaf at decreasing PPF (from 2000 to 0 μmol m -2 s -1 ) until Pn was constant. The apparent quantum yield of CO 2 fixation (ΦCO 2 ) was calculated as the slope of the linear portion of the response curves between 0 and 100 µmol m -2 s -1 PPF. CO 2 was maintained at a constant level of 360 mmol l -1 using an LI-6400-01 CO 2 injector (LI-Cor 6400 Lincoln, NE, USA) with a high pressure liquefied CO 2 cartridge source.
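For illustration, the slope-based estimate of ΦCO2 described above can be computed with a simple linear fit over the 0–100 µmol m-2 s-1 portion of the light response curve; the data points below are hypothetical.

```python
import numpy as np

ppf = np.array([0, 25, 50, 75, 100])            # photosynthetic photon flux, µmol m-2 s-1
pn = np.array([-0.8, 0.2, 1.1, 2.0, 2.9])       # net photosynthetic rate, example values

slope, intercept = np.polyfit(ppf, pn, 1)       # linear portion of the light response curve
phi_co2 = slope                                 # apparent quantum yield of CO2 fixation
print(f"apparent quantum yield ≈ {phi_co2:.4f} mol CO2 per mol photons")
```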
Chlorophyll fluorescence measurements
The chlorophyll a fluorescence of the leaves was quantified on attached leaves with an IMAGING-PAM Chlorophyll Fluorometer (Walz, Effeltrich, Germany). The measuring system applies an array of blue light-emitting diodes (LEDs) (peak wavelength, 470 nm) for saturating light pulses. The frequency of the pulses was adjusted to 10 Hz. Measurements were carried out at the maximal distance between the camera and the leaf, corresponding to a 25 × 34 mm area. The image captured by the charge-coupled device (CCD) camera was composed of 640 × 480 pixels. During the whole experiment, the measurements were systematically performed on the adaxial side of the central parts of the young leaves. The leaves used for measurements were pre-conditioned in the dark. The initial fluorescence (Fo) was obtained after 0.5 hour of dark adaptation. Maximal fluorescence (Fm) was obtained with a saturating flash (1 s, 13 000 µmol m-2 s-1). The ratio of variable to maximal fluorescence (Fv/Fm) was calculated. The protocol for fluorescence measurement was similar to the one described by Genty et al. (1989), but the measurements were performed on attached leaves. The relative quantum yield of PSII (ΦPSII) at steady state is defined as (F'm - Fs)/F'm, where Fs and F'm are respectively the steady-state fluorescence and the maximum fluorescence in the light. ΦPSII represents the number of electrons transported by a PSII reaction centre per mole of quanta absorbed by PSII. Both photochemical (qP) and total non-photochemical quenching (qNP) were calculated according to van Kooten & Snel (1990). The Stern-Volmer equation (NPQ) was used as an indicator of the activity of energy dissipation in the pigment bed of PSII. NPQ is proportional to the effective rate constant for energy dissipation in the antennae as well as to the concentration of quenching centres (Demmig-Adams et al., 1996). Fluorescence light response curves: the response of Fv/Fm, qP and qNP to PPF (the light response curve) was measured by illuminating the leaf with actinic light at increasing PPF (0 to 1200 µmol m-2 s-1).
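As a small worked example of the parameters defined above, the snippet below derives Fv/Fm and ΦPSII from hypothetical dark-adapted (Fo, Fm) and light-adapted (Fs, F'm) readings; the Stern–Volmer NPQ form used here, (Fm − F'm)/F'm, is the standard formulation and is assumed rather than quoted from the text.

```python
def fluorescence_params(fo, fm, fs, fm_prime):
    fv_fm = (fm - fo) / fm                  # maximal photochemical efficiency of PSII
    phi_psii = (fm_prime - fs) / fm_prime   # effective PSII quantum yield in the light
    npq = (fm - fm_prime) / fm_prime        # Stern-Volmer non-photochemical quenching (assumed form)
    return fv_fm, phi_psii, npq

# hypothetical fluorescence readings
print(fluorescence_params(fo=0.20, fm=1.00, fs=0.35, fm_prime=0.60))  # ≈ (0.80, 0.42, 0.67)
```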
Chlorophyll assay
Chlorophyll contents were determined at the end of the experiment. Leaf slices were dissected and pigments were extracted under overnight continuous agitation in 80% (v/v) acetone amended with 0.5% (w/v) MgCO 3 to prevent chlorophyll acidification at 4°C. Crude extract was centrifuged at 10,000 g for 10 min at 4°C, and the supernatant was used to estimate spectrophotometrically pigment concentrations according to the absorbance coefficients determined by Lichtenthaler (1987). Results were expressed in mg g -1 fresh weight (FW).
Statistical analysis
Five replicate plants per treatment and three replicate measurements per plant were carried out. All data were analysed using the Mann–Whitney test at the 0.05 probability level.
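A hedged sketch of this comparison, using invented measurement values, is shown below; scipy's mannwhitneyu implements the test used here.

```python
from scipy.stats import mannwhitneyu

control = [10.9, 10.5, 11.2, 10.7, 11.0]   # e.g. Pn of five control plants (invented values)
treated = [9.1, 8.7, 9.4, 8.9, 9.0]        # e.g. Pn of five fmx-treated plants (invented values)

u_stat, p_value = mannwhitneyu(control, treated, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```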
Growth
In the case of fmx excess (5 and 50 mM), plant growth is inhibited (Fig. 1). Leaf growth is more affected than root growth.
Gas exchanges
Photosynthetic responses of grapevine grown with various herbicide concentrations were analysed to determine whether fmx modifies Pn, gs, T and Ci. Pn decreased significantly after one day at 0.5, 5 and 50 mM fmx (Fig. 2A). Fifteen days after spraying with 5 and 50 mM fmx, Pn was steady and equal to zero. gs was significantly affected by fmx (Fig. 2B). Detectable inhibition of gs occurred after one day using 5 and 50 mM fmx and after three days at 0.5 mM fmx. Whatever the fmx concentration, the treatment led to stomatal closure in grapevine after three days, and 15 days after spraying gs was equal to 0 for 5 and 50 mM. Transpiration of grapevine following fmx treatment was also reduced significantly at 0.5, 5 and 50 mM after two days (Fig. 2D). Ci was not affected at 0.5 mM fmx, but increased at 50 mM after 3 days and at 5 mM fmx after 5 days (Fig. 2C). At 0.5 mM fmx and after 5 days, Pn and T increased, and after 45 days they were equal to the control.
Photosynthetic light response curves
To clarify the nature of the mechanisms involved in plant adaptation to the treatment, both the CO2 distribution in the leaf and the capacity of the mesophyll to assimilate atmospheric CO2 were analyzed after ten days. The study of these processes was performed by analysing the light response curve and by measuring Pn variations in response to the increase of CO2 concentration after ten days of herbicide stress application. The leaves of the treated plants showed a drastically lower photosynthetic capacity than the control leaves. Similarly, the saturating PPF and the apparent quantum yield of CO2 fixation (ΦCO2) showed a significant decrease with increasing fmx concentrations (Table 1). Cuttings treated with 50 mM fmx did not respond to the PPF because the plants died. Also, dark respiration and the compensation point decreased when the fmx stress increased. In addition, photosynthesis was saturated at 10.86 µmol m-2 s-1 in the control, and the light-saturated net CO2 assimilation rate (Asat) decreased by 15% and 98% using 0.5 and 5 mM fmx respectively. At high PPF, the slope of the curves was null (Table 1), meaning that there was no photoinhibition. The ratio ΦPSII/ΦCO2 was inversely correlated with the efficiency of light use for carbon fixation. It was higher in leaves grown at the 5 mM fmx concentration (Table 1).

Table 1. Dark respiration (Rd), compensation point (Γ), light-saturated net CO2 assimilation rate (Asat), PPF (μmol m-2 s-1) for Asat, slope with PPF > 1500 μmol m-2 s-1 and ratio ΦPSII/ΦCO2 of grapevine with various flumioxazin concentrations after ten days. The grapevines treated with 50 mM fmx were dead. Statistical analyses were carried out using the Mann–Whitney test. Means for a considered parameter were not significantly different when followed by the same letter (P ≥ 0.05).
Chlorophyll fluorescence
All chlorophyll fluorescence parameters dropped strongly as the fmx concentration increased (Fig. 3). The Fv/Fm ratio, used as a measure of the maximal photochemical efficiency of PSII, was not modified in the controls nor in the 0.5 mM fmx treated cuttings, whereas it dropped down to zero after four days using 5 and 50 mM fmx (Fig. 3A). Similarly, ΦPSII decreased significantly after four days of 5 and 50 mM fmx treatments (Fig. 3B).
Quenching was also affected in the same way: q P and q NP decreased significantly after ten days using 5 and 50 mM fmx (Fig. 3C, D). There was no PSII activity after 10 days using 50 mM fmx and after 15 days using 5 mM.
Identification of fmx damage in the leaf
Picture of the fluorescence showed a marked decrease in fluorescence emission when the cuttings were exposed to the highest concentrations of fmx (Fig. 4). Fm images allowed early detection of fluorescence variations than Fv/Fm images. 0.5 mM fmx treatment induced only a slight fluorescence decline in the leaves during the whole treatment (Fig. 3). More drastic modifications appeared in the veins of leaves from four days using 5 mM fmx (Fig. 4). Using 50 mM fmx, the fluorescence decline appeared significant after four days in the veins and next spread rapidly throughout the entire leaf, the damages spread throughout the mesophyll (Fig. 4). Figure 5 presents the changes in light response curves of chlorophyll fluorescence in leaves ten days after fmx treatment. The responses of Fv/Fm, Q P and Q NP to PPF were measured by illuminating the leaf with actinic light at increasing PPF 0 to 1200 µmol m -2 s -1 . Treated plants responded less strongly to the light than the control. Fv/Fm and Q P decreased with increasing light intensity while Q NP increased. The fluorescence kinetics showed that the increase of fmx concentration led to a decrease in the maximal efficiency of PSII photochemistry and a decrease in the coefficients of photochemical and non-photochemical quenchings.
Chlorophyll contents
Fmx leads to a decrease in the total chlorophyll, chlorophyll a and b, and carotenoid concentrations. We measured a decline in the chlorophyll a/chlorophyll b ratio (
Discussion
These results provide new insights into the effects of the fmx herbicide on grapevine physiology through the analysis of numerous parameters. We have demonstrated a transient fmx effect on Pinot Meunier physiology. The response of this cultivar was different from that observed with Chardonnay (Bigot et al., 2007). These results complement preliminary information on the stress effects of this herbicide on plant physiology in vitro (Saladin et al., 2003a, b, c, d; Castro et al., 2005) and help to further understand how the herbicide acts on non-target grapevine. The soil-applied herbicide is known to be a peroxidizing agent, through the inhibition of protoporphyrinogen IX oxidase in the chlorophyll biosynthetic pathway (Scott et al., 2001). It appears that fmx also affects other metabolic functions, i.e. all the photosynthetic parameters we evaluated. It induces a strong inhibition of net photosynthesis and a parallel decrease of stomatal conductance and transpiration. Photosystem II activity is also affected.
Fmx inhibits CO 2 assimilation
All the photosynthetic parameters of grapevine cutting leaves were significantly reduced after 25 days of fmx treatment. The reduction of net photosynthesis and transpiration was associated with a decline in stomatal conductance. The photosynthesis decrease in leaves may be caused by stomatal closure. However, the reduction of Fv/Fm and of the quantum yield of CO2 assimilation indicates that the efficiency of photochemistry is also impaired in grapevine treated with 5 and 50 mM fmx. There is a strong relationship between photosynthetic electron transport and the carbon fixed by plants (Genty et al., 1989). ΦPSII/ΦCO2 is an estimate of the relationship between the rate of electron transport and carbon fixation. If four electrons are consumed per mol of CO2 fixed and if the light is equally distributed between the two photosystems, ΦPSII/ΦCO2 should theoretically be 8 at minimum. Experimentally, ΦPSII/ΦCO2 greater than 8 is obtained, meaning that electrons are also used for processes other than photosynthesis, such as photorespiration, N assimilation, or pseudocyclic electron transport (Genty et al., 1989).
Fmx affects chlorophyll a fluorescence
The fluorescence arising from chlorophyll is almost exclusively associated with PSII (Schreiber et al., 1994). Since PSII functioning is sensitive to a wide range of environmental variations, chlorophyll fluorescence provides numerous information on the effects of stresses on plants (Schreiber et al., 1994). Our results clearly show that fmx significantly inhibits the quantum yield of PSII electron transport (ΦPSII) in grapevine cuttings. We also demonstrate that such a decrease in ΦPSII was associated to the alteration of q P and q NP . A decrease in q P induced by fmx indicates a higher proportion of closed PSII reaction centres, i.e., an increase in the proportion of the reduced state of Q A , (Genty et al., 1989), which probably generates a decrease in the proportion of available excitation energy used for photochemistry (Havaux et al., 1991). Concomitant with the reduction of Fv/Fm, we observed that q NP decreased drastically with increasing fmx concentrations, suggesting that the fmx treatment does not involve non-radiative energy dissipation.
In leaves of grapevine cuttings, the capacity for CO2 assimilation decreases to almost zero after five days whatever the fmx concentration. However, after ten days of 0.5 mM fmx treatment, while there is negligible CO2 assimilatory capacity, ΦPSII remains at approximately 12% when compared to control leaves. This suggests that a certain rate of non-cyclic electron transport is required to maintain CO2 assimilation. An alternative sink for electrons, other than CO2 assimilation, would be oxygen reduction by photorespiration and/or a Mehler reaction (Brestic et al., 1995). Changes in fluorescence yield in grapevine leaves are also associated with modifications in the antenna pigments, in the efficiency of excitation trapping at the active centres of PSII, or with changes in the thylakoid membrane (Calatayud & Barreno, 2001). The in vitro application of fmx to grapevine induced, on the one hand, disorganization of internal photosynthetic membranes (Saladin et al., 2003a) and affected, on the other hand, an oxygen-evolving enhancer protein and an LHCII type III chlorophyll a/b binding protein, a major component of the light-harvesting antenna complex of PSII (Castro et al., 2005). Fmx treatment of the soil provokes leaf fluorescence damage that first occurs in the veins and then spreads throughout the mesophyll. Fmx treatment induces a depression of photosynthesis in grapevine and also involves heterogeneity of leaf photosynthesis. Such heterogeneity may be the consequence of patchy stomatal closure and/or the collapse of part of the mesophyll due to loss of turgor, associated with a low lateral CO2 diffusion capacity (Cornic & Massacci, 1996). It also results in decreases in the photosynthetic efficiency and capacity of leaves. These observations further suggest that either fmx or a by-product penetrates the plant through the roots and is thus distributed in the whole plant through the veins. These results are consistent with Castro et al. (2005), who found significant changes in the root and shoot proteome and who suggest that the herbicide could act systemically in grapevine tissues, probably via root uptake.
Conclusion
We have demonstrated a transient fmx effect on grapevine physiology, characterized by strong increases of Pn, gs and ΦPSII at 0.5 and 5 mM fmx after 45 days. The grapevine was able to partially overcome the damage caused by the herbicide (Saladin & Clément, 2005). In the vineyard, the herbicide caused mild stress (Saladin et al., 2003c, d). This may be explained by detoxification of the herbicide in the rootstock and/or low fmx uptake by the roots, due to a deeper root system or different soil adsorption characteristics (Saladin et al., 2003d). Moreover, in the vineyard fmx is applied at the end of the winter, when canes have no leaves and when the sap flow is low.
|
2018-11-18T06:00:26.843Z
|
2011-01-08T00:00:00.000
|
{
"year": 2011,
"sha1": "11b56e00c5475a65947523579e96560bac5da3e3",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.intechopen.com/citation-pdf-url/12582",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "143934e8e28f09ea9244e3f1eba694884a863330",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Chemistry"
]
}
|
233322901
|
pes2o/s2orc
|
v3-fos-license
|
Effect of filter media and hydraulic retention time on the performance of vertical constructed wetland system treating dairy farm wastewater
This study deals with dairy wastewater treatment using laboratory scale vertical flow (VF) constructed wetlands planted with Canna indica over the wetland beds for its phytoremediation capabilities. Three laboratory scale VF CWs (CW-A, CW-B and CW-C), each with an area of 0.135 m2, filled with gravel (CW-A: 20 mm; CW-B: 10 mm) or sand (CW-C) and receiving 0.04 m3 d-1 of dairy wastewater, were operated for wastewater purification. Each unit was operated at three hydraulic retention times (HRTs), i.e. 12 h, 24 h and 48 h, to assess the effect of HRT on wastewater purification. Among all units, removal rates at different HRTs fluctuated as follows: total suspended solids (TSS): 64.2–74.5%; biochemical oxygen demand (BOD): 45.3–63.1%; ammonium nitrogen (NH4–N): 29.6–56.5%; and phosphate phosphorus (PO4–P): 20.5–57.8%. An increase in HRT led to better removal of pollutants in all CWs. Moreover, the maximum removal of pollutants excluding TSS and NH4-N was achieved in CW-B at 48 h HRT, which provided the maximum removal of PO4-P (57.8%), BOD (63.1%) and chemical oxygen demand (COD, 67.4%). Increasing the size of the filter media from sand (0.25 mm) to 20 mm gravel resulted in higher removal of NH4-N from the wastewater.
time (HRT), BOD/P/N ratio, bed depth, arrangement of beds (parallel or series) and surface partition at bed surfaces [9]. In VF CW systems, filter media and plants play an important role in P removal [10]. These potent filter media, such as sand and gravel, have the capacity to bind phosphate and thus precipitate the phosphate content from wastewater. Along with this, plants such as Arundo donax absorb the readily available phosphate attached to the filter media surface [11]. The present study focusses on the removal of pollutants, i.e. NH4-N, TSS, PO4-P, BOD and COD, using laboratory scale vertical flow CW systems filled with different filter materials and

The present study was conducted at Graphic Era University, Dehradun (30.3165° N, 78.0322° E), Uttarakhand, India. Laboratory scale CW systems were built using round shaped plastic containers with a depth of 50 cm (surface area: 0.135 m2). Three such containers were labelled as

As per previous studies, different filter materials such as gravels of varied sizes, sand, etc. may be used in a CW unit [13]. Sand, being fine in size, acts as the best filter material and treats wastewater using physical, biological and chemical processes [14], but is associated with the problem of clogging. On the other hand, gravel has its own advantage of excellent nitrification potential because of greater pore space for air availability and a better adhesion surface for microbial films, which provides good mineralization of organic nitrogen and oxidation of ammonium ions [15]. This helps in maintaining aerobic conditions on the bed surfaces, which supports the pollutant removal processes. Moreover, gravel-filled systems are less susceptible to clogging during their operation. As the wastewater percolates slowly through the filter material, various physical, biological and chemical processes occur in combination, resulting in the removal of pollutants from the wastewater. Phosphorous gets attached to the sand/gravel surface, resulting in adsorption and precipitation processes in the CW unit [15]. These filter materials act like a natural home for a variety of microbes such as Bacillus, Micrococcus, Pseudomonas, etc., which contribute to organic matter degradation as well as nitrification and denitrification [16]. These filter materials serve as efficient units for removal of pollutants from wastewater (BOD, COD, NH4-N, PO4-P and fecal coliforms). To the best of our knowledge, no study has been conducted for treatment of dairy farm wastewater at shorter HRTs.
Based on previous studies related to filter materials and HRT in CWs, this research work was designed to analyze the performance of laboratory scale vertical sub-surface flow Constructed Wetland systems operated with different filter materials and HRTs, and their combined effect on removal of organic and inorganic pollutants from dairy wastewater.

were taken for the experiment. 20 mm and 10 mm gravels were filled throughout container A (CW-A) and container B (CW-B), respectively, while container C (CW-C) was filled from top to bottom with washed sand (0.25 mm). Control systems for all three CW systems were also considered for the study. The control CW systems had the same materials and other specifications as CW-A, CW-B and CW-C, but were operated at zero HRT unlike CW-A, CW-B and CW-C.
On the basis of retention time, flow rates, bed surface area and concentrations of organic matter and nitrogen, the decomposition constant coefficients (k) for wastewater treated in VF beds were calculated using the first-order equation form presented in Eq. (1), which uses the kv rate and the HRT for VF beds:

Similarly, other parameters were measured using methods such as TSS (colorimetric method), BOD (3-day incubation method), COD (reactor digestion method), NH4-N (salicylate method) and PO4-P (molybdovanadate method). The efficiency of the vertical sub-surface constructed wetland system for removal of pollutants was calculated in terms of removal rate using the following equation [5].

Average pH in the influent was recorded as 7.2 ± 0.4, which changed between 7.23 ± 0.2 in the control, 7.3 ± 0.3 to 7.5 ± 0.3 in CW-A, 7.2 ± 0.4 to 7.5 ± 0.2 in CW-B and 7.4 ± 0.3 to 7.7 ± 0.2 in the CW-C unit (Table S1). Although no significant change in pH was observed at different HRTs HRT was increased from 12 h to 24 h however, when HRT was further extended to 48 h, the NO3-N concentration showed a decrease in the effluent (Table S1). This might be due to the denitrification process which occurs at longer HRTs. It has also been reported that at longer HRT

an increase of 5.1% at 48 h (Fig. 2(b)). In CW-C (sand filled unit), PO4-P removal rates were recorded as 25.9, 44.1 and 46.1% at 12 h, 24 h and 48 h HRTs respectively (Fig. 7). Among all

al. [32], in his study, explained that P may remain bound to the media components as a result of precipitation and adsorption reactions with ions (Ca, Fe or Al) present in sand or gravels. The particle size of substrates which are suitable for P removal may vary significantly.
Our results showed a higher PO 4 -P removal as compared to a study conducted by Arias et al.
The average BOD3 decrease at the outlet amongst the three setups was most prominent at the CW-B outlet (10 mm gravel filled). The BOD3 removal rate was observed as 46.3, 60.9 and 62% in CW-A at 12 h, 24 h and 48 h HRTs respectively (Fig. 6(a)). In CW-B (10 mm), the removal rate was recorded as 48.1 and 62.2% at 12 h and 24 h HRTs, respectively, while no further change was observed at 48 h HRT (Fig. S1). In CW-C, the BOD3 removal rate was found to be

in COD removal rate when HRT was increased from 12 h to 48 h in the CW-A unit (Fig. 6(a)). In

In CW-C, the percent removal was minimum among all the three filter materials and was

The study showed good removal of pollutants in all filter materials; however, further study is recommended to validate the findings on real-scale CW systems. We recommend assessment of planted vegetation for phytoremediation potential and biofilm formation on filter media in future studies for a better understanding of pollutant removal mechanisms in CW.
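Since the removal-rate equation and Eq. (1) are not reproduced in this excerpt, the sketch below assumes their standard forms: percent removal from influent and effluent concentrations, and a first-order volumetric coefficient kv from C_out = C_in·exp(−kv·HRT); the example concentrations are hypothetical.

```python
import math

def removal_efficiency(c_in, c_out):
    """Percent removal from influent and effluent concentrations (mg/L)."""
    return 100.0 * (c_in - c_out) / c_in

def first_order_kv(c_in, c_out, hrt_days):
    """First-order decomposition coefficient (d-1), assuming C_out = C_in * exp(-kv * HRT)."""
    return math.log(c_in / c_out) / hrt_days

# hypothetical BOD concentrations at 48 h HRT (2 days)
print(removal_efficiency(180.0, 68.0))      # ≈ 62.2 %
print(first_order_kv(180.0, 68.0, 2.0))     # ≈ 0.49 d-1
```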
|
2021-04-20T15:54:01.878Z
|
2021-01-13T00:00:00.000
|
{
"year": 2021,
"sha1": "c287a9cabfdd870cb5caab4f4ae54170aba8b6a2",
"oa_license": "CCBYNC",
"oa_url": "http://www.eeer.org/upload/eer-2020-436.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3523f567e4c2ba8767faed40a0b74e8e7d930eb4",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
247148931
|
pes2o/s2orc
|
v3-fos-license
|
A miniaturized angularly stable dual‐band FSS based on convoluted structure and complementary coupling
A miniaturized dual‐band frequency selective surface (FSS) with high angular stability is proposed. The unit cells of the proposed FSS have a convoluted pattern and hexagonal contour, which are arranged into array compactly. The FSS is implemented on single substrate with two metallic layers which has a low profile. The top and bottom patterns are complementary which generate two passbands separated by a transmission zero. A high out‐of‐band suppression is obtained, and the two bands can be tuned simply and flexibly. The FSS is highly stable with respect to different incident angles and polarizations due to the rotationally symmetric structure. An equivalent circuit model is developed, and a prototype working at 2.15 GHz and 3.8 GHz is fabricated, whose dimension is only 0.05λ0 at the lower band. The simulated results agree with measured results very well.
| INTRODUCTION
Frequency selective surface (FSS) is composed of massive periodically arranged units, which exhibits bandpass or bandstop characteristic to the incident electromagnetic wave. The performances of FSS depend not only on the frequency, but also on the incident angle and polarization. As an excellent spatial filter, FSS has been widely used in radomes, antenna reflectors, absorbers, RF shields and so on. FSS with multiband performances is highly desirable in modern multi-functional communication systems. Furthermore, practical implementations demand FSS with miniaturized unit so that large number of unit cells can be contained in a limited space.
Several approaches have been proposed to design multiband miniaturized FSS. A multiband FSS based on multiperiodicity combined elements which could be regarded as a fractal was presented in Reference 1. In Reference 2, a new method based on the perturbations of a single-band element was proposed to design multiband FSS providing wide range of band ratio. Multilayer structures were utilized to implement dual-band or tri-band FSS in References 3-6, these designs realized multiple transmission poles and zeros which possess highlyselective feature, however their structures were complicated. In Reference 7, a three-dimensional FSS was proposed to realize two flat passbands with a sharp rejection. Reference 8 proposed a simple design technique for multi-stopband FSS based on multi-resonator structure. Composite element which provides multiple current paths was used to create dual-band or tri-band FSSs in References 9-14, these elements had a convoluted structure so that closely spaced resonances and miniaturization characteristics were obtained, in addition, they also showed good angular and polarization stability. In Reference 15, the concept of complementary FSS stemmed from Babinet's principle was proposed, which took advantage of the interaction between complementary layers to generate two passbands. Based on this concept, several dual-band FSSs were proposed in References 16-18, in which lumped components loading 16 or convoluted structure 17,18 were employed to obtain miniaturization.
In this article, a miniaturized dual-passband FSS based on complementary structure is proposed. The unit cell of the proposed FSS has a convoluted pattern and hexagonal contour, which can be arranged into array compactly. Due to the rotationally symmetric structure, the FSS shows good angular and polarization stability. Since the two passbands are separated by a transmission zero, the proposed FSS has high out-of-band suppression. Furthermore, the proposed FSS is constructed on a single substrate layer and the two passbands can be tuned independently, which make it easy to design and implement. FSS geometry and equivalent circuit are discussed in Section 2. Section 3 presents simulation results and discussion. Experimental verifications are shown in Section 4, and conclusions are drawn in Section 5.
| FSS GEOMETRY AND EQUIVALENT CIRCUIT
The geometry of the proposed FSS is shown in Figure 1. The unit cell of the FSS is composed of two metallic layers on a single substrate; the top pattern shown in Figure 1A is complementary to the bottom pattern shown in Figure 1B, which means the conducting part and aperture part on the top and bottom layers are complementary. The two complementary layers couple together and generate two passbands separated by a transmission null. The rotationally symmetric pattern forms a hexagonal contour of the unit, and the units are arranged obliquely as shown in Figure 1E to achieve compactness of the whole FSS. The evolution process of the unit cell is shown in Figure 2. The unit cell evolves from three pairs of conventional dipoles. Next, the end of each arm is bent to increase the length of the dipole, and then the dipole convolutes inward continually to form a triangular spiral structure. This convoluted structure extends the length of the metallic strips and the slots between them, which in turn increases the equivalent inductance and capacitance so that miniaturization is achieved. In addition, the distribution of the three pairs of convoluted dipoles is rotationally symmetric and has a hexagonal contour, which allows the units to be arranged compactly to avoid grating lobes. The rotationally symmetric structure, miniaturized unit cell and compact array ensure the angular and polarization stability of the proposed FSS, which enables the FSS to work at large incident angles. Figure 3 shows the equivalent circuit of the proposed FSS, which consists of two parallel resonant networks representing the top and bottom pattern, respectively. The right network (L3, C2, C3) is the dual of the left network (L1, L2, C1) due to the complementarity of the proposed FSS. In the left resonant network, L1 represents the inductance of the metallic strip and C1 represents the capacitance of the slot. A series inductor L2 is introduced to account for the mutual inductance between adjacent unit cells. 18 The dielectric substrate can be modeled as a transmission line, which has been ignored in the equivalent circuit since the thickness of the substrate is very small compared to the wavelength. An FSS working at 2.15 GHz and 3.8 GHz is designed in CST Studio Suite, and the geometric parameters are shown in Table 1. The equivalent circuit is simulated in ADS, and the values of the inductors and capacitors extracted by curve fitting are also shown in Table 1. S-parameters of the FSS simulated by CST and by the equivalent circuit model are shown in Figure 4. It can be seen that the two passbands are located at 2.15 GHz and 3.8 GHz, respectively, and there is a distinct transmission null at 2.8 GHz whose transmission coefficient is less than −60 dB. Thus two highly selective passbands are obtained, and the results of the equivalent circuit model are consistent with the full-wave simulation. To understand the characteristics of the proposed FSS in more detail, the effects of some key parameters on the transmission coefficient are investigated. Figure 6A shows the transmission coefficient varying with the number of convolution turns N. When N increases, the equivalent inductance and capacitance also increase, which leads to lower resonant frequencies. The effects of the metallic strip width w and slot width s are shown in Figure 6B and C, respectively. Since the equivalent inductance decreases with increasing w and the equivalent capacitance decreases with increasing s, the two resonant frequencies increase with w and s, and the upper band increases faster.
As pointed out in Reference 17, the coupling between the top layer and the bottom layer can be adjusted by a layer offset, so that the resonant frequencies can be tuned. An offset o is introduced on the bottom pattern along the x-axis as shown in Figure 1D, and its effects on the transmission coefficient are shown in Figure 6D. It is worth noting that the upper resonant frequency decreases from 3.80 GHz to 3.29 GHz when the offset varies from 0 mm to 1.2 mm, while the lower resonant frequency remains almost constant. So this parameter can tune the upper band independently without affecting the lower band. Furthermore, since the top and bottom layers interact strongly at the passband, the two passbands can be brought closer or moved further apart by changing the separation distance of the top and bottom layers (substrate thickness).
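As a rough illustration of how extracted circuit elements relate to the passband positions, a simple parallel LC resonance can be evaluated as below; the L and C values are hypothetical (Table 1 is not reproduced here), and the full circuit of Figure 3 contains additional elements that shift the poles.

```python
import math

def resonant_frequency(l_henry, c_farad):
    # f0 = 1 / (2*pi*sqrt(L*C)) for a simple parallel LC resonator
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

f0 = resonant_frequency(12e-9, 0.45e-12)   # 12 nH and 0.45 pF, illustrative values only
print(f"f0 ≈ {f0 / 1e9:.2f} GHz")          # lands near the lower passband
```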
| SIMULATED RESULTS AND DISCUSSION
From above discussion, it can be seen that the proposed dual-band FSS can be tuned flexibly and
| EXPERIMENTAL RESULTS
A prototype of the proposed FSS containing 30 × 30 unit cells with a dimension of 220 mm × 200 mm is fabricated on the Rogers 4003 dielectric substrate whose dielectric constant is 3.55, loss tangent is 0.0027, and thickness is 0.508 mm. Geometric parameters of the FSS are set as the values in Table 1. Figure 7A shows the schematic diagram of transmission and reflection measurement. The FSS is fixed on a pyramidal absorber screen to minimize diffractions from the edge and is measured using a pair of horn antennas and a Keysight N5235B vector network analyzer. The distance between the horn antenna and the FSS is 1 m to ensure far field condition and plane wave incidence. Fabricated prototype and the measurement setup are shown in Figure 7B. Figure 8 shows the simulated and measured transmission coefficients of the proposed FSS under normal incidence. The FSS exhibits two passbands at 2.15 GHz and 3.8 GHz, and the measured results agree well with the simulated results. The measured insertion losses at the two resonant frequencies are 1.9 dB and 2.8 dB, respectively, which are attributed to dielectric loss and measurement error. Figure 9 presents the measured transmission and reflection coefficients of the FSS under different incident angles up to 60° for TE and TM waves. It can be observed that the center frequencies of the two passbands do not change with incident angle and polarization, which verifies the high stability of the proposed FSS due to its miniaturized and rotationally symmetric structure. It should be mentioned that the maximum incident angle is set to be 80° in the simulation (as shown in Figure 5), however in the experimental measurement, the maximum incident angle is 60° since the incident wave will be blocked by the pyramidal absorber when the incident angle exceeds 60°. Table 2 shows the comparisons between the proposed FSS and previously reported miniaturized dual-band FSS, in which λ0 refers to the wavelength of the lower resonant frequency. The comparison clearly confirms that the FSS proposed in this study has outstanding performance with respect to the unit size miniaturization and angular stability.
| CONCLUSION
In this article, a convoluted miniaturized FSS based on a complementary structure is proposed to achieve dual-passband, highly selective performance. The FSS is implemented on a single substrate with two metallic layers, giving it a low profile and a simple structure. Due to the rotationally symmetric pattern, the FSS maintains good stability under different incident angles up to 80° for TE and TM waves. The two passbands can be tuned flexibly by adjusting the number of convolution turns, the widths of the metallic strip and slot, and the layer offset. An accurate equivalent circuit model is developed to investigate the mechanism of the FSS. A prototype working at 2.15 GHz and 3.8 GHz is fabricated, whose dimension is only 0.05λ0 at the lower band. The simulated and measured results are in good agreement.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2022-02-28T16:15:17.970Z
|
2022-02-25T00:00:00.000
|
{
"year": 2022,
"sha1": "4cc5a50ce0fa35615dd9a7a9876cef5f6d6de2a9",
"oa_license": "CCBY",
"oa_url": "https://zenodo.org/record/6340130/files/A%20miniaturized%20angularly%20stable%20dual-band%20FSS%20based%20on%20convoluted%20structure%20and%20complementary%20coupling.pdf",
"oa_status": "GREEN",
"pdf_src": "Wiley",
"pdf_hash": "2acb1a4f1d4ae95baa3a235e3d38dfb056b6878e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
}
|
233979320
|
pes2o/s2orc
|
v3-fos-license
|
A Multidimensional Evaluation Approach for the Natural Parks Design
: The design of a natural park is generated by the need to protect and organize, for conservation and/or for balanced growth, parts of the territory that are of particular interest for the quality of the natural and historical–cultural heritage. The necessary tools to support the decision-making process in the design of a natural park are financial and economic evaluations, which intervene in three successive steps: in the definition of the protection and enhancement levels of the park areas; in the choice of the interventions to be implemented for the realization of these levels of protection and enhancement; and in determining and verifying the economic and financial results obtainable from the project execution. This contribution deals with aspects and issues relating to the economic and financial evaluation of natural park projects. In particular, an application of the "Complex Social Value" to a concrete case of environmental design is developed on the basis of the elements that can be deduced from a feasibility study of a natural park: the levels of protection and enhancement of the homogeneous areas of the natural park are preliminarily defined, and the choice of the design alternative to be implemented is therefore rationalized with multicriteria analysis.
Introduction
The design of a natural park consists in the conception of a territorial system (park system) whose compositional and structural characteristics derive from a careful territory examination and its subdivision into homogeneous areas that constitute the park "subsystems". Within the individual areas, the activities are established on the basis of a summary judgment between the vulnerability degree and the social utility degree of the existing resources and emergencies.
The determination of the vulnerability degree involves multidisciplinary skills, which obviously depend on the nature and characteristics of the resources under study. The determination of the social utility degree is a specifically evaluative problem and can be summed up in the value that the community attributes to resources.
Since these are goods of a purely qualitative nature, not reproducible, belonging to the community and, therefore, by definition not exchangeable and capable of presenting values independent of use, the criterion of "complex social value" can be used for their valuation, corresponding to the "total economic value", grouping in its composition the preferences of all the subjects directly and indirectly involved in the formulation of the value judgment [1].
The complex social value synthesizes both economic needs and those that cannot be connected with objectives of pure efficiency. Therefore, it carries out a cognitive function aimed at revealing the multiple social expectations regarding increasingly scarce natural resources, whose use program, in the processes of territorial redevelopment or development, is aimed at a balanced qualitative-quantitative growth of the socio-economic and ecological-environmental components [2,3].
The search for the optimal degree of integration between various modes of growth cannot be separated from a composite evaluation set that can be formulated with a view to subjecting each sub-system to types of growth selected on "merit values" attributed by the community to existing resources.
Generally, it can be accepted that activities compatible with the objectives of exclusive protection must be localized in the subsystems that have a "high" complex social value. Any exceptions regarding the possibility of providing moderate forms of transformation of the environmental components can only be considered following a judgment of compatibility between the impacts originating from the transformation and the qualitative characteristics of the resources. Similarly, mixed objectives of protection and enhancement and objectives of mere enhancement may be pursued in areas that have a complex social value, respectively, "medium" and "low".
However, the classification of differentiated levels of value can only be carried out after having explained the various aspects of the complex social value of the resources and emergencies present in the park areas. This value, as is known, is given by the sum of two components: the "use value" and the "use-independent value". The use value is connected to the use of a certain resource and arises from the flow of collective utility consequent to the use, even indirect and current (vicarious value) or future (option value, bequest value), of the resource itself. The value independent of use, in turn, is represented by the so-called existence value, which depends solely on the fact that the resource "exists", regardless of its use/enjoyment of a direct and indirect type [1][2][3][4].
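A toy numerical illustration of this additive composition is given below; the monetary figures are purely illustrative and only show how the use-related components and the existence value combine into the complex social value.

```python
components = {
    "direct_use": 120.0,     # direct use/enjoyment of the resource
    "indirect_use": 45.0,    # vicarious value
    "option": 30.0,          # possible future use
    "bequest": 25.0,         # value preserved for future generations
    "existence": 60.0,       # use-independent value
}

use_value = sum(v for k, v in components.items() if k != "existence")
complex_social_value = use_value + components["existence"]
print(use_value, complex_social_value)   # 220.0 280.0
```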
To express in economic terms the use value and the use-independent value of a resource, it is possible to consider various valuation methods.
In particular, the value associated with the direct use of a natural resource can be derived from a demand curve constructed through measures of willingness to pay or accept (direct methods), or by using "proxy" variables of value (indirect methods). The "willingness to pay" can also be used to express in economic terms the indirect use value and the rate of the complex social value independent by resource use. The valuation procedure generally applied is the Contingent Valuation (CV), which makes it possible to prefigure a hypothetical market for the asset, from which the value is deducted. The evaluation is thus carried out directly, without resorting to parameters that act as a proxy for the unknown value [5].
However, with economic evaluations only, it is not possible to express the various aspects of the complex social value of a resource. This is because some qualitative environmental components (i.e. the landscape, aesthetic, cultural or ecological) generally escape a monetary representation. It, therefore, becomes necessary, in order to correctly estimate the complex social value, to develop a disaggregated qualitative-quantitative evaluation with respect to a certain number of criteria.
The significance of the complex social value will obviously depend on the congruity of the individual assessments and the ability to compose them according to a multidimensional profile.
In order to define the protection levels and enhancement to be achieved in the park areas, it will then be necessary to identify a priority ranking among the sub-systems that takes into account the complex social value of each one. In this way, it will be possible to compare the different values attributed to the resources and emergencies present in the individual sub-systems, and subsequently to establish the methods of environmental protection to be implemented [2][3][4].
In the framework outlined, it must be said that among the major weaknesses of sustainable development is that it is not always possible to adequately measure the level of sustainability achieved by a particular activity or government/institution. There is a lack of knowledge on which environmental issues should be incorporated into the economic calculation and on how sustainability can be measured.
Sustainability is a multidimensional concept: the economic, social and environmental aspects must be considered simultaneously. This can be adequately considered through the complex social value, which is expressed through a set of multidimensional indicators. For this reason, the research question of the study consists of an attempt to combine the aspects of sustainability with assessments from the point of view of the community (economic, social, environmental), putting them in relation to each other.
Literature Review
With reference to decision-making processes, valuation systems can assume different meanings especially if they are related to spatial planning.
The issues with value in planning were examined by Campbell [6], who analyzed how planners can make ethical or qualitative judgements based on a critical understanding of the decision context considered.
Planning issues require evaluation methods based on complex value-focused thinking: this helps to articulate values, identify decision opportunities and create alternatives [7].
The "complex social value" of a context and its resources was considered by Fusco Girard [8]. Further, this value expresses a system of immaterial relations, its specific character and identity [9].
Another concept of value complex was formulated by Zeleny [10], conceiving it as a metacriterion: an expression of a cognitive equilibrium integrated and rooted in specific contexts [11].
Complex systems can reflect only a specific subset of possible representations [12,13]; thus, the public-decision problems must be used to choose a definition of "value" under an operational profile, although different policy goals may specify different aspects or definitions of value. Additionally, multiple values correspond to as many multiple forms of knowledge [14,15].
With regard to the importance of "social" decision making, the ways in which values, preferences and alternative knowledge are derived from interactions with the social environment, Larner and Le Heron highlighted the context of the decision-making environment, both in spatial and scalar terms [16].
Decisions based on complex values enable a better focus on the decision problem structure [17], where complex issues can also be complicated, not structured, difficult to manage or ambiguous [18][19][20][21].
Again, complex values are connected to the context and the decision framework, and they take shape through physical, environmental, social and economic environments [22].
With regard to the evaluation in planning, Alexander [23] focuses on the concept of planning-evaluation proposed by Lichfield [24]: evaluation is conceived as closely embedded in planning, evolving with it. Then, evaluation method evolution reflects the planning process interaction with the diversity and complexity of knowledge, favoring new approaches and methods focused on complex multimethod evaluation systems [23,25,26].
Among the applications of complex social value for the enhancement of ecosystems, the most representative studies are those of Sherrouse et al. [27], Fulgencio [28] and Fagerholm et al. [29]. In particular, Sherrouse et al. [27] developed a tool to assess, map and quantify nonmarket values perceived by various groups of ecosystem stakeholders; this has two main objectives: to evaluate how effectively the developed value index reproduces results from more common statistical methods of social-survey data analysis, and to examine how the spatial results provide additional information that could be used by stakeholders to better understand more complex relationships among stakeholder values, attitudes and preferences. Fulgencio [28] tries to clarify the understanding of social value in an innovation ecosystem, as a tool to aid science park orchestrators or managers in managing the expectations of social and nonsocial actors. Fagerholm et al. [29] synthesized the existing analysis methods applied to data collected through participatory mapping approaches, with the aim of guiding both novice and experienced practitioners in the field of participatory mapping.
However, in the examined studies, high attention should be paid to the fact that an assessment based solely on economic or social impacts does not always guarantee a fair integration of multidimensional values in the decision-making process, because it does not it take into account the many temporal phases of spatial context transformation.
Materials and Methods
The process of determining and evaluating strategic choices stimulates the search for increasingly objective systems and/or selection criteria that are not influenced by endogenous factors. This problem is particularly relevant if investment projects need to have public funding.
The constraints deriving from economic, social and environmental issues often contrast with the design needs, making choices influenced by value judgments indispensable. It is precisely in the need to make choices, and in the opportunity to support them scientifically, that statistical tools come into play, including multicriteria analysis.
Despite the great variety of multidimensional evaluation methods, they all have two elements in common: the existence of multiple evaluation criteria, often conflicting, for which there are different units of measurement; the possibility of a multidisciplinary approach [3,4].
A classification of multidimensional methods enables their subdivision into discrete multicriteria methods and continuous multiobjective methods.
Continuous methods can include an infinite number of choice possibilities (they concern the identification of the best choice within an infinite set of alternatives, given the pre-established constraints), while discrete methods take into consideration a finite and explicit number of feasible decision alternatives (actions, plans, interventions or projects that are alternative to each other). The latter, therefore, is better suited to be used downstream of evaluations when it is a question of comparing a finite number of opposing "alternatives".
The variety of tools offered by multicriteria analysis includes techniques regulated by simple algorithms (dominance analysis, for example) and techniques that use more complex algorithms, among which the most frequently used are the Concordance Analysis and the Analytical Hierarchy Process (AHP). Other methods are the Electre method, the Evamix method and the Topsis method [30][31][32][33][34][35][36][37][38][39].
Giaoutzi and Nijkamp [35] gave, through an equilateral triangle diagram, a definition of sustainable development in which three dimensions are combined: economic, environmental and social. According to this triangle, sustainable development can be seen as a combination of the position of the economist, the opinion of the sociologist and the attitude of the environmentalist. Making choices will, therefore, mean recognizing and accepting priorities and through them favoring one position over another (establishing criteria).
In application practice, among the most used multicriteria evaluation procedures is the qualitative multicriteria analysis developed by Nijkamp [36][37][38][39], which is very useful, especially in the presence of little information on the effects of projects. This procedure consists in identifying classes of importance and effectiveness, then assigning preference scores and calculating how many times a given design alternative falls into a certain importance/effectiveness class. On the basis of the index found, a table of combined frequencies is constructed, in which each element indicates how many times a design alternative proves to be more or less effective and important. Although considered particularly easy to use, the Nijkamp method has the limit of establishing whether one project is better than another, but not to what extent, like any other method of qualitative evaluation.
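As an illustration of the frequency-table idea behind this qualitative procedure, the following Python sketch tallies, for a set of hypothetical alternatives, how often each one falls into a given importance/effectiveness class; the scores and alternative names are invented for the example and do not come from the case study.

```python
from collections import Counter

# Hypothetical ordinal scores: for each alternative, a list of
# (importance class, effectiveness class) pairs, one pair per criterion,
# on a 1 (low) .. 3 (high) scale.
scores = {
    "Alt1": [(3, 1), (2, 2), (1, 3)],
    "Alt2": [(3, 2), (2, 2), (1, 1)],
    "Alt3": [(3, 3), (2, 1), (1, 2)],
}

def frequency_table(scores):
    """Count, per alternative, how many criteria fall into each
    (importance, effectiveness) class combination."""
    return {alt: Counter(pairs) for alt, pairs in scores.items()}

def summary_index(scores):
    """A simple combined index: sum of importance * effectiveness over criteria."""
    return {alt: sum(i * e for i, e in pairs) for alt, pairs in scores.items()}

print(frequency_table(scores))
print(sorted(summary_index(scores).items(), key=lambda kv: -kv[1]))
```

As noted above, such a qualitative index can only say that one alternative is better than another, not by how much.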
Application of the Complex Social Value: Research Steps
After a preliminary overview of the territorial context of interest, the research phases can be summarized as follows:
1. Subdivision of the park area into homogeneous territorial zones (sub-systems) according to morphological, utilization and anthropization characteristics;
2. Classification of the homogeneous areas according to their complex social value;
3. Definition of the activities to be started for the constitution of the natural park on the basis of the classification referred to in point 2;
4. Identification of design alternatives;
5. Determination of the preferability order for the design alternatives.
Territorial Context
The system of the Picentini Mountains extends from the province of Avellino to that of Salerno, in Campania. It is bordered to the west by the Irno river valley, to the east by the Alto Sele valley, to the south by the plain of Battipaglia and to the north by the Ofanto river and the route of the ancient Via Appia. It is, therefore, placed between the "Neapolitan conurbation"-which is a dense urban and semi-urban agglomeration that extends continuously on the coastal strip between Cuma and the west of Naples, and Eboli and the south of Salerno-and the inland areas of Alta Irpinia. The Picentini Mountains include, in a landscape continuum of particular environmental interest, a set of reliefs and valley bottoms with evident and accentuated characteristics of morphological and landscape unity. The peaks of Monte Mai, Polveracchio, Calvello and Accelica are crowned by the higher reliefs, Mount Terminio (1783 m) and Mount Cervialto (1809 m). The system is rich in tall forests and spring waters, which give rise to the Sele, Ofanto, Calore, Sabato, Picentino and Tusciano rivers. The waters are partly used by hydroelectric plants and partly destined for drinking purposes in the Campania and Puglia regions. Of the entire system, the park area extends over approximately 14,000 hectares and is roughly delimited: to the east, by the administrative border of the "Valle dell'Irno" mountain community, coinciding with the ridge limit that determines the natural and structural division of the territory into two sides (western and eastern); to the north, west and partly to the south, by the Salerno-Avellino highway and railway, which mark the border strip characterized by strong anthropization; to the south, for the portion not delimited by the highway, by the line of the road connecting the smaller towns. The land surrounding the inhabited centers is covered by vineyards, chestnut groves and mostly tree-lined arable land, managed directly by the farmers. The zootechnical activity, made up of sheep, goat and cattle breeding, is fragmented into small family farms and is constantly shrinking. The industrial initiatives are mainly located along the southern and western axes of the area, with tanneries in Solofra, spinning mills and small foundries in the valley areas. A good source of income is given by the production of wood and the small industries connected to it.
Homogeneous Territorial Zones
For the case study, the information relating to a feasibility study prepared for the enhancement of the park was used as a reference. In this reference, the park area is already divided into homogeneous zones. The subdivision was made on the basis of the elements collected with the specialist investigations carried out on the main environmental components of the park area, including the historical-cultural and anthropic environment.
The homogeneous areas identified have the following denominations and characteristics:
• Zone A-Area of natural environment. It includes the cacuminal belt of the mountains above the chestnut area and the areas of difficult access between mounts, where environmental resources are in almost optimal conditions.
• Zone B-Area of semi-natural environment. It includes the area of influence of a water basin. This area constitutes a defined and limited ecosystem, and it is characterized by an environmental balance determined by careful resource use. It falls within the belt of the mainly western mountain slope, of pre-eminent landscape interest due to the scenery effect it produces on the inhabited centers located along the foothills.
• Zone C-Area of agro-forestry and agricultural environment. It includes the flat areas that extend around the inhabited centers, as well as the foothills and the valley floors, which, due to orographic characteristics, allow agricultural land use.
• Zone D-Area of urban environment. It includes the inhabited centers falling within the natural park perimeter.
The definition of these zones is consistent with the indications provided by the European Community for the harmonization, at the European level, of the zoning system for protected areas.
The percentage distribution of the 14,000 hectares that make up the park area among the homogeneous zones was also identified.
Protection Levels: Classification of Homogeneous Areas
The complex social value enables the measure of the protection degree to be achieved in the individual park areas. It must, therefore, be determined for each homogeneous zone. The comparison of the results leads to the ranking and classification of the zones: zones with the highest complex social value are those in which it is preferable not to carry out any transformation; for areas with a lower complex social value, modifications of the use characters may be envisaged.
Multicriteria qualitative-quantitative analysis was applied to estimate the complex social value for the single homogeneous zones.
The assessment summary, with respect to the selected criteria, is shown in Table 5, whose rows report the complex social values for the four homogeneous zones into which the study area was divided. The value judgments contained in Table 5 are expressed through ordinal numbers ranging from 1 to 4, with 1 and 4 equal to the minimum and maximum values, respectively. The attribution of the values corresponding to the criteria from C2 to C6 did not cause difficulties, as it was possible to rely on the results of specialist surveys on the main environmental components of the park area. The definition of the value relating to criterion C1 was more complicated due to the scarce information available; this information was therefore integrated with data from a survey carried out through interviews.
Additionally, the economic assessments are expressed on an ordinal scale, since the purpose of the multicriteria analysis is to define an ordering of the areas, i.e., a ranking according to their complex social value. To define the priority order of the homogeneous zones, it is necessary to assign a "weight" to each of the considered evaluation criteria. Each combination of weights corresponds to a different ordering of the zones. It is, therefore, a question of identifying an overall ranking that takes into account all the possible weighting systems of the criteria.
The overall ordering of the park areas for the possible weight combinations was obtained with the regime analysis developed by Nijkamp and Hinloopen [36,37]. In fact, the regime analysis makes it possible to determine the overall priority of the park areas even if only ordinal information is available. Table 6 shows the ordering of the zones according to the attribution (w k) of different weights to the six evaluation criteria considered. The weights are expressed using ordinal numeric symbols. Table 6 highlights that zone A was the one that presented, in almost all cases, the highest preferability and, therefore, the highest complex social value. The preference for zone B was slightly lower, followed by zone D and, finally, zone C, which in all cases had the lowest preference. This priority ranking did not change when we set (w1 = w2 = w3 = w4) > (w5 = w6), nor when w1 = w2 = w3 = w4 = w5 = w6, i.e., when assuming that the different criteria have the same weight for the purposes of the overall assessment of each area. The priority order of the park areas, on the other hand, varied substantially when the highest importance was attributed to criterion C5. In this last case, the area that presented the greatest preferability was D. This was also confirmed when we assumed (w5 = w6) > (w1 = w2 = w3 = w4), i.e., when greater importance was assigned to the criteria that expressed the "extrinsic" quality (use value) of the park areas.
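A minimal sketch of the enumeration idea just described is given below: every ordinal weighting of the six criteria is tried, each zone is scored, and the number of weightings under which a zone comes out first is recorded. This is a simplified stand-in for the regime analysis of Nijkamp and Hinloopen, not a reimplementation of it, and the zone scores are hypothetical rather than those of Table 5.

```python
from itertools import permutations

criteria = ["C1", "C2", "C3", "C4", "C5", "C6"]
# Hypothetical ordinal value judgments (1 = min, 4 = max) per zone.
zones = {
    "A": [4, 4, 4, 3, 1, 2],
    "B": [3, 3, 3, 4, 2, 3],
    "C": [1, 1, 2, 1, 3, 1],
    "D": [2, 2, 1, 2, 4, 4],
}

def best_zone(weights):
    """Zone with the highest weighted sum of ordinal criterion values."""
    totals = {z: sum(w * v for w, v in zip(weights, vals)) for z, vals in zones.items()}
    return max(totals, key=totals.get)

# Each permutation of the ranks 1..6 over the criteria is one ordinal weight system.
wins = {z: 0 for z in zones}
for perm in permutations(range(1, len(criteria) + 1)):
    wins[best_zone(perm)] += 1

print(sorted(wins.items(), key=lambda kv: -kv[1]))
```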
The overall priority of the homogeneous zones identified in the park area is, therefore, the following: zone A, zone B, zone D and zone C. This classification represents the ranking least "sensitive" to variations in the weighting system of the evaluation criteria. In it, zone A, "natural environment", was the one with the highest complex social value, in which it is appropriate to preserve or increase the naturalistic values, excluding any type of transformation. The complex social value of zone B, "semi-natural environment", was lower; for this zone a generalized protection must consequently be envisaged, which can result in the inhibition of activities that involve irreversible modifications of ecosystems. In zones D, "urban environment", and C, "agro-forestry and agrarian environment", which have medium and low complex social value, respectively, transformations can be introduced whose impacts are not incompatible with the qualitative components of the area (natural, historical and cultural resources).
Activities Planned for the Natural Park Establishment
The knowledge of the protection degrees assigned to the homogeneous zones of the park area allows us to establish the types of activities that can be carried out in them.
These activities are indicated in Table 7 in the form of active and passive requirements for the individual zones. Table 7. Activities that can be implemented for the establishment of the park and positive or negative prescriptions for the individual homogeneous areas (+ permitted activity, − prohibited activity).
Design Alternatives
The design alternatives for the study area were defined by taking into account indications contained in the Territorial Urban Plan of the Mountain Community "Valle dell'Irno", and proposals made by local public administrations and authorities, as well as by naturalistic associations.
The types of intervention hypothesized are compatible with the activity categories that can be implemented in the homogeneous areas.
Three design alternatives were considered. Articulated into types of intervention that compose them, the design alternatives are indicated in Table 8.
Alternative 1 is the one that most closely matches the status quo. The planned interventions essentially refer to environmental protection (environmental control and restoration) and resource use. The interventions related to resource use are aimed mainly at satisfying a demand with naturalistic motivations.
Alternative 2 provides, in close connection with the protection measures necessary to safeguard the natural environments present in the park area, a set of resource enhancement interventions, aimed at encouraging the correct use of resources by visitors who express demand segments with multiple motivations (naturalistic, cultural, sporting, recreational, etc.).
Alternative 3 differs from the previous ones because it aims to increase, to a greater extent than the others, the production capacity of the park area, enhancing its potential through support structures for existing economic activities and through the promotion of new productive activities.
Order of Preferability for the Design Alternatives: Optimal Alternative Choice
A priority ranking among design alternatives must be constructed by first evaluating each alternative with respect to defined criteria and objectives. The alternative ordering is then determined with the application of specific multicriteria analysis, after assigning weights to the criteria/objectives considered.
The design alternatives defined for the park area under study were evaluated using the following criteria:
• Environmental protection criterion. This searches for the solution that minimizes the loss or compromise of resources and irreproducible emergencies. To make this criterion operational, an indicator of the consistency of environmental resources that would be destroyed or compromised by the project realization must be used.
• Ethical/social criterion. This searches for the solution that produces the highest increase in the employment level in the area gravitating around the park. The increase in the number of employees produced by the activities in the park area with the project realization should be taken as an indicator of this.
• Criterion of economic valorization of the territory. This searches for the solution that corresponds to the highest economic benefit for the community. The internal economic rate of return of the project can be used as an indicator of this.
In other words, for the park area considered, the identification of the optimal alternative must be achieved by searching for the design solution capable of minimizing the environmental cost and, at the same time, maximizing the social and economic benefits.
In the application of the above criteria, both the environmental cost and the social and economic benefit were envisaged for each alternative outlined.
The evaluation result is shown in Table 9. The latter table is an ordinal impact matrix with indexes ranging from 1 to 3 (1 is the minimum preferability; 3 is the maximum preferability). Additionally, for the definition of the overall priority of the three design alternatives, the regime analysis developed by Nijkamp and Hinloopen was applied, already used to define the ordering of the homogeneous areas identified in the park area. Table 10 shows the preferability rankings of the alternatives according to the possible ordinal systems of weighting of the criteria (w1, w2, w3). Table 10. Preferability rankings of the design alternatives taking into account the possible ordinal systems of weighting of the criteria. Alternative 3 expressed the highest preferability, and therefore the highest complex social value, in correspondence with every possible combination of weights assigned to the three project objectives. In general, Alternative 2 was less preferable, followed by Alternative 1. The latter became preferable to Alternative 2 when the greatest importance was assigned to the objective of reducing the environmental cost, i.e., when the objective of greater importance was the maximization of social benefits. In all other cases, Alternative 1 was less preferable than Alternative 2. This was also confirmed when we set w1 = w2 = w3.
The ordering representing the overall priority of the project alternatives hypothesized for the park area is, therefore: Alternative 1 equal to overall priority 1; Alternative 2 equal to overall priority 2; Alternative 3 equal to overall priority 3.
The highest overall priority was expressed by Alternative 3, which consequently constitutes the design solution capable of optimally integrating environmental protection objectives with the objectives of social and economic enhancement of the territory. Alternatives 2 and 1 are less preferable, the reciprocal position of which in the system may nevertheless undergo changes in the event that environmental and ethical/social considerations require less importance to be attributed to the objective of economic development of the territory.
Concluding Remarks
The management of sustainable development is an open and current issue. In the design of natural parks, the examination and definition of the compatibility between qualitative and quantitative growth profiles of the park system, as well as the intervention choice to be implemented to achieve the protection level and enhancement of resources, involve a necessary expansion of the evaluation framework and parameters.
A balanced and sustainable development strategy of the park area implies, in fact, the search for solutions capable of satisfying diversified needs, attributable to economic needs and to aspects more directly related to ecological and social quality. Consequently, the use of evaluation methodologies that lead above all to the identification of the different components of the value of the resources, and then to the analysis of the interdependencies between components, is indispensable in order to group them according to a multidimensional scheme, one that adequately reconnects economic, social and environmental valuations in order to arrive at an overall result.
The complex social value collects the variables of different nature that form the value of the resources and emergencies present in the area delimited for the constitution of the park. It reflects the weight of the economic, ethical-social and ecological variables of the resources being evaluated. Therefore, it understands and expresses the multiple expectations of the community with regard to the use of natural and historical-cultural resources within the planning and implementation processes of the reorganization and growth projects of the territory.
However, the application of the complex social value cannot ignore multicriteria analysis and the evaluation techniques belonging to this family, which take into account the plurality of quantitative and qualitative components that make up the value of environmental resources. These techniques also make it possible to recognize and explain, at qualitative-quantitative scales, the complex of direct and indirect impacts that can be generated by possible interventions in the economic, social and ecological contexts of reference. Unlike traditional evaluation techniques, they appear to be capable of translating into a global judgment the multidimensionality of the aspects and the interdependence of the variables on which the optimal choice of the solutions to be implemented depends.
The prospects for the use of "complex" assessments-undoubtedly linked to the development of landscape planning but, even more generally, to the implementation of planning and requalification policies of the territory-are nevertheless still conditioned by the inadequacy of methodological tools. This concerns, in particular, the processing of data and the objectification of information of a qualitative nature, as well as, above all, the problem of assessing "environmental quality". Some limitations of the approach used in this study are represented by the regulations in force in natural parks and protected areas. Each individual country can provide legislation that is sometimes confused with respect to European Community legislation or directives from worldwide coordination authorities, although in recent years there have been numerous legislative developments on the subject. National regulations can in fact be based on a purely static-conservative conception of the wooded and natural heritage, rather than on operations to safeguard and transform the soil, as well as to protect the environment. Furthermore, the regulation of protection and enhancement activities, within the individual areas to be protected, may or may not provide for the establishment of specific management authorities, which, among other things, have the task of harmonizing protection plans and actions with territorial planning guidelines. These are all issues that, very often, determine conflicting aspects between environmental protection and the economic operators involved in the processes of territorial transformation and redevelopment.
Author Contributions: V.D.G. contributed to the conceptualization and supervision; P.D.P. contributed to the formal analysis and methodology; P.M. and F.T. contributed to the investigation and supervision; F.P.D.G. contributed to the data curation, software and validation results. All authors have read and agreed to the published version of the manuscript.
|
2021-05-08T00:03:47.988Z
|
2021-02-17T00:00:00.000
|
{
"year": 2021,
"sha1": "716cc7b7e0799195f1af4cf9224dfb55df426814",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/4/1767/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6fe8ce08e9cb9a6bb98e91690bf35966b63de7bd",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
}
|
248083865
|
pes2o/s2orc
|
v3-fos-license
|
Venous blood for the analysis of acid–base status in a model of septic shock
Abstract Objective To determine the relationship between arterial and venous acid–base status in a model of septic shock. Methods Paired samples (n = 435) of arterial and femoral venous blood from 57 sheep (47 septic, 10 non‐septic) managed with protocol‐guided ventilation, sedation, parenteral fluids and inotropic support. Results The arterial‐venous difference in acid–base parameters was similar with and without sepsis. There was a consistent arterio‐venous relationship for metabolic (pH, lactate, bicarbonate, base excess), but not respiratory parameters (partial pressures of oxygen, carbon dioxide, and haemoglobin‐oxygen saturation), independent of sepsis. Conclusions Venous blood provides a reliable measure of metabolic but not respiratory disturbance.
Introduction
Assessment of a patient with sepsis includes analysis of acid-base status. This has traditionally relied on pH and blood gas measurement from arterial blood. However, arterial puncture is uncomfortable and technically more difficult than routine venous blood sampling, especially in patients with cardiovascular instability. Analysis of venous acid-base status may be a suitable alternative in the early assessment of septic patients.
Previous studies investigating the relationship between arterial and venous blood have reported a close correlation, and limits of agreement that many emergency clinicians consider acceptable. 1,2 However, few have specifically investigated the arteriovenous relationship in sepsis. 3 We sought to examine this in a controlled experimental model of septic shock.
Methods
This was a post-hoc study of data obtained from previously published studies of a septic shock model. 4 The project was approved by the institution's animal ethics committee.
Sepsis was induced in 47 sheep with intravenous Escherichia coli (10^8 organisms/kg). These animals developed an elevated temperature, increased cardiac index, reduced mean arterial pressure, hyperlactataemia, acute renal dysfunction and a requirement for noradrenaline. 4 Ten sheep were non-septic controls and remained physiologically stable.
Arterial (carotid) and venous (femoral) blood samples were collected at 0, 2, 4, 8, 12, 16, 20 and 26 h. All samples at time zero were non-septic. Partial pressures of oxygen (pO2), carbon dioxide (pCO2), pH, haemoglobin-oxygen saturation (Hb-O2) and lactate were assayed on a RAPID-Point 405 Blood-gas Analyser (Siemens, Munich, Germany). Actual bicarbonate and base excess were calculated from standard equations. 5 To account for paired and repeated measures within sheep, a linear mixed effects model was employed with specimen type (arterial, venous) as a fixed effect and random effects for study animal and measurement time. The intra-class correlation coefficient (ICC) was calculated using a variance components approach. Sepsis was then included as an interaction effect with specimen type, and P < 0.01 considered significant (Stata v.17; StataCorp LLC, College Station, TX, USA).
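A minimal sketch of the kind of analysis described above is shown below, assuming a long-format table with columns named sheep, specimen and one column per blood parameter (these names are illustrative). It fits a linear mixed model with specimen type as a fixed effect and a random intercept per sheep, and derives an ICC from the variance components; the original analysis was run in Stata and also included a random effect for measurement time, which is omitted here for brevity.

```python
import pandas as pd
import statsmodels.formula.api as smf

def arteriovenous_icc(df: pd.DataFrame, variable: str = "lactate") -> float:
    """Fit 'variable ~ specimen' with a random intercept per sheep and return
    the intra-class correlation computed from the variance components."""
    model = smf.mixedlm(f"{variable} ~ specimen", data=df, groups=df["sheep"])
    result = model.fit()
    var_sheep = float(result.cov_re.iloc[0, 0])   # between-sheep variance
    var_resid = float(result.scale)               # residual variance
    return var_sheep / (var_sheep + var_resid)

if __name__ == "__main__":
    # Expected long format: one row per sample with sheep id, specimen type and values.
    df = pd.read_csv("paired_samples.csv")        # hypothetical file name
    print(arteriovenous_icc(df, "lactate"))
```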
Results
There was a total of 435 paired samples; 312 were taken when animals were septic, and 123 non-septic. There was a significant difference between arterial and venous samples for all variables other than lactate. With the exception of pO2, the extent of arteriovenous difference did not differ in the presence of sepsis (Table 1). For any given sheep, correlation between arterial and venous samples was almost perfect for lactate (ICC >0.99), substantial for pH and base excess (ICC >0.95), moderate for bicarbonate (ICC >0.90) and poor for pCO2, pO2 and Hb-O2 (ICC <0.6; Fig. 1).
Discussion
In a controlled experimental setting, venous blood provided a reliable assessment of metabolic but not respiratory acid-base status in non-septic and septic sheep. Venous blood lactate was almost equivalent to arterial concentrations, and there was a consistent arterial-venous relationship for pH, BE and bicarbonate. These observations were independent of sepsis.
In contrast, the relationship between arterial and venous pO2, pCO2 and Hb-O2 is variable and inconsistent. This is despite controlling ventilation to maintain a fixed end-tidal CO2 and pulse Hb-O2 saturation. The poor relationship is consistent with a meta-analysis concluding venous pCO2 is an unpredictable substitute for arterial pCO2, 1 and precludes reliable analysis of the respiratory contribution to acid-base disturbances.
Strengths of this experimental study are that it was specific for sepsis, had non-septic controls, standardised timing of blood samples, replicated many features seen in the clinical setting, applied protocol-guided supports, and analysis accounted for paired and repeated measures.
Limitations include it being an experimental model of sepsis, and translation from non-human models to the clinical environment must be done with caution. The model incorporated supportive treatments, and the arterio-venous relationship may differ in the un-resuscitated state. Finally, this is a post-hoc study, and the model was not designed to specifically assess arterio-venous relationship. It does, however, ensure the principles of animal research ethics to utilise information available from non-human models of disease.
Conclusion
In an ovine model of sepsis, venous blood reliably predicts arterial pH, lactate, bicarbonate, and base excess. However, venous pCO 2 , pO 2 and Hb-O 2 has a poor relationship with arterial blood. The present study supports data from previous uncontrolled clinical studies, and suggests that venous blood can be a reliable measure of metabolic acid-base status in septic shock.
|
2022-04-12T06:22:43.607Z
|
2022-04-09T00:00:00.000
|
{
"year": 2022,
"sha1": "1a496359b7d153609b11b35d2d0ee85be66f886b",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Wiley",
"pdf_hash": "47bbf52da63f1844ca11954e9de4b37161fc3222",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
225494203
|
pes2o/s2orc
|
v3-fos-license
|
Effect of Cadmium Exposure on Zinc Levels in Normal Non-Diabetic and Diabetic Rats Induced by Streptozotocin
Abstract: Diabetic rats induced by streptozotocin and normal non-diabetic rats were exposed to cadmium sulphate in drinking water at a dose of 200 mg/L for 30 days. At the end of the exposure, the blood, kidney and pancreas of each rat were taken for the determination of cadmium and zinc. The results show a significant increase in the level of cadmium in the blood, kidney and pancreas of normal non-diabetic rats and diabetic rats compared to the control group. The cadmium concentration was significantly higher in plasma and tissues in diabetic rats compared to normal non-diabetic rats. Cadmium poisoning resulted in a significant decrease in the level of zinc in the blood and tissues of normal non-diabetic rats and diabetic rats compared to their respective controls. This study reveals that diabetic rats induced by streptozotocin are more sensitive to the toxic effects of cadmium than normal rats.
INTRODUCTION
Cadmium (Cd) is an important nephrotoxic environmental pollutant, which poses increasing risks to populations in many parts of the world; long-term exposure to Cd (7 µg cadmium/week/kg b.w.) in humans and experimental animals induces kidney toxicity (1). Cd is found naturally in small quantities in air, water and soil, and it can be released into the air when household or industrial waste, coal or oil is burned. Cd can also be released from car exhaust, metal processing industries, battery and paint manufacturing units, and waste hauling and disposal activities. Higher levels of Cd may be found in soil or water near industrial and hazardous waste sites. The annual amount of Cd discharged into the environment with industrial waste was more than 680 tons (2). Cd can enter the body through smoking tobacco, inhalation of contaminated air, and intake of Cd-containing food and water. Fruits and vegetables, especially grains, potatoes and leafy vegetables like spinach, may contain elevated levels of Cd, as they may grow in soils with high levels of Cd. Cadmium is a heavy metal that also belongs to the endocrine-disrupting chemicals, having a particular impact on the functioning of reproductive organs, including the testes, placenta (3) and ovaries (4). The mechanisms of toxicity of Cd also include induction of oxidative and endoplasmic reticulum stress, inflammatory response (5, 6), genotoxicity (7,8), and interference with essential metals (especially zinc) (9).
Interactions of Cd with Zn can take place at different stages of the use of this trace element by our organism (absorption, distribution and excretion), and this can thus affect the biological functions of Zn (10). Zn is an excellent antioxidant which prevents the synthesis of free oxygen radicals that are responsible for oxidative stress (11). Zn is involved in the stabilization of the cell membrane, the synthesis of metallothionein (MT) and the structure of superoxide dismutase (Cu/Zn SOD) (12). The objective of this study is to assess the concentration of cadmium and zinc in the blood, pancreas and kidney of normal non-diabetic rats and of diabetic rats induced by streptozotocin.
Animals and treatment
Twenty young male Wistar rats weighing between 209 and 279 g were obtained from the École Normale Supérieure d'Abidjan animal facility. These animals were housed at the Pasteur Institute animal care facility in plastic cages, and a day/night cycle was maintained (approximately 12 hours of light and 12 hours of darkness) in a ventilated animal room. The rats were acclimated for 14 days to their new environment before the treatment and had free access to sterile distilled water and sterilized standard food. All the animals were handled in accordance with the guidelines and protocols approved by the Care and Use of Animals Committee of Côte d'Ivoire. Diabetes mellitus was induced in rats, after one day of fasting, by intraperitoneal injection of a single dose of streptozotocin (STZ) (13,14). Blood glucose levels were measured from the tail vein using an AccuChek Active® (Roche, GU, Germany) glucometer before and three days after the STZ injection; rats with blood glucose levels greater than 250 mg/dL were considered diabetic and used for the experimental studies (15,16). The rats were divided into four experimental groups (control, STZ-treated, Cd-treated and Cd + STZ-treated), each made of five rats. The control and STZ-treated groups received distilled water, and the Cd- and Cd + STZ-treated groups received distilled water enriched with cadmium sulphate (CdSO4) at 200 mg/L (17,18). After 30 days, the rats were euthanized, and the blood, left kidney and pancreas of each rat were collected for the determination of cadmium and zinc.
Preparation of renal and pancreatic homogenates
The left kidney and pancreas of each rat were removed and placed in normal saline. The kidney and pancreas were homogenized with potassium phosphate buffer (0.1 M, pH 6.8) in a mortar on ice. After centrifugation at 10,000 g for 20 minutes at 4 °C, the supernatant was collected, aliquoted in Eppendorf tubes and then stored at -20 °C for the quantification of metals in the kidneys and pancreas (19).
Metal analyses (cadmium and zinc) in the renal and pancreatic homogenates
The cadmium and zinc were assayed at the Institut National Polytechnique Houphouët-Boigny by Flame Atomic Absorption Spectrometry using a Varian AA20 device, as described by Gbétoh et al. (20). The previously thawed samples were digested using a solution of hydrochloric acid (0.1 M) in specific assay tubes so that their concentration was within the calibration range. The air-acetylene flame at 3000 °C was used for the atomization of the samples. The reading wavelengths of lead and zinc were, respectively, 217 nm and 214.8 nm. The detection limit was 0.001 mg/L, i.e., 1 µg/mL. The concentration of lead and zinc was determined by means of calibration curves for each metal ion, from standard solutions with respective concentrations of 0.5, 1 and 2 mg/L. These solutions were prepared from 1000 ppm multi-standard solutions. The assay in a given sample was performed in triplicate.
Statistical analysis
The statistical analyses were carried out using the software GraphPad Prism 5 Demo. The results are presented as mean ± SEM. Student's t-test and ANOVA were used for the comparison of means. A value of p < 0.05 was regarded as significant.
Cadmium concentration in plasma, kidney and pancreas
Since the relative Cd content in the pancreas, the kidneys and the plasma was not statistically different between the male rats of the same exposure group, the data concerning the males were compiled in Figure 1. The concentration of Cd in plasma, kidney and pancreas of the STZ group was not statistically different from that of the controls. In rats of the Cd group, the concentration of Cd increased highly significantly in the plasma, the pancreas and the kidney (p < 0.01). In rats of the STZ + Cd group, the concentration of Cd in the plasma and the kidney was significantly higher (p < 0.05) than in the Cd group, while the concentration of Cd in the pancreas was highly significantly higher (p < 0.01) compared to the control group.
Effect of cadmium on zinc levels in plasma, kidney and pancreas
FIGURE 1: Cadmium concentration in rat blood, pancreas, and kidneys. Values are presented as means and S.D. Statistically significant differences compared to the control group are indicated by * and, compared to the Cd group, by #; *,# p < 0.05; **,## p < 0.01; ***,### p < 0.001.
Since the relative zinc content in the pancreas, the kidneys and the plasma was not statistically different between the male rats of the same exposure group, the data concerning the males were compiled in Figure 2.
The non-diabetic rats contaminated with cadmium (group Cd) recorded a highly significant decrease in the level of zinc in the plasma, the pancreas and the kidney (p< 0.01) compared to the control groups.
In diabetic rats drinking water enriched with cadmium sulphate (group STZ + Cd), cadmium caused a significant decrease in zinc levels in the plasma (p < 0.05), a very highly significant decrease in the pancreas (p < 0.001) and a highly significant decrease in the kidney (p < 0.01) compared to the diabetic control rats (STZ group).
DISCUSSION
In this study, rats with streptozotocin-induced diabetes that drank water enriched with cadmium sulphate for 30 days had higher cadmium concentrations in plasma, pancreas and kidney than non-diabetic rats contaminated with cadmium. This significant increase in Cd concentration is due to an increase in water consumption and a reduction in the excretion of cadmium in the urine (21). In addition, cadmium concentrations in the kidneys of the experimental rats were greater than those in the pancreas. This is explained by the fact that the kidney is the main organ for the accumulation of cadmium in animals (22,23).
Cadmium has caused a significant decrease in the concentration of zinc in the blood and pancreas in non-diabetic and diabetic animals. Indeed, in biological systems, Interactions of Cd with Zn can take place at different stages of the use of this trace element by our organism (absorption, distribution and excretion) and this can thus affect the functions of Zn (10).
In biological systems Cd and Zn are linked to macromolecules, primarily through sulphur (S), oxygen (O) and nitrogen (N), and interact readily with S-, O- and N-donors. They bind preferentially to the same proteins: albumin in the bloodstream and metallothionein (Mt) and other proteins in tissues. Although both metals have a high affinity for biological structures (proteins, enzymes) containing -SH (sulphydryl) groups, the affinity of Cd for S-ligands as well as for N-donors is greater than that of Zn (23,24). Thus Cd2+ and Zn2+ ions can compete for uptake into various cells and for binding to intracellular sites, and Cd may displace Zn in a number of biological processes (25,26). In this way, one of the metals can influence the uptake and action of the other, depending on their levels. The mechanisms of the interactions are widely debated, being competitive (25,26) or non-competitive (25), depending on the experimental model. Several studies have suggested that interactions between Cd and Zn in the organism result to a high degree from the affinity of both metals for Mt and their ability to induce its synthesis. They can induce Mt synthesis in various tissues, especially in the intestine, liver and kidney (27,28). Cd is about eight times more potent than Zn in increasing hepatic Mt concentration (29).
CONCLUSION
Diabetic rats induced by streptozotocin are more sensitive to the toxic effects of cadmium than normal rats.
|
2020-10-28T18:14:24.549Z
|
2020-08-11T00:00:00.000
|
{
"year": 2020,
"sha1": "9b7a143c93728ac4bf81074f5abf43ebe91622cb",
"oa_license": null,
"oa_url": "http://jmbas.in/index.php/jmbas/article/download/236/294",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "836c7a1a8183d9f0f7e118865265ab720d87a9fb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
17872067
|
pes2o/s2orc
|
v3-fos-license
|
An Importance Sampling Scheme on Dual Factor Graphs. II. Models with Strong Couplings
We consider the problem of estimating the partition function of the two-dimensional ferromagnetic Ising and Potts models in an external magnetic field. The estimation is done via importance sampling in the dual of the Forney factor graph representing the models. We present importance sampling schemes that can efficiently compute an estimate of the partition function in a wide range of model parameters. Emphasis is on models in which a subset of the coupling parameters is strong.
I. INTRODUCTION
We consider the problem of computing the partition function of the finite-size two-dimensional (2D) ferromagnetic Ising model in an external magnetic field. Applying factor graph duality to address the problem has been investigated in [1]- [3]. It was demonstrated in [1] that Monte Carlo methods based on the dual factor graph work very well for the Ising model at low temperature. In contrast, Monte Carlo methods in the original graph, suffer from critical slowing down at low temperature [4]. Monte Carlo methods in the dual factor graph were also proposed in [1] to estimate the partition function of 2D Ising models in the absence of an external field. In [2], an importance sampling scheme was proposed on the dual factor graph to compute the partition function of the 2D Ising model and the 2D Potts model, when the models are under the influence of an external magnetic field. The importance sampling scheme of [2] was specifically designed for models that are in a strong external field.
In the present paper, we continue this investigation to extend the results of [1], [2] to models with a mixture of strong and weak coupling parameters, and show that in order to have efficient Monte Carlo methods in the dual factor graph, not all the coupling parameters need to be strong (corresponding to models at low temperature), but only a restricted subset of them. As in [2], the proposed importance sampling schemes operate in the dual of the Forney factor graph representing the model. Our numerical results show that the schemes can efficiently estimate the partition function in a wide range of model parameters.
In our schemes, samples are drawn in the dual domain, therefore, as was pointed out in [2], the proposed schemes do not suggest a direct method to draw samples according to the Boltzmann distribution. For more details, see [5, Section II.2], [6], [7].
The paper is organized as follows. In Section II, we review the Ising model and its graphical model representation in terms of Forney factor graphs. Section III discusses dual Forney factor graphs and the normal factor graph duality theorem.
Sections II and III mainly follow the introductory sections of [2]. Section IV describes the proposed importance sampling schemes and the corresponding algorithms. Generalizations to the q-state Potts model are briefly discussed in Section V. Numerical experiments are reported in Section VI.
II. THE ISING MODEL IN AN EXTERNAL MAGNETIC FIELD
Let X 1 , X 2 , . . . , X N be a collection of discrete random variables arranged on the sites of a 2D lattice, as illustrated in Fig. 1, where interactions are restricted to adjacent (nearestneighbor) variables. Suppose each random variable takes on values in some finite alphabet X . Let x i represent a possible realization of X i , x stand for a configuration (x 1 , x 2 , . . . , x N ), and let X stand for (X 1 , X 2 , . . . , X N ).
In a 2D Ising model (a.k.a. Lenz-Ising model, see [8] for an introduction and [9] for a historical review), $\mathcal{X} = \{0, 1\}$ and the energy of a configuration x is given by the Hamiltonian [10]
$$H(x) = -\sum_{(k,\ell)\in\mathcal{B}} J_{k,\ell}\,[x_k = x_\ell] \;-\; \sum_{m=1}^{N} H_m\,[x_m = 1] \qquad (1)$$
where $\mathcal{B}$ contains all the pairs (bonds) $(k, \ell)$ with non-zero interactions, and $[\cdot]$ denotes the Iverson bracket [11], which evaluates to 1 if the condition in the bracket is satisfied and to 0 otherwise.
The real coupling parameter $J_{k,\ell}$ controls the strength of the interaction between adjacent variables $(x_k, x_\ell)$. The real parameter $H_m$ corresponds to the presence of an external magnetic field and controls the strength of the interaction between $X_m$ and the field.
In this paper, the focus is on ferromagnetic Ising models characterized by J k, > 0, for each (k, ) ∈ B. In ferromagnetic Ising models, configurations in which adjacent variables take the same values, have low energy levels.
In thermal equilibrium, the probability that the model is in configuration x is given by the Boltzmann distribution
$$p(x) = \frac{e^{-\beta H(x)}}{Z}. \qquad (2)$$
Here, Z is the partition function (the normalization constant)
$$Z = \sum_{x\in\mathcal{X}^N} e^{-\beta H(x)}$$
and $\beta = \frac{1}{k_B T}$, where T denotes the temperature and $k_B$ is Boltzmann's constant [10]. The Helmholtz free energy is defined as
$$F_H \triangleq -\frac{1}{\beta}\ln Z.$$
The Helmholtz free energy is an important quantity in statistical physics, as all macroscopic thermodynamic properties of a model follow from differentiating $F_H$ (as a function of the temperature); see [10, Chapter 2]. In the rest of this paper, we assume $\beta = 1$.
For each adjacent pair $(x_k, x_\ell)$, let
$$\kappa_{k,\ell}(x_k, x_\ell) \triangleq e^{J_{k,\ell}\,[x_k = x_\ell]} \qquad (4)$$
and for each $x_m$
$$\tau_m(x_m) \triangleq e^{H_m\,[x_m = 1]}. \qquad (5)$$
We can then define $f : \mathcal{X}^N \to \mathbb{R}_{>0}$ as
$$f(x) \triangleq \prod_{(k,\ell)\in\mathcal{B}} \kappa_{k,\ell}(x_k, x_\ell)\,\prod_{m=1}^{N} \tau_m(x_m). \qquad (6)$$
The corresponding Forney factor graph (normal graph) for the factorization in (6) is shown in Fig. 1, where the boxes labeled "=" are equality constraints [12], [13]. In Forney factor graphs variables are represented by edges.
From (6), Z in (2) can be expressed as
$$Z = \sum_{x\in\mathcal{X}^N} f(x). \qquad (7)$$
At high temperature (i.e., small J), the Boltzmann distribution (2) approaches the uniform distribution. Therefore, to estimate Z and other quantities of interest (e.g., the mean magnetization), Monte Carlo methods in the original graph usually perform well [4], [14], [15].
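For very small lattices, (7) can be evaluated exactly by brute force, which is useful as a reference for the Monte Carlo estimates discussed later. The sketch below does this for a tiny grid with free boundaries, a uniform coupling J on every bond and a uniform field H on every site; the Iverson-bracket convention follows the factors above, and the parameter values are arbitrary.

```python
import itertools
import math

def partition_function(nrows, ncols, J, H):
    """Exact Z for a small nrows x ncols Ising grid with {0,1} variables,
    coupling J on every nearest-neighbour bond and field H on every site
    (free boundary conditions)."""
    sites = [(r, c) for r in range(nrows) for c in range(ncols)]
    bonds = [((r, c), (r, c + 1)) for r in range(nrows) for c in range(ncols - 1)]
    bonds += [((r, c), (r + 1, c)) for r in range(nrows - 1) for c in range(ncols)]
    Z = 0.0
    for values in itertools.product((0, 1), repeat=len(sites)):
        x = dict(zip(sites, values))
        exponent = sum(J * (x[a] == x[b]) for a, b in bonds)   # kappa factors
        exponent += sum(H * (x[s] == 1) for s in sites)        # tau factors
        Z += math.exp(exponent)
    return Z

print(partition_function(3, 3, J=1.0, H=-0.5))
```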
In this paper, we consider a curious case where a restricted subset of the coupling parameters of the model is strong (i.e., large J). To compute an estimate of Z in this case, we propose importance sampling schemes that operate in the dual of the Forney factor graph representing the factorization in (6).
III. THE DUAL MODEL
We can obtain the dual of the Forney factor graph in Fig. 1 by replacing each variable x with its dual variable $\tilde{x}$, each factor $\kappa_{k,\ell}$ with its 2D Discrete Fourier transform (DFT), each factor $\tau_m$ with its one-dimensional (1D) DFT, and each equality constraint with an XOR factor, cf. [12], [16]-[18]. Note that $\tilde{X}$ also takes on values in $\mathcal{X}$.
After suitable modifications, we can construct the (modified) dual Forney factor graph as in Fig. 2, with XOR factors of the form
$$[\tilde{x}_1 \oplus \tilde{x}_2 \oplus \cdots \oplus \tilde{x}_d = 0] \qquad (8)$$
where $\oplus$ denotes the sum in GF(2), with factors as in (9) attached to each XOR factor, and with factors as in (10) attached to each equality constraint. Here, $J_k$ is the coupling parameter associated with each bond (the bond strength). In Fig. 2, the unlabeled small boxes represent factors as in (9), the unlabeled normal-size boxes represent factors as in (10), and the boxes containing + symbols represent XOR factors as in (8). For more details on constructing the dual factor graph of the 2D Ising and Potts models, see [1]-[3].
In the dual domain, we denote the partition function by $Z_d$ and the number of edges by E. For the models that we study here, the normal factor graph duality theorem states that $Z_d$ equals Z up to a known multiplicative scale factor; see [16], [17, Theorem 2]. In this paper, the focus is on ferromagnetic models; therefore all the factors as in (10) will be positive. Since, in the Ising model, the value of Z is invariant under the change of sign of the external field [19], without loss of generality, we assume $H_m < 0$. With this assumption, all the factors as in (9) will also be positive. In Section IV, we use the dual representation of the 2D Ising model to give an alternative proof for the invariance of Z under the change of sign of the external field.
IV. IMPORTANCE SAMPLING SCHEMES ON THE DUAL FACTOR GRAPH
In this section, we propose two importance sampling schemes in the dual Forney factor graph to compute an estimate of Z, as in (7). Our importance sampling schemes operate in the dual Forney factor graph of the 2D Ising model in an external magnetic field, see Fig. 2.
As in [1], [2], we partition the set of random variables $\tilde{X}$ into $\tilde{X}_A$ and $\tilde{X}_B$, with the condition that the random variables in $\tilde{X}_B$ are linear combinations (involving the XOR factors) of the random variables in $\tilde{X}_A$. Note that a valid configuration in the dual graph can be generated by assigning values to $\tilde{X}_A$, followed by computing $\tilde{X}_B$ as linear combinations of $\tilde{X}_A$.
Two examples of such a partitioning are illustrated in Figs. 3 and 4, where we let $\tilde{X}_A$ be the set of all the variables associated with the thick edges, and $\tilde{X}_B$ the set of all the variables associated with the remaining thin edges. Accordingly, we let $\mathcal{B}_A$, a subset of $\mathcal{B}$, contain all the indices of the bonds marked by the thick edges, and $\mathcal{B}_B$ contain the indices of the remaining bonds. For a valid configuration $\tilde{x} = (\tilde{x}_A, \tilde{x}_B)$, suppose $\tilde{x}_A = (\tilde{y}, \tilde{z})$, where $\tilde{y}$ contains all the variables on the thick edges attached to the small unlabeled boxes, which represent variables that are involved in factors as in (9), and $\tilde{z}$ contains all the variables associated with the thick bonds.
We show that $w_H(\tilde{y})$, the Hamming weight of $\tilde{y}$, is always even (Lemma 1); here the Hamming weight of a vector is the number of non-zero components of that vector [20]. Proof. We consider c, the component-wise XOR of $\tilde{y}$, i.e., $c = \tilde{y}_1 \oplus \tilde{y}_2 \oplus \cdots \oplus \tilde{y}_N$. Each XOR factor imposes the constraint that all its incident variables sum to 0 in GF(2). In c, each $\tilde{y}_m$ can thus be expanded as the XOR of the corresponding variables associated with the bonds; furthermore, each variable on the bonds appears twice in this expansion. We conclude that c = 0, i.e., $w_H(\tilde{y})$ is even.
In Lemma 1, the choice of the boundary conditions (free or periodic) is immaterial. Lemma 1 implies that the value of $Z_d$, and equivalently the value of Z, are invariant under the change of sign of the external magnetic field. Indeed, regardless of the sign of the external field $H_m$, i.e., whether all $H_m$ are positive or all negative, $\prod_{m=1}^{N} \lambda_m(\tilde{x}_m)$ takes on the same positive value; see (9).
In the following, we present two slightly different algorithms for estimating the partition function. Both algorithms use importance sampling and follow the same procedure: to draw $\tilde{x}^{(\ell)}$ at each iteration $\ell$, we first draw a sample $\tilde{x}_A^{(\ell)}$ according to a suitably defined auxiliary probability mass function; $\tilde{x}_B^{(\ell)}$ can then be updated in a straightforward manner.
A. Algorithm 1
The partitioning used in Algorithm 1 is illustrated in Fig. 3. Hence,X A contains all the variables associated with the edges attached to the small unlabeled boxes, which are involved in factors as in (9), as well as all the variables associated with the thick bonds, which are involved in factors as in (10).
Let us first define the quantities in (12). We use the probability mass function $q_1(\tilde{x}_A)$ in (13) as the auxiliary distribution in our importance sampling scheme; the corresponding partition function $Z_{q_1}$ is analytically available. The product form of (13) suggests that at each iteration $\ell$, to draw a sample $\tilde{x}^{(\ell)}$, two separate subroutines are required: one to draw the $\tilde{y}^{(\ell)}$-part, and the other to draw the $\tilde{z}^{(\ell)}$-part.
To draw the $\tilde{y}^{(\ell)}$-part, we apply the following subroutine.
The criterion to accept $\tilde{y}^{(\ell)}$ is based on Lemma 1. The $\tilde{z}^{(\ell)}$-part is drawn with a second subroutine. We can then generate $\tilde{x}_A^{(\ell)}$ as a concatenation of $\tilde{y}^{(\ell)}$ and $\tilde{z}^{(\ell)}$, in any predetermined order.
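The accept/reject step implied by Lemma 1 can be sketched as follows: the components of ỹ are drawn independently and a draw is kept only if its Hamming weight is even. The per-component probabilities are placeholders here (the actual auxiliary distribution derives them from the factors λ_m), and the second function anticipates the no-rejection idea of Algorithm 2 described next, where one component is left to be determined by the parity of the others.

```python
import random

def draw_even_weight(p):
    """Draw a binary vector with independent components (P[y_m = 1] = p[m]),
    rejecting until the Hamming weight is even (cf. Lemma 1)."""
    while True:
        y = [1 if random.random() < pm else 0 for pm in p]
        if sum(y) % 2 == 0:
            return y

def draw_even_weight_no_rejection(p):
    """Algorithm-2 flavour: draw all but one component freely and set the
    remaining one to the parity of the others, so no rejection is needed."""
    y = [1 if random.random() < pm else 0 for pm in p[:-1]]
    y.append(sum(y) % 2)
    return y

probs = [0.3] * 12  # hypothetical per-component probabilities
print(draw_even_weight(probs))
print(draw_even_weight_no_rejection(probs))
```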
B. Algorithm 2
One can design an algorithm with no rejections by adopting a different choice of partitioning on the dual graph, as depicted in Fig. 4. The only difference between the partitionings in Figs. 3 and 4, is that in Fig. 4 one of the edges attached to the small unlabeled boxes (indicated by "thin edge") is excluded fromX A . For simplicity, let us assume that the excluded edge (variable) is involved in λ N (.).
We thus define the analogous quantities as in (17). Similarly, the auxiliary distribution $q_2(\tilde{x}_A)$ is given in (18), where $Z_{q_2}$ is also analytically available. Again, the product form of (18) suggests that at each iteration $\ell$, two separate subroutines are required to draw $\tilde{x}^{(\ell)}$. To draw the $\tilde{y}^{(\ell)}$-part, we apply the following subroutine, which has no rejections.
Drawing the $\tilde{z}^{(\ell)}$-part can be treated in an analogous way as in Algorithm 1 (with the same subroutine). After this, generating $\tilde{x}^{(\ell)}$ is easy. The samples are then used in the following importance sampling scheme to estimate $Z_d/Z_q$:
$$\hat{r}_{\mathrm{IS}} \triangleq \frac{1}{L}\sum_{\ell=1}^{L} \Lambda\big(\tilde{x}_B^{(\ell)}\big) \qquad (20)$$
Here, q(·) and Λ(·) are selected in accordance with the applied algorithm (Algorithm 1 or Algorithm 2), and $Z_q$ denotes the corresponding partition function.
It follows that $\hat{r}_{\mathrm{IS}}$ is an unbiased (and consistent) estimator of $Z_d/Z_q$. Since $Z_q$ is analytically available, the proposed importance sampling scheme can yield an estimate of $Z_d$, which can then be used to estimate Z in (7), using the normal factor graph duality theorem, cf. Section III.
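A minimal sketch of the estimator in (20): average the weight Λ over samples drawn from the auxiliary distribution q. Both the sampler and Λ below are placeholders standing in for the subroutines and factors defined above.

```python
def estimate_ratio(draw_x_B, weight, num_samples):
    """Importance sampling estimate of Z_d / Z_q as in (20): the empirical
    mean of the weight Lambda over samples from the auxiliary distribution."""
    total = 0.0
    for _ in range(num_samples):
        x_B = draw_x_B()       # x_B obtained from a sample x_A ~ q
        total += weight(x_B)
    return total / num_samples

if __name__ == "__main__":
    import random
    # Toy placeholders: a dummy sampler and a constant weight (estimate is exact).
    print(estimate_ratio(lambda: random.getrandbits(8), lambda x_B: 1.0, 1000))
```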
The accuracy of the estimator (20) depends on the fluctuations of $\Lambda(\tilde{x}_B)$. If $\Lambda(\tilde{x}_B)$ varies smoothly, $\hat{r}_{\mathrm{IS}}$ will have a small variance. From (12), we expect to observe a small variance in Algorithm 1 if, for each $k \in \mathcal{B}_B$, $J_k$ is large. With the exception of one factor, similar behavior is expected in Algorithm 2; see Section VI-D.
A few comments on the proposed schemes are in order.
• A good strategy in Algorithm 2 is therefore to exclude a variable (an edge) involved in a factor with a large $H_m$, see (17). We will apply this strategy in the numerical experiments of Section VI-A.
• We emphasize that our choices of partitioning in Figs. 3 and 4 are not unique. Fig. 5 shows another example of a partitioning on the dual factor graph. Moreover, model parameters and their spatial distributions suggest which choices are expected to perform better in practice. E.g., a suitable partitioning for models in a strong external field is discussed in [2].
• The proposed schemes are applicable to the Ising model in the absence of an external magnetic field as well. E.g., partitionings in Figs. 3 to 5 are valid even if the external field is not present. We will consider Ising models in the absence of an external field in our numerical experiments in Section VI-B. That being the case, to observe fast convergence in the dual domain, not all the coupling parameters need to be strong, but a restricted subset of them. Therefore, the schemes of this paper can be regarded as supplementary to the schemes presented in [1], where the focus was on models at low temperature (corresponding to models in which all the coupling parameters are strong).
• If a restricted subset of the coupling parameters is relatively strong, ideas from annealed importance sampling [21]-[23] can be employed, see Appendix I.
V. GENERALIZATIONS TO THE 2D POTTS MODEL
In a 2D Potts model each random variable takes on values in $\mathcal{X} = \{0, 1, \ldots, q-1\}$, where q is an integer satisfying $q \ge 2$. The energy of a configuration x is given by the Hamiltonian
$$H(x) = -\sum_{(k,\ell)\in\mathcal{B}} J_{k,\ell}\,[x_k = x_\ell] \;-\; \sum_{m=1}^{N} H_m\,[x_m = 0] \qquad (22)$$
Here, the real coupling parameter $J_{k,\ell}$ controls the strength of the interaction between adjacent variables $(x_k, x_\ell)$, and the real parameter $H_m$ corresponds to the presence of an external magnetic field.
Following [2], we can obtain the (modified) dual Forney factor graph of the 2D Potts model in an external field as in Fig. 6, where the unlabeled normal-size boxes represent factors as in (23), the boxes containing + symbols represent XOR factors as in (8), with $\oplus$ denoting addition in GF(q), and the unlabeled small boxes have the following form
$$\lambda_m(\tilde{x}_m) = \begin{cases} e^{H_m} + q - 1, & \text{if } \tilde{x}_m = 0\\ e^{H_m} - 1, & \text{otherwise.}\end{cases} \qquad (24)$$
In this paper, we only consider ferromagnetic Potts models in a positive external field, characterized by J k > 0 and H m > 0, respectively. Therefore, all the factors as in (23) and (24) will be positive. To design importance sampling algorithms for the 2D Potts model, we need the following lemma.
Lemma 2. If x̃ is a valid configuration in the dual Forney factor graph of the 2D Potts model, then
Again, to draw the z̃^(ℓ)-part, we can apply the following subroutine.
To draw the ỹ^(ℓ)-part, we can apply the following subroutine.

VI. NUMERICAL EXPERIMENTS

We consider 2D ferromagnetic Ising models in an external field in Section VI-A, and 2D ferromagnetic Ising models with spatially varying coupling parameters in the absence of an external field in Section VI-B. We will also compare the efficiency of the importance sampling scheme and uniform sampling. This comparison was carried out for models in a strong external field in [2]. In the absence of an external field, applying Monte Carlo methods (Gibbs sampling and uniform sampling) in the dual domain to 2D Ising models is also discussed in [1], [3]. In Section VI-C, we consider 2D Potts models of size N = 30 × 30 with periodic boundary conditions. Comparisons with Gibbs sampling [24] and the Swendsen-Wang algorithm [25] are discussed in Appendix II.
A. 2D Ising models in an external field
In the first experiment, simulation results obtained for one instance of the model are shown in Fig. 7, where the estimated free energy per site is about 2.255.
In the second experiment, J_k ~ U[1.25, 1.35] i.i.d. for k ∈ B_B. For one instance of the model, simulation results obtained from Algorithm 2 are shown in Fig. 8. The estimated free energy per site is about 2.3556.
B. 2D Ising models in the absence of an external field
We consider 2D Ising models in the absence of an external magnetic field. In the first experiment, Fig. 9 shows simulation results obtained from importance sampling (solid lines) and from uniform sampling (dashed lines) for one instance of the Ising model. From Fig. 9, the estimated free energy per site is about 2.275.
As was pointed out in [2], uniform sampling could be employed for models at very low temperature (i.e., very large J_k); however, for a wider range of model parameters it suffers from slow and erratic convergence.
In the second experiment, Fig. 10 shows simulation results obtained from importance sampling for one instance of the model.

C. 2D Potts models

Simulation results obtained from importance sampling in the dual factor graph are shown in Fig. 11, where the estimated free energy per site is about 6.2165.
D. Discussion
It is numerically advantageous to replace each factor (9) in the dual factor graph by a suitably rescaled factor, and likewise each factor (10). The required scale factor to recover Z_d can then be computed by multiplying all the local scale factors, which are expressed in terms of cosh J_k and cosh H_m.
Note that lim_{t→∞} tanh t = 1; therefore the factors in (27) tend to a constant if J_k is large for k ∈ B_B, which explains the fast convergence of the importance sampling schemes in this case. If all the model parameters (i.e., J_k and H_m) are large, (26) and (27) both tend to constant factors. In this case, uniform sampling is also expected to exhibit good convergence in the dual domain.
VII. CONCLUSION
Two importance sampling schemes were presented for estimating the partition function of the 2D ferromagnetic Ising model. Both schemes are described in the dual Forney factor graph representing the model. After introducing auxiliary importance sampling distributions, the methods operate by simulating a subset of the variables, followed by doing the computations using the remaining variables. The schemes can efficiently compute an estimate of the partition function in a wide range of model parameters. With our choices of partitioning, this is particularly the case when a subset of the coupling parameters of the model is strong. The methods of this paper should be compared to the method introduced in [2], where the emphasis is on models in a strong external field.
Depending on the value of the model parameters and their spatial distributions, different choices of partitioning yield schemes with different convergence properties. Our schemes, once combined with annealed importance sampling, are capable of handling more demanding cases; see Appendix I.
The schemes of this paper are applicable to the three-dimensional Ising model too; see [2] for a similar approach. For duality results in the context of statistical physics, see [5], [26], [27, Chapter 16], [28, Chapter 10].
APPENDIX I
We explain how to employ annealed importance sampling in the dual factor graph to estimate the partition function of the 2D Ising model, when J k , for k ∈ B B is relatively strong. For simplicity, we assume that the coupling parameters associated with the thick edges, the coupling parameters associated with the thin edges, and the external magnetic field are all constant, denoted by J A , J B , and H, respectively.
We thus denote the partition function by Z_d(J_A, J_B, H), and express Z_d(J_A, J_B, H) using a sequence of intermediate partition functions, obtained by varying J_B over V levels, as a product of successive ratios. Here, unlike typical annealing schemes used in the original domain, (α_0, α_1, . . . , α_V) is an increasing sequence.
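As a concrete illustration of the general recipe, the following Python sketch runs a small annealed-importance-sampling chain on a one-dimensional toy target rather than on the dual Ising factor graph of this appendix; the geometric path between levels, the number of levels, and the Metropolis step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy unnormalized target f_V(x) = exp(-E(x)) and tractable initial
# distribution f_0(x) = standard normal (Z_0 = sqrt(2*pi)).
def energy(x):
    return 0.25 * x**4 + 0.5 * x**2          # illustrative choice

def log_f(x, beta):
    # Geometric path between the initial Gaussian (beta = 0) and the target (beta = 1).
    return -(1.0 - beta) * 0.5 * x**2 - beta * energy(x)

betas = np.linspace(0.0, 1.0, 21)            # increasing sequence of levels
n_chains, step = 5000, 0.8

x = rng.standard_normal(n_chains)            # exact samples from level 0
log_w = np.zeros(n_chains)                   # accumulated importance weights

for b_prev, b in zip(betas[:-1], betas[1:]):
    log_w += log_f(x, b) - log_f(x, b_prev)  # ratio of successive levels
    # One Metropolis move per level, targeting the current level b.
    prop = x + step * rng.standard_normal(n_chains)
    accept = np.log(rng.random(n_chains)) < log_f(prop, b) - log_f(x, b)
    x = np.where(accept, prop, x)

Z0 = np.sqrt(2.0 * np.pi)
Z_hat = Z0 * np.exp(log_w).mean()            # estimate of the target partition function
print(f"estimated Z: {Z_hat:.4f}")
```

Each intermediate ratio only has to bridge a small change in the model parameters, which is what keeps the weights well behaved when the endpoints of the path are far apart.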
APPENDIX II
We compare the performance of Monte Carlo methods in the original and in the dual factor graphs to compute the free energy per site, i.e., 1 N ln Z, of 2D Ising models in the absence of an external field and with constant coupling parameter J.
For different values of J, the convergence of Gibbs sampling [24] and the Swendsen-Wang algorithm [25] in the original factor graph is compared to the convergence of uniform sampling and Gibbs sampling in the dual factor graph. In the dual domain, uniform sampling serves as the baseline. Clearly, the convergence of uniform sampling can be significantly improved by applying our proposed importance sampling schemes.
We use the following methods to compute an estimate of the partition function.

Method 1: Suppose samples x^(1), x^(2), . . . , x^(L) are drawn uniformly and independently from X^N. Then Ẑ is an unbiased estimator for Z.

Method 2: Suppose samples are drawn according to p(x). The resulting estimator Γ satisfies E[Γ] = 1/Z.
In our numerical experiments, drawing samples according to p(x) is done using Gibbs sampling or the Swendsen-Wang algorithm. These samples are then used in (32) to compute an estimate of Z.
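For concreteness, a minimal single-site Gibbs (heat-bath) sweep for the 2D Ising model with periodic boundary conditions could look like the sketch below; the lattice size, the coupling J, and the number of sweeps are illustrative assumptions, and the update uses the standard conditional probability of a spin given its four neighbours.

```python
import numpy as np

rng = np.random.default_rng(2)

def gibbs_sweep(spins, J, rng):
    """One full sweep of single-site heat-bath updates (periodic boundaries)."""
    n = spins.shape[0]
    for i in range(n):
        for j in range(n):
            # Sum of the four nearest-neighbour spins.
            s_nb = (spins[(i - 1) % n, j] + spins[(i + 1) % n, j]
                    + spins[i, (j - 1) % n] + spins[i, (j + 1) % n])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * J * s_nb))   # p(x_ij = +1 | neighbours)
            spins[i, j] = 1 if rng.random() < p_up else -1
    return spins

n, J, n_sweeps = 5, 0.25, 2000
spins = rng.choice([-1, 1], size=(n, n))
mags = []
for sweep in range(n_sweeps):
    spins = gibbs_sweep(spins, J, rng)
    mags.append(abs(spins.mean()))

print(f"mean |magnetization| over sweeps: {np.mean(mags[500:]):.3f}")
```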
We consider 2D Ising models of size N = 5 × 5, with periodic boundary conditions. Note that such a small value of N allows us to compute the exact value of the free energy via direct enumeration. In the first experiment, we set J = 0.25 (relatively high temperature). In this case, the exact value of the free energy per site (up to five decimal points) is 0.76006. Fig. 12 shows simulation results obtained from the Swendsen-Wang algorithm (solid lines) and uniform sampling (dashed lines) in the original factor graph, and Fig. 13 shows simulation results obtained from Gibbs sampling (solid lines) and uniform sampling (dashed lines) in the dual factor graph.
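A direct-enumeration check of this kind, together with the uniform-sampling estimator of Method 1, can be sketched as follows; to keep the brute-force enumeration cheap, the sketch uses a 4 × 4 lattice instead of 5 × 5, which is an illustrative choice, and J = 0.25 as in the first experiment.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n, J = 4, 0.25                    # small lattice so that 2^(n*n) enumeration is cheap
N = n * n

def weight(spins):
    """Unnormalized Boltzmann weight of a configuration (periodic boundaries)."""
    s = spins.reshape(n, n)
    e = (s * np.roll(s, 1, axis=0)).sum() + (s * np.roll(s, 1, axis=1)).sum()
    return np.exp(J * e)

# Exact partition function by direct enumeration over all 2^N configurations.
Z_exact = sum(weight(np.array(c)) for c in itertools.product([-1, 1], repeat=N))
print(f"exact free energy per site: {np.log(Z_exact) / N:.5f}")

# Method 1: uniform sampling.  Z = |X|^N * E_uniform[f(x)], estimated by a sample mean.
L = 100_000
samples = rng.choice([-1, 1], size=(L, N))
Z_hat = (2.0 ** N) * np.mean([weight(x) for x in samples])
print(f"uniform-sampling estimate:  {np.log(Z_hat) / N:.5f}")
```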
In the second experiment, J = 0.75 (relatively low temperature), where the exact value of the free energy per site (up to five decimal points) is 1.53048.
Simulation results obtained from the Swendsen-Wang algorithm (solid lines) and uniform sampling (dashed lines) in the original factor graph are shown in Fig. 14, and simulation results obtained from Gibbs sampling (dashed lines) and uniform sampling (solid lines) in the dual factor graph are shown in Fig. 15. In both cases (i.e., Gibbs sampling in the dual factor graph and the Swendsen-Wang algorithm in the original factor graph), we have used (32) to compute an estimate of Z. At high temperature (i.e., small J), we observe faster mixing with Monte Carlo methods in the original factor graph. In sharp contrast, uniform sampling and Gibbs sampling in the dual factor graph converge extremely well at low temperature (i.e., large J), while Monte Carlo methods in the original factor graph suffer from slow convergence. Indeed, the convergence of Monte Carlo methods in the dual factor graph improves as J increases.
Finally, Jerrum and Sinclair have proposed a randomized algorithm to estimate the partition function of the 2D Ising model, which is polynomial-time for all temperatures [6]. Their algorithm uses the high temperature expansion of the partition function, which coincides with the dual domain representation of the Ising model in terms of Forney factor graphs. However, the computational complexity of their algorithm is O(N^3). Moreover, their proposed Markov chain seems to be different from the schemes of this paper [6, Section 4]. The computational complexity of our proposed algorithms is O(N) per sample, which makes the total computational complexity O(NL).
Unravelling an extended quark sector through multiple Higgs production?
In many new physics scenarios, the particle content of the Standard Model is extended and the Higgs couplings are modified, sometimes without affecting single Higgs production. We analyse two models with additional quarks. In these models, we compute double Higgs production from gluon fusion exactly at leading order, and present analytical results in the heavy-quark mass approximation. The experimental bounds from precision electroweak measurements and from the measured rate of single Higgs production combine to give significant restrictions on the allowed deviation of the double Higgs production rate from the Standard Model prediction, as well as on the branching ratio for the Higgs decay into photons. The two models analysed eventually present a Higgs phenomenology similar to that of the Standard Model. We connect this result to the magnitude of the dimension six operators contributing to gluon-fusion Higgs production.
I. INTRODUCTION
The search for the source of electroweak symmetry breaking has dominated particle theorist's efforts for decades. Now that a particle with many of the right properties to be the Higgs boson of the Standard Model has been discovered [1,2], the efforts turn to understanding the properties of this particle. In the Standard Model, the couplings of the Higgs boson to fermions, gauge bosons, and to itself are firm predictions of the model. In models with new physics, however, these couplings can be different.
The dominant production mechanism for a Higgs boson is gluon fusion, which is sensitive to many types of new physics. The simplest possibility is for new heavy colored scalars [3,4] and/or fermions [5][6][7][8][9][10][11][12][13][14][15] to contribute to Higgs production. However, since the observed Higgs candidate particle is produced at roughly the Standard Model rate, extensions of the Higgs sector beyond the Standard Model are extremely constrained. For example, a model with a sequential fourth generation of chiral fermions predicts large deviations in the Higgs rates [16][17][18][19][20] and is excluded by the limits on Higgs production for any Higgs mass below around 600 GeV [21,22]. The properties of these potential new colored particles are further limited by precision electroweak measurements. Models in which the Higgs boson is composite [23][24][25][26][27][28][29][30][31][32][33][34], along with models which generate new higher dimension effective operators involving the Higgs boson and gluons [35,36], can also induce a single Higgs production rate different from that of the Standard Model. Untangling the source of possible deviations from the Standard Model by measuring the production and decay rates of the Higgs boson will be quite difficult in models where there are only small differences from the Standard Model predictions.
In this paper, we examine the extent to which the gluon fusion production of two Higgs bosons can have a rate very different from that predicted by the Standard Model [37,38], given the restrictions from electroweak precision physics and from single Higgs production.
The observation of double Higgs production via gluon fusion is important in order to measure the cubic self coupling of the Higgs boson [39,40]. In the Standard Model, the rate is small, although the O(α 3 s ) radiative corrections are known in the infinite top quark mass limit and are large [41,42]. For a 125 GeV Higgs particle, the most likely channel for HH exploration is gg → HH → bbγγ [43], where studies have estimated that the LHC at full energy will be sensitive to this process with around 600 fb −1 . Using jet substructure techniques, the HH → bbW + W − and HH → bbτ + τ − channels may be available with about 600 fb −1 [44] and 1000 fb −1 [40]. This is clearly not physics which will be done during the early phase of LHC operations, unless the rate is significantly larger than in the Standard Model [45].
Double Higgs production can further be studied through vector boson fusion, which is also sensitive to the three Higgs self coupling [46]. Vector boson fusion production of two Higgs bosons can be affected by new operators involving the W and Z gauge bosons and the Higgs, but is not sensitive to the new colored particles which contribute to the gluon fusion process. Hence the two production mechanisms can provide complementary information.
Double Higgs production from gluon fusion first occurs at one loop and is therefore potentially modified by the same new heavy colored particles which contribute to single Higgs production. However, as pointed out in Ref. [36], single and double Higgs production are sensitive to different higher dimension effective operators and in principle, the single Higgs production rate could be Standard Model like, while the double Higgs production could be highly suppressed or enhanced. Here, we consider the effects of both heavy vectorlike and chiral colored fermions on the single and double Higgs production rates, and the interplay between them. We will not consider models with extended Higgs sectors, or with higher dimension non-renormalizable operators.
For single Higgs production, it is useful to analyze the effects of non-Standard Model colored particles using a low energy theorem [47]. The theorem can be formulated using the background field method in terms of the traces of the mass matrices of colored objects, which eliminates the need to diagonalize complicated mass matrices [48]. The low energy theorem can be extended to double Higgs production, where new features arise [34]. In models with extended fermion sectors (for example, in little Higgs models [49][50][51][52][53][54][55]) there are contributions to double Higgs production containing more than one flavor of fermion [56]. These diagrams contain axial couplings to the Higgs boson which are non-diagonal in the fermion states and we demonstrate how these effects can be included using a low energy theorem. Low energy theorems are extremely useful for single Higgs production and generally give estimates of the total cross section which are quite accurate. For double Higgs production, however, the low energy theorems provide an estimate of the total rate which typically disagrees with the exact rate by 50% or more. The low energy theorem does not reproduce kinematic distributions accurately, but instead predicts high energy tails which are not present in the full theory [57]. In this paper, we study the effects of heavy colored fermions on the gluon fusion double Higgs production rate and show that agreement with single Higgs production requires the double Higgs rate to be close to that of the Standard Model. We demonstrate how this can be understood in terms of the effective operator approach of Ref. [36] and discuss the limitations of the low energy theorem for gg → HH. Interestingly, composite Higgs models and little Higgs models receive potentially large corrections to the gg → HH process from the non-renormalizable operator ttHH. The observation of such a large effect would be a "smoking gun" signal for such models [33,34,45].
The Standard Model
In the Standard Model, double Higgs production from a gluon-gluon initial state arises from the Feynman diagrams shown in Fig. 1. The result is sensitive to new colored objects (fermions or scalars) in the loops and to the Higgs trilinear self-coupling. The amplitude can be decomposed into two form factors, where P_1 and P_2 are the orthogonal projectors onto the spin-0 and spin-2 states respectively, s, t, and u are the partonic Mandelstam variables, p_T is the transverse momentum of the Higgs particle, and v = (√2 G_F)^{-1/2} = 246 GeV. The functions F_1 and F_2 are known analytically [37,38].
Finally, the partonic cross section is given by where we included the factor of 1 2 for identical particles in the final state. In the Standard Model, the chiral fermions are where i = 1, 2, 3 is a generation index and the Lagrangian describing the quark masses is Here Note that in the Standard Model the Higgs couplings λ u,d i are purely scalar. In the following we will focus on the third generation quarks and use the standard notation u 3 = t, d 3 = b, with λ d 3 ≡ λ 1 and λ u 3 ≡ λ 2 . In the Standard Model, the dominant contributions come from top quark loops. Analytic expansion of the amplitudes in the limit m 2 t >> s yields the leading terms The leading terms in the inverse top mass expansion of Eq. 8 are called the "low energy theorem" result and give the m t -independent amplitudes [37,38] From Eq. 8, we can clearly see that the triangle diagram has no angular dependence and only makes an s-wave contribution. This result is expected since the triangle diagram has a triple-scalar coupling, which has no angular momentum dependence. For the box diagrams, at the lowest order in F box 2 there is angular momentum dependence reflected in p 2 T , which is expected from the spin-2 initial state and spin-0 final state.
there is also an angular momentum dependent piece proportional to p 2 T . Since the initial and final states for the F 1 contribution are both spin-0, this is a somewhat surprising result. To gain insight into the angular dependence of F box 1 and further insight into F box 2 , the functions can be decomposed into Wigner d-functions, d j s i ,s f , where j is the total angular momentum and s i (s f ) is in the initial (final) state spin: Here θ is the angle between an initial state gluon and final state Higgs, In wholly dependent on the initial state spin-2 d-wave function d 2 2,0 , as expected from Eq. 1. In Fig. 2, we compare the total cross section for double Higgs production at different orders in the large mass expansion against the exact result 1 , as a function of the center of mass energy in pp collisions. We use the CT10 NLO PDF set [58] and run the strong coupling constant through NLO from its value α s (m Z ) = 0.118. We fix m t = 173 GeV and m b = 4.6 GeV. The low energy theorem results are quite sensitive to the scale choice, and typically reproduce the exact results to within roughly 50% error. This "agreement" between the infinite mass approximation (LET) and the exact result is not improved by the inclusion 1 The exact result always includes the contributions from both the top and bottom quarks. of higher orders in the large mass expansion. In single Higgs production, the reliability of the infinite mass approximation has been investigated through NNLO [59][60][61][62]. Because of the shape of the gluon parton luminosity, which peaks at large values of x = m 2 H /s and decreases rapidly, the largest contribution to the hadronic single Higgs cross section comes from the region below the top quark threshold, s < 4m 2 t , where the large top mass approximation holds. As a consequence, finite mass corrections to single Higgs production have an effect of less than 1%. On the other hand, for double Higgs production the partonic energy is always s > 4m 2 H and the condition for validity of the low energy theorem, s ≪ 4m 2 t , is typically not satisfied. The inadequacy of the infinite mass approximation for double Higgs production becomes even more apparent when looking at kinematic distributions [57]. Consider for example the invariant mass of the HH system, where S is the hadronic center of mass energy squared, M HH = √ s, and τ = s S . In A similar behaviour has been observed for the differential cross section dσ/dp T in higher order corrections to single Higgs production [63].
Non-Standard Model bottom quark Yukawa coupling
We briefly discuss the role of the bottom quark loops, which are omitted when using the low energy theorems. If the bottom Yukawa coupling is increased by a factor of 50, this ratio goes to 9, and the low energy theorem is wildly inaccurate.
Additional heavy quarks
A simple extension of the Standard Model with additional quarks of charge 2/3 which can mix with the Standard Model like top occurs in many new physics scenarios, for example little Higgs [49][50][51][52][64] and composite Higgs [23][24][25][26][28][29][30][31][32][33][34] models. There can also be new heavy charge −1/3 quarks [65,66], and the formulae in this section apply to both cases. We will take the new quarks to be in the fundamental representation of the color group. For an overview of the latest lower bounds on the masses of the additional quarks, see for example Refs. [5,67]. Note however that the experimental analyses always assume the new quarks to decay entirely either through W or through Z. This is not the case in our models, and the experimental limits are therefore weakened [6,68,69].
In addition to the diagrams of Fig. 1, there are contributions with two different quark flavors in the loop, described by scalar couplings Y_ij and pseudo-scalar couplings A_ij. We consider real couplings; therefore Y_ij = Y_ji and A_ij = −A_ji, and only the terms involving two different quarks f_i and f_j contain pseudo-scalar couplings. In the Standard Model, Y_ii = m_i and A_ij = 0.
For arbitrary masses m_i and m_j, the triangle contribution can be written in terms of y and M, the Yukawa and the heavy quark mass matrices from Eq. 13. For the box topologies, the leading terms in the large quark mass expansion carry a relative minus sign between the vector and axial contributions, which comes from Eq. 14.

Although the leading terms of the triangle and box diagrams were calculated in the diagonal mass basis, the cyclicity of the trace and the fact that both M and y rotate according to the same unitary transformations allow one to cast the results in Eqs. 16 and 17 into a basis independent form. Hence the Yukawa and mass matrices can be evaluated both in the mass basis, where M is diagonal, and in the current basis. In the current basis, y = ∂M/∂v. The infinite mass limit of both the triangle and box diagrams can also be obtained via the low energy theorems [47,48].

In our calculations in Sections III A and III B, we retain the full dependence of the leading order amplitude on the quark masses. However, for small mass splitting δ ≡ m_j² − m_i² the sub-leading terms have a simple and useful form. Following [70], we consider the infinite quark mass limit of these results and recast them into a convenient form for the calculation of the amplitudes for single and double Higgs production in models with extended quark sectors, relative to the Standard Model amplitudes. In the infinite mass approximation, the leading order amplitudes can be written compactly (cf. Eqs. 16, 17), where the omitted proportionality terms do not depend on the masses and Higgs couplings of the quarks. In the Standard Model, y_tt = m_t. The omitted proportionality factors are common to the Standard Model and therefore cancel when taking the ratio to the Standard Model result. In Eq. 20 we used the relation y = ∂M/∂v [70]. Eq. 21 is equivalent to the result of Ref. [34].
A. Singlet top partner
We are interested in examining possible large effects in two Higgs production from gluon fusion in models which are consistent with precision electroweak measurements and the observed rate for single Higgs production. Topcolor models [23,28], top condensate models [24][25][26][27], and little Higgs models [49][50][51][52][53][54][55] all contain a charge 2 3 partner of the top quark. We consider a general case with a vector SU(2) L singlet fermion, T 2 , which is allowed to mix with the Standard Model like top quark, T 1 [5,68,69,[71][72][73]. The fermions are, Following the notation of [5], the mass eigenstates are t, T and b = B 1 (where t, b are the observed top and bottom quarks), and can be found by the rotations The chirality projectors are P L,R ≡ 1∓γ 5 2 and the mixing matrices U t L , U t R are unitary and parameterized as, We will abbreviate s L = sin θ L , c L = cos θ L .
The fermion mass terms contain both Yukawa and vectorlike contributions. Without loss of generality, the T_2L T_1R term can be rotated away through a redefinition of the right-handed fields. The model therefore contains three independent parameters in the top sector, which we take to be m_t, M_T and θ_L. The consistency of the model with electroweak precision measurements and its decoupling properties have been studied in many works [5,67,69,[71][72][73]]. We will not repeat this analysis here, but use the results of Ref. [5].
It is interesting to note that in the limit θ_L ∼ 0 (required by precision electroweak data), the mass terms for the top-like quark and its partner take a particularly simple form in terms of a mass ratio r. Decoupling of the heavy quark therefore requires s²_L ∼ r^{-1}, as was shown in [5].
Since we are interested in Higgs production from the quark loops, we need the couplings of the mass eigenstates to the Higgs boson. The corrections to these couplings are further suppressed by the small mixing angles allowed by the bounds from electroweak precision data [5]. Both total and differential distributions are very close to the Standard Model (Fig. 8), and one cannot use double Higgs production to obtain information about additional vector singlet quarks. Fig. 8 uses the largest mixing angle allowed by precision electroweak data, and the reduction in the total cross section for the singlet top partner model from the exact Standard Model result is roughly 15%. This is of similar size to the reduction in the gg → H rate found in Ref. [5].
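Under the commonly assumed form of the singlet-partner mass matrix (Yukawa entries proportional to v in the first row, a vectorlike mass M_T below, and the T_2L T_1R entry rotated away as described above), one can check symbolically that det M ∝ v, so the low energy theorem reproduces the Standard Model gg → H and gg → HH amplitudes in the heavy-mass limit. The SymPy sketch below is illustrative only; λ1, λ2 and M_T are free symbols, not fitted values, and the matrix form is an assumption consistent with the text rather than a quotation of it.

```python
import sympy as sp

v, l1, l2, MT = sp.symbols('v lambda1 lambda2 M_T', positive=True)

# Assumed singlet top-partner mass matrix (T2_L T1_R entry rotated away):
M = sp.Matrix([[l1 * v / sp.sqrt(2), l2 * v / sp.sqrt(2)],
               [0,                   MT                 ]])

logdet = sp.log(M.det())
first  = sp.simplify(sp.diff(logdet, v))       # enters the gg -> H low energy theorem
second = sp.simplify(sp.diff(logdet, v, 2))    # enters the gg -> HH low energy theorem

# Standard Model top quark for comparison: m_t = y_t v / sqrt(2).
yt = sp.symbols('y_t', positive=True)
logdet_sm = sp.log(yt * v / sp.sqrt(2))

print(first, sp.diff(logdet_sm, v))            # both 1/v    -> SM-like single Higgs production
print(second, sp.diff(logdet_sm, v, 2))        # both -1/v^2 -> SM-like double Higgs in the heavy-mass limit
```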
This model is an example of a case which will be extremely difficult to differentiate from the Standard Model.
B. Mirror fermions
As a second example, we consider a model which has a generation of heavy mirror fermions [71,[74][75][76][77]]. There are four new quarks T_1, T_2 and B_1, B_2, with charge 2/3 and −1/3, respectively. The quarks are arranged in SU(2)_L representations such that the first set of heavy quarks has the quantum numbers of the Standard Model quarks, while the second set has the mirrored chirality assignments. The mass eigenstates χ^q_P (P = L, R; q = t, b) are obtained through unitary rotations of the current eigenstates. We will denote the two top-like and the two bottom-like mass eigenstates as T_1, T_2 and B_1, B_2, respectively. The Lagrangian parameters λ_i can be expressed in terms of the physical quark masses and the mixing angles. We report these relations in Appendix A.
Since all the quarks have different quantum numbers, it is not possible to rotate away any parameter in the Lagrangian. However, the SU(2) symmetry requires that and therefore This relation can be written as where θ R . The couplings of the fermion mass eigenstates to the Higgs boson are where Similar expressions hold in the bottom sector.
The couplings to the electroweak gauge bosons that are needed for the computation of the Peskin-Takeuchi parameters (Sec. III B 1) are reported in the Appendix.
Higgs production using low energy theorems in the mirror model
For single Higgs production through top quark and mirror fermion loops, the low energy theorem of Eq. 20 yields the amplitude, in terms of which we introduce the fractional difference ∆ of the single Higgs amplitude from that of the Standard Model.
Both for simplicity and because one expects large corrections to the oblique parameters for a large mass splitting within each chiral doublet, we assume M T 1 = M B 1 = M and M T 2 = M B 2 = M(1 + δ). In this limit, where we impose (see Eq. 36) Given the recent observations at the LHC, we are interested in the case when A gg→H ∼ A SM gg→H . One simple way to recover this limit is to have which for single production gives 3 To get the Standard Model result for gg → H further requires either δ ∼ 0 or θ b R ∼ θ t R ∼ 0, where the constraint on the right-handed mixing angle in the top sector arises from Eq. 41.
The result of Eq. 43 can be understood by inspecting the Yukawa couplings in the limit Similar relations hold for the charge − 1 3 sector. Hence, for δ ∼ 0 or θ t,b R ∼ 0 the diagonal Yukawa couplings go to zero and only the top quark, with its Standard Model Yukawa coupling, contributes to single Higgs production. The off-diagonal couplings of the mirror fermions to the Higgs boson are slightly less suppressed, and could induce deviations in the double Higgs rate from that of the Standard Model.
From the low energy theorem of Eq. 21, the box contributions to gg → HH production (including top quark loops) can be estimated, 3 This relation holds for small δ. where we defined For θ t,b − ∼ π 2 , Eq. 45 yields 4 Note that F box 2 does not contribute in the infinite fermion mass limit. The terms proportional to cos 2 (θ b + ) come from the contributions of the off-diagonal fermion-Higgs couplings. For this simple choice of parameters, the same term governs the deviations from the Standard Model both in single and double Higgs production.
We are interested in determining how large a deviation from the Standard Model gg → HH rate is possible with a minimal deviation in the gg → H rate. With the assumption of no mass splitting within the mirror doublets, there are five independent parameters: the mass scale M, which drops out in the heavy mass limit for the Higgs production rates, the mass splitting between families, δ, and three angles. Using Eq. 40, we replace one of the angles with the fractional deviation ∆ of the gg → H amplitude from that of the Standard Model, We require this deviation to be within 10% and the mass splitting δ between the two mirror families not to be too large (0 < δ < 1), since we expect electroweak observables to put severe bounds on δ. Under these constraints, we perform a scan over δ, ∆, θ t − and θ b + . The values of these parameters for which Eqs. 41 and 48 yield real solutions for θ t + , θ b − are represented by the blue dots in Fig. 9. The red diamonds represent regions where the difference ∆ box in the double Higgs amplitude from the box topology is larger than 15%.
In the following, we fix θ t − = π 2 in order to focus on a region with large ∆ box , and analyse how double Higgs production depends on θ b + and δ for a Standard Model gg → H amplitude, ∆ = 0, and for ±10% deviations from it, ∆ = ±0.1. This analysis is shown in Fig. 10 for a heavy mass scale M = 800 GeV. To qualitatively understand the features of these plots, one 4 In the exact δ = 0 limit the result reads F box can consider the limit of small deviations from the Standard Model single Higgs amplitude and small family splitting δ, For almost degenerate mirror fermions (δ ∼ 0) and small deviations in single Higgs production from the Standard Model case, (which occurs when θ b + = ± π 2 ), the dominant term is ∆ box ∼ −∆. When single Higgs production is suppressed, double Higgs production is always enhanced, while for a slightly enhanced Higgs single production rate, double production can also be suppressed. For ∆ = 0 and small δ, double Higgs production is also enhanced. In all cases, the minimal deviations from Standard Model double Higgs production occurs exactly at θ b + = ± π 2 , while the maximum deviation is at Finally, we note that the results of this section can be written in terms of an effective Lagrangian, which for δ = 0 is
Bounds from electroweak precision data
The new mirror quarks carry electroweak charges, and therefore contribute to the self-energies of the electroweak gauge bosons [72,74,78]. A convenient way to parametrize these effects is through the Peskin-Takeuchi parameters [79,80]. We use the fit to the electroweak precision data given in Ref. [81], S = 0.03 ± 0.10, T = 0.05 ± 0.12, U = 0.03 ± 0.10, together with the corresponding correlation coefficients ρ_ij. The ∆χ² is defined in terms of these quantities, where X̄_i are the central values of the electroweak parameters from the fit in Eq. 53, X_i are the contributions to these parameters from the new mirror fermions and from the Higgs loops, and σ²_ij ≡ σ_i ρ_ij σ_j, with σ_i being the errors given in Eq. 53. We consider the case of no mass splitting within the doublets, while the fractional mass difference between the two heavy families is parametrized by δ, and focus on the regions of parameter space where we expect the largest deviations with respect to the Standard Model gg → HH amplitude, while the single Higgs rate remains very close to the Standard Model value. Following the discussion in the previous section, we therefore fix θ^t_− = π/2, ∆ = {−0.1; 0; 0.1} and choose M = 800 GeV. In Fig. 11 we show the 95% allowed regions in the {sin θ^b_+, δ} parameter space for the three values of ∆ (red bands), along with the regions where the box enhancement is larger than 15% (blue diamonds). The experimental bounds typically require δ to be small. In this limit, the electroweak parameters assume simple expressions in which N_C = 3. For δ → 0, θ^b_− → θ^t_− = π/2 and ∆ → 0 (Eq. 43). However, a large increase in the double Higgs rate from the box topology can be obtained only for large values of δ. In particular, for ∆ = 0.1 the electroweak precision observables do not allow the mass splitting to be large enough to obtain a significant enhancement, consistent with the results of the previous section.
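As a concrete illustration of the ∆χ² construction, the snippet below assembles the covariance matrix σ²_ij = σ_i ρ_ij σ_j from the central values and errors quoted above and evaluates ∆χ² for a hypothetical new-physics contribution. The correlation coefficients and the example (S, T, U) shift are placeholder values chosen only for illustration, not the numbers used in the paper.

```python
import numpy as np

# Central values and errors of the electroweak fit (Eq. 53): S, T, U.
X_bar = np.array([0.03, 0.05, 0.03])
sigma = np.array([0.10, 0.12, 0.10])

# Placeholder correlation matrix (illustrative values only).
rho = np.array([[ 1.00,  0.89, -0.55],
                [ 0.89,  1.00, -0.80],
                [-0.55, -0.80,  1.00]])

cov = np.outer(sigma, sigma) * rho           # sigma^2_ij = sigma_i * rho_ij * sigma_j

def delta_chi2(X_new):
    """Delta chi^2 for a new-physics contribution X_new = (S, T, U)."""
    d = X_new - X_bar
    return d @ np.linalg.solve(cov, d)

# Hypothetical mirror-fermion contribution (illustrative numbers only).
print(f"Delta chi^2 = {delta_chi2(np.array([0.10, 0.12, 0.01])):.2f}")
```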
Phenomenology of the Mirror Fermion Model and H → γγ
Once the parameters of the model are constrained to reproduce the Standard Model single Higgs amplitude to within ±10% and to be allowed by a fit to the precision electroweak data, there is very little freedom left to adjust parameters. The differential cross section for gg → HH is shown for allowed parameters in Fig. 12 and it is clear that this class of models does not allow for a large enhancement of the HH production rate. The exact cross sections include both Standard Model t and b contributions, while the low energy theorem curves include the infinite mass limit of the heavy quark contribution. The largest allowed enhancement is found for ∆ = −0.1 and in this case, the total cross section, pp → HH is enhanced by ∼ 17% over the Standard Model rate.
The mirror fermions also contribute to the rate for H → γγ 5 . We again consider each mirror family to be degenerate between the charge 2 3 and charge − 1 3 quarks, and the two families to be split by a mass difference Mδ. In the limit m H << m t , M W , M [87], where we impose only the angle relation from Eq. 41 and expand for small δ. In the limit δ = 0 (and therefore θ b − = θ t − from Eq. 41), the branching ratio into photons cannot be larger than in the Standard Model.
We relate the deviations in the photon decay branching ratio to the deviation ∆ from the Standard Model single Higgs production rate 6 , Imposing only the bounds from electroweak precision observables, and performing a general scan over the input parameters δ, θ b + , θ b − , θ t + (fixing θ t − through Eq. 41, M = 800 GeV and δ in the range {−0.5; 2}), we find that the Higgs branching ratio into photons can have large differences from the Standard Model predictions, with suppressions as large as 90% and enhancements up to 10%. Requiring also the single Higgs production rate to be 5 We consider only the contributions of heavy mirror quarks. Heavy leptons can also affect the H → γγ rate [82][83][84][85][86]. 6 This result holds for arbitrary values of the parameters. Fig. 11(a), where θ t − = π 2 and a −10% deviation from the Standard Model prediction for the gg → H rate is allowed, only small enhancements (up to +10%) of the H → γγ rate are allowed. For a +10% enhancement in the single Higgs rate over the Standard Model prediction (Fig. 11(c)), the branching ratio into photons deviates from the Standard Model prediction by at most by a few percent. We show how these deviations depend on the free input parameters δ, sin θ b + in Fig. 13, where we focus on ∆ = 0 and pick two values of δ which are allowed by the electroweak fit over all the range of θ b + (with θ t − = π 2 ). The clear conclusion is that the restrictions from precision electroweak data, combined with a single Higgs production rate close to the Standard Model prediction, do not allow for significant deviations in the H → γγ rate in this class of models.
IV. CONNECTION TO GLUON-HIGGS DIMENSION SIX OPERATORS
An interesting idea [36] is to combine single and double Higgs production to gain insights on the mechanism giving mass to the particles that contribute to these loop-mediated processes. Including contributions up to dimension-6 operators, the effective Lagrangian responsible for the Higgs-gluon interactions can be written as Particles whose mass arises entirely from renormalizable Higgs couplings induce an operator If the particle receives contributions to its mass from other sources as well, an additional arises. In the Standard Model c SM 1 = 0, c SM 2 = 1. The two operators contribute differently to Higgs single and pair production and the different rates in these channels constrain the coefficients c 1 and c 2 . Following [36], one can derive these two coefficients in a background field approach. The Higgs field is treated as a background field, and the masses of the heavy particles become thresholds in the running of α s . Matching the low-and high-energy theories [47,88], where M(H) is the Higgs-dependent mass matrix and δb f = 2/3 for fermions in the fundamental representation of the color group. This yields the effective Lagrangian We write the determinant of the mass matrix as where P is a polynomial of the Yukawa couplings λ i and fermionic masses m i and in general , and all the higher order derivatives vanish before electroweak symmetry breaking, then the Higgs production rates via gluon fusion in the heavy quark limit are exactly as in the Standard Model 7 . This is the case in the singlet top partner model, where F i (H/v) = H/v and therefore c 1,2 = c SM 1,2 . Interestingly, one can have the same single Higgs production rate as in the Standard Model, but a different double Higgs rate, only for F ′′ i (0) = 0. If also the first condition, F ′ i (0) = 1 + F i (0), is not met, then the single Higgs rate is not Standard Model like. In such a case, we note that for F i independent of Yukawa couplings and fermionic masses, the Higgs rates do not depend on the details of the fermion sector [34] and deviations can arise only from changes to the Higgs potential. If F i depends on the Yukawa couplings and fermionic masses, the Higgs rates will in general be related to these parameters. Such a situation occurs for example in the mirror fermion model. In this case We define In terms of the physical parameters, For β b → 0, c b 1 and c b 2 go to twice the Standard Model value. In this limit, the vector contributions to the fermion mass matrix vanish, and the masses come entirely from electroweak symmetry breaking. Since there are two quarks, an extra factor of two arises. In c t 2 one clearly sees the +1 contribution coming from the Standard Model top quark.
The coefficients governing single and double Higgs production are then The two rates depend on the two independent parameters β t , β b from the top and bottom sectors. Even if we require the single Higgs rate, gg → H, to be close to the Standard Model value, we are left with an independent parameter that can yield completely independent variations in the double Higgs rate.
This comparison is illustrated in Fig. 14. In the singlet case, c_1 = 0 and deviations in single and double Higgs rates must be of the same order of magnitude. In the mirror case, c_1 can deviate from zero, which removes the close relationship between single and double Higgs production.
In terms of the parameters of the mirror fermion model, The term in the curly brackets correctly reproduces 1 + ∆ box from Eq. 49 for ∆ = 0, θ t − = π 2 . A large effect in the double Higgs rate requires large c 1 , and, in turn, β t ∼ 1. This is seen in Fig. 15, where we fix β b to reproduce the single Higgs rate within 10% the Standard Model value. However, from Eq. 67 β t → 1 implies δ → −1 or δ → ∞. These are not viable solutions. The first one corresponds to massless quarks. The second one requires non-perturbative interactions with the Higgs (large λ B , λ D ) for heavy quarks (large λ E , λ F ), as in Eq. 66. In the mirror fermion model discussed in this paper, large deviations in the gg → HH rate do not occur.
V. CONCLUSIONS
We analysed double Higgs production from gg → HH in the Standard Model and in models with additional heavy vector or chiral quarks. In the Standard Model, we compared the approximate results in the large top mass expansion with the exact cross section, and analysed the dependence of the production rate on the choice of the renormalization/factorization scale µ and on the PDF sets. As is well known [42,57], the low energy theorems fail to accurately reproduce both the total and differential double Higgs cross sections. The differential distributions are poorly estimated by the low energy theorems and predict a large tail at high invariant masses. The discrepancy is smallest for the scale choice µ = 2m H , yielding a 10 − 25% difference from the exact calculation of the total rate. Further, the predictions of the large top mass expansion depend sensitively on the choice of PDFs. Inclusion of higher order terms in the large mass expansion does not improve the convergence towards the exact results.
We discussed how the combination of single and double Higgs production from gluon fusion might give insight into the mechanism giving mass to quarks. The parameters of models with new heavy fermions are strongly constrained both by the observed rate for single Higgs production and by precision electroweak data. Therefore, in the two examples of models with new heavy fermions which we studied, the constraints from the observed gg → H rate, combined with precision electroweak data, do not allow large deviations of the gg → HH rate from the Standard Model prediction.
APPENDIX A

We present here some useful formulae for the mirror fermion model.
The parameters λ_i appearing in the mass Lagrangian (31) can be expressed in terms of the physical masses and mixing angles. Similar relations hold for the corresponding parameters in the bottom sector, with M_Ti → M_Bi and θ^t_P → θ^b_P. The charged current interactions among quarks of charge Q and (Q − 1) involve the left- and right-handed mixing matrices, and these relations can be rewritten in terms of the mixing angles. The neutral current interactions among quarks of charge Q involve the matrices

X^L_{T1T1} = cos² θ^t_L,  X^L_{T1T2} = X^L_{T2T1} = sin θ^t_L cos θ^t_L,  X^L_{T2T2} = sin² θ^t_L,
X^R_{T1T1} = sin² θ^t_R,  X^R_{T1T2} = X^R_{T2T1} = − sin θ^t_R cos θ^t_R,  X^R_{T2T2} = cos² θ^t_R.  (A6)

The same relations, up to an overall minus sign, hold in the bottom sector. In more compact form, the plus sign holds in the top sector and the minus sign in the bottom sector.
Sulfonation and Characterization of Styrene-Indene Copolymers for the Development of Proton Conducting Polymer Membranes
: The aim of this work is to obtain polymer precursors based on styrene copolymers with distinct degrees of sulfonation, as an alternative material for fuel cell membranes. Acetyl sulfate was used to carry out the sulfonation and the performance of the polyelectrolyte was evaluated based on the content of acid polar groups incorporated into the macromolecular chain. Polymeric films were produced by blending the sulfonated styrene-indene copolymer with poly(vinylidene fluoride). The degree of sulfonation of the polymer was strongly affected by the sulfonation reaction parameters, with a direct impact on the ionic exchange capacity and the ionic conductivity of the sulfonated polymers and the membranes obtained from them. The films produced with the blends showed more suitable mechanical properties, although the conductivity of the membranes was still lower than that of commercially available membranes used in fuel cells.
Introduction
Proton conducting polymer electrolyte membranes, also known as proton exchange membranes (PEM), have been receiving great attention due to their application in a variety of electrochemistry-based technologies, such as batteries, supercapacitors, electrochromic windows, displays or sensors, and fuel cells [1][2][3] .The development of fuel cells (FC), in particular, has been the subject of many studies during the last decades partly due to environmental concerns related to traditional energy sources [4,5] .
Proton exchange membrane fuel cells (PEMFC) are well established since the early 1960s, being successfully used as electrical power sources for spacecrafts.More recently, effort is being made to expand their application to mass products, such as electrical vehicles and portable devices [6] , increasing interest in materials used as electrolytes for FC, i.e. permselective ion-exchange membranes, such as Nafion ® .Polymeric materials may behave as electrolytes depending on their ionic conductivity characteristics, which may be manipulated via functionalization, i.e., the attachment of ionizable groups (e.g.sulfonic) to the organic polymer backbone.
Sulfonation is a powerful and versatile technique which can be used to render some polymers proton conductive and hydrophilic [7,8] .The degree of sulfonation can be controlled and adjusted to maximize, for instance, polymer proton conductivity [2,6] .In Nafion ® membranes, sulfonic ionizable groups (-SO 3 H) are covalently attached to the side chains of the main fluorinated chains.These highly acidic groups, when sufficiently hydrated, dissociate producing H + , a mobile counter-ion [9] .
The high cost of Nafion® is driving the development of new materials, and hydrocarbon resins, such as those comprised of styrene-indene copolymer, may be an alternative for the production of less costly membranes, since their molecular structure allows the insertion of polar pendant groups in the chain, increasing conductivity and hydrophilicity [4]. In addition, a number of recent publications report on the use of sulfonated polystyrene for the production of ion exchange membranes [10] and, in this sense, styrene-indene copolymers could be a viable alternative as well. In this context, this work reports on the synthesis and characterization of sulfonated styrene-indene copolymers and the preparation and characterization of films produced with their blends with poly(vinylidene fluoride) (PVDF).
Experimental
A commercial grade styrene-indene copolymer hydrocarbon resin (Unilene BS 140) supplied by Braskem S.A. was sulfonated with acetyl sulfate, following the procedure described by Makoski and Lundberg [10] .The acetyl sulfate solution was prepared by mixing dichloroethane and acetic anhydride under inert atmosphere (N 2 ).The solution was cooled to 0 °C and sulfuric acid was then carefully added.This reaction mixture was stirred at room temperature until a homogeneous solution was achieved.An excess of acetic anhydride was used to quench any trace of residual water.The acetyl sulfate was prepared just prior to each sulfonation reaction.
Distinct amounts of the copolymer were dissolved in dichloroethane and this solution was heated to 60 °C until thorough solubilization of the samples and purged with N 2 for 30 minutes at 1 atm.After that, the acetyl sulfate solution was added using a separator funnel.The reaction mixture was kept at 60 °C under stirring for a variable period of time (see Table 1).The reaction was interrupted by adding an excess of methanol and, after 30 minutes, the reactional mixture was cooled to room temperature.Finally, the sulfonated material was isolated in hexane, re-precipitated in distilled water and dried in an oven at 70 °C.
PVDF commercial grade (Solef 1008/Solvay) was previously swollen and dissolved in N,N-dimethylformamide (DMF), using 10 g of PVDF in 3 mL of DMF, and a particular amount of the sulfonated resin was added to the solution under stirring. All PVDF/sulfonated blends (20 to 80 wt.% of PVDF) were obtained by casting the polymeric solution onto a glass plate and drying at 70 °C for 2 hours or until stabilization of the weight, i.e. the volatilization of all DMF. Even though this temperature was much inferior to the boiling temperature of the DMF, thorough volatilization of the small amount of solvent is expected to have occurred.
The sulfonated resins and the PVDF/sulfonated resin blends, of distinct composition, were investigated using impedance experiments.Films obtained from the samples were sandwiched between two stainless steel (SS304) electrodes assembled into an epoxy resin holder following the procedure described in the literature [11] .Thickness of the films varied within 20-32 µm and the area was about 1.5 cm 2 .Impedance measurements at 25 °C and at relative humidity of 70% were performed using an Autolab Pgstat 30/FRA 2 system operating in the 10 kHz-10 MHz frequency range and with 10 mV of amplitude of the sinusoidal voltage.
To evaluate water uptake (φ_w), or swelling, three samples of PVDF/sulfonated resin blends were dried under vacuum at room temperature until constant weight (w_dry). They were then immersed in water at room temperature and, after 24 hours, the swollen samples were removed from the water, slightly wiped with a dry cloth and immediately weighed (w_wet). Swelling was evaluated using φ_w = [(w_wet − w_dry)/w_dry] × 100 [12-14]. Morphology of the blends was investigated using a JSM 6060 (Jeol) scanning electron microscope (SEM) operating at 10 keV. SEM specimens were prepared by cryogenically fracturing the samples and gold-sputtering them prior to the analysis.
Polymer sulfonation was qualitatively evaluated following the -SO3H characteristic absorbance band on the FT-IR (Perkin Elmer Spectrum 1000) spectrum of the sulfonated copolymer. The degree of sulfonation (DS) and the ion-exchange capacity (IEC) of the sulfonated resins were determined [8,11] by dissolving around 0.3 g of the dry polymer in methyl alcohol and titrating it with a standard (0.02 M) NaOH solution, using phenolphthalein as indicator to enable the evaluation of the concentration (mols) of H+ released into the solution. Thermal stability of the sulfonated resins and the blends was evaluated via thermogravimetric analysis (TGA) using a TA Instruments 2050 analyzer, from 25 to 1000 °C, at a heating rate of 20 °C/min, under inert atmosphere. TGA was used to analyse the resin samples, before and after sulfonation, and the 50% PVDF/50% sulfonated resin blend.
Results
Sulfonation of the styrene-indene copolymer is believed to have occurred in both styrene and indene units [3] , and a possible reaction scheme is shown in Figure 1.The insertion of sulfonic groups (-HSO 3 ) in the polymer chain may favour proton conductivity.The degree of sulfonation (DS) achieved under different reaction conditions is shown in Table 1.DS varied in the 12.3-55.6%range, increasing with the concentration of the sulfonating agent and also with the duration of the reaction.Water sorption varied with the extent of sulfonation, i.e. the higher the DS, the greater the swelling and the water solubility.In this study, sulfonated resins with DS higher than 45% were fully soluble in water at room temperature.
The yield of the sulfonation reaction, also shown in Table 1, varied between 65.4 and 80.2%, according to the specific reaction conditions.It can be observed that yield decreased with the increase in DS.For a DS of 12%, the polymer resin could be easily collected.On the other hand, for DS higher than 45%, recovery of the polymer was hindered by its high solubility in water, decreasing the estimated reaction yield.The IEC varied between 0.99 and 3.55 mEq.g -1 (see Table 1), in a way that the higher the DS, the higher the IEC, as expected.
Figure 2 shows the FT-IR spectra of non-sulfonated and sulfonated styrene-indene copolymers (samples BS-2 and BS-6).The spectra confirmed the presence of sulfonated groups in the copolymers after sulfonation, since the 1156, 1127, 1034 and 1006 cm -1 peaks are all associated with the stretching vibrations of the sulfonic group [1,2,15,16] .The absorption bands at 1006 and 1127 cm -1 are associated with in-plane band vibrations of the aromatic ring para-substituted with the sulfonated group and with the sulfonated anion attached to the aromatic ring, respectively.
Besides, the bands at 1034 and 1156 cm -1 represent the symmetric and asymmetric stretching vibrations of the sulfonic groups, respectively.It may also be seen in Figure 2 that the intensity of these bands increased with the degree of sulfonation (samples BS-2 -DS = 25% and BS-6 -DS = 55%).
The TGA curves obtained for the non-sulfonated and the sulfonated styrene-indene copolymers are shown in Figure 3. The sulfonated resin showed lower thermal stability in comparison with the non-sulfonated one. In fact, sulfonated resins showed significant weight loss starting as low as 80 °C, which is related to the loss of absorbed water (moisture) [17]. Hydrophilicity of the sulfonated copolymers, which increased with the degree of sulfonation, may be responsible for this early weight loss, i.e. the higher the DS, the higher the weight loss. The weight loss event around 260 °C, which is associated with the thermal degradation of the sulfonic groups [2], was more significant for samples with higher content of sulfonic groups. The resin degraded around 340 °C and the sulfonated resins showed a similarly high residue content (at 1000 °C) in comparison with the original polymer. The residue content increased with the sulfonation degree, e.g. from 1.3% for the non-sulfonated copolymer to 18.4% for BS-2 and to 22.4% for BS-6. The decomposition of the sulfonic acid groups may promote carbonization of the polymer, being perhaps responsible for the high residue content of the sulfonated samples in comparison with the non-sulfonated copolymers.
Impedance spectroscopy results of the sulfonated resins and the blends were compared with those of a commercial polymeric membrane used as reference.The Nyquist plots of the BS-4 and BS-6 are given in Figure 4a and those of BS-2 and the commercial membrane in Figure 4b.The BS-4 and BS-6 showed two time constants.In the high frequency range, a capacitive loop was detected and, for lower frequencies, the diffusion of charge transfer in the polymer film was dominant.Moreover, on decreasing the sulfonation degree from 55.6% to 25% (samples BS-6 and BS-2, respectively), the diameter of the capacitive loop substantially decreased, indicating a decrease in membrane resistance.
The obtained EIS spectrum for the reference sample was similar to that found in the literature [18]. Regarding the produced resins, BS-2 showed less ionic resistance than the reference. Figure 5 displays the equivalent circuit (EC) which was found to satisfactorily fit the EIS diagrams shown in Figure 4, corresponding to R1.(CPE.[Rb.W]), where R1 represents the polymer/electrode interfacial resistance [19,20], Rb represents the bulk resistance, and CPE is the impedance related to a constant phase element. The capacitance was replaced by a CPE impedance, which takes into account the effects related to roughness and non-homogeneity of the electrode surface.

In membranes for fuel cell applications, a continuous network of a proton-conducting phase within the material is essential. For the studied system, the sulfonated resin must be present as a continuous network within the PVDF matrix. Morphology characterization of the blends with low DS (i.e. BS-1 and BS-2) evidenced the presence of a homogeneous phase with nonporous aspect (Figures 6a, b). However, many pores may already be observed for the BS-3 blends (Figure 6c), which may be attributed to the incompatibility between the two compounds that could promote retention of the DMF within the film, later producing pores during solvent evaporation. When DS increased further (as for BS-6), two phases were clearly visible (Figure 6d). In the latter, the hydrophilic ionic phase showed some degree of segregation, indicating higher incompatibility among the blend components. Based on these images, it can be inferred that connectivity and aggregation were dependent on DS.
Conclusions
Hydrocarbon resins of styrene-indene copolymers were readily functionalized, producing a sulfonated resin containing up to 55% of sulfonic groups.Furthermore, the degree of sulfonation was easily tailored within a wide range and showed a direct impact on the ion-exchange capacity and the ionic conductivity of the polymers.
Thermogravimetric analyses showed that the increase in the degree of sulfonation and IEC yielded greater weight loss at lower temperatures, a consequence of the liberation of adsorbed water and the decomposition of sulfonic groups.Besides, conductivity of the sulfonated resin called BS-2 was higher than that of the commercial reference membrane, showing an interesting potential to be used for ion exchange membranes.
Swelling of the sulfonated resins increased with the degree of sulfonation, and this characteristic was significantly reduced after blending with PVDF.The blend containing 70 wt.(%) of BS-2 achieved the highest conductivity values and these results were found to correlate well with the observed blend morphology.Nevertheless, these conductivity values were much lower than that found for the pure sulfonated resin.impedance, which takes into account the effects related to roughness and non-homogeneity of the electrode surface.The CPE impedance is given by , where CPE represents an ideal capacitor for n = 1.0 and a resistor for n = 0.0.The Warburg impedance, W, which is in serial connection with R b , takes into account the diffusion process.The high frequency semicircle is associated with a combination of Rb.CPE and W, corresponding to the bulk properties and the effects related to dielectric relaxations.
Thus, the ionic conductivity of the film can be calculated by where: δ is the ionic conductivity, d the film thickness and S the electrode area contacting the film.The ionic conductivity of the sulfonated polymers was determined from the fitted R b values.The simulated values corresponding to the data given in Figure 4 are shown in Table 2. Analysis of the fitted data indicates that the BS-2 sample (DS = 25%) presented the lowest R b and Warburg impedance values among the tested samples, favoring ionic transport (conductivity) in the membranes, i.e. the resistance to charge transport within the polymer film decreased.For higher DS values (i.e.samples BS-4 and BS-6), resistance increased, which may be explained by the stronger interaction between the sulfonic groups in the highly sulfonated samples, since these groups show greater resistance to dissociation [21] .
Moreover, a CPE element with n close to 0.8 indicates a non-homogeneous electrode surface.This suggests that morphology and roughness of the membranes varied with the composition, affecting the resistance of the polymer films.The calculated ionic conductivity of the BS-2 film (with DS = 25%) was higher than that of the reference (4.2 × 10 -3 and 3.5 × 10 -4 Ω -1 cm -1 , respectively).
Swelling due to water absorption is a key factor regarding mechanical integrity of the membranes and excessively high water content leads to dimensional changes and premature mechanical failure.In order to decrease solubility of highly sulfonated membranes, polymer blends were prepared by mixing the resin with PVDF.The BS-2 sample was chosen to produce the blends because it showed the lowest solubility in water and the highest conductivity.Table 3 shows the measured swelling of the various BS-2/PVDF blends and nearly no swelling was observed for a low content of sulfonated resin, although it considerably increased with the BS-2 content.
Table 2. Resistance and conductivity of the samples and the parameters calculated from the impedance results (Figure 4) using the equivalent circuit shown in Figure 5.
In the titration expression used to determine the degree of sulfonation: M NaOH is the concentration of the NaOH solution (mol/L), V NaOH is the volume of the NaOH solution used to neutralize the sulfonated polymer solution (mL), W is the sample weight (g), and 110 and 81 are the molecular weights of the repeating unit of the styrene-indene copolymer and of the -SO3H group, respectively.
Table 1. Sulfonation parameters and respective reaction yield, degree of sulfonation (DS) and ion exchange capacity (IEC) obtained.
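The exact expression used for the degree of sulfonation is not reproduced above; as a rough illustration only, the following sketch assumes the titration formula commonly used for sulfonated polystyrene-type resins (an assumption, not necessarily the authors' exact expression) together with the conductivity relation δ = d/(Rb·S) quoted in the text. All numerical inputs are hypothetical.

```python
# Illustrative sketch only: the DS formula below is the form commonly used for
# sulfonated polystyrene-type resins and is an assumption, not taken from the paper.
def degree_of_sulfonation(M_naoh, V_naoh_mL, W_g, mw_repeat=110.0, mw_so3h=81.0):
    """DS (%) from acid-base titration of the sulfonated resin (assumed formula)."""
    n_so3h = M_naoh * V_naoh_mL / 1000.0          # mol of -SO3H groups neutralized
    return 100.0 * (mw_repeat * n_so3h) / (W_g - mw_so3h * n_so3h)

def ionic_conductivity(d_cm, R_b_ohm, S_cm2):
    """delta = d / (R_b * S), as given in the text (ohm^-1 cm^-1)."""
    return d_cm / (R_b_ohm * S_cm2)

# Hypothetical numbers, for illustration only (not the measured values)
print(degree_of_sulfonation(M_naoh=0.05, V_naoh_mL=10.0, W_g=0.20))   # DS in %
print(ionic_conductivity(d_cm=0.010, R_b_ohm=2.4, S_cm2=1.0))         # ~4e-3 ohm^-1 cm^-1
```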
Statistical methods for censored survival data.
Methods of statistical analysis of censored survival times are briefly reviewed and illustrated by application to clinical trials data. These include estimation of the survival curve, nonparametric tests to compare several survival curves, tests for trend, and regression analysis. Extensions of the methodology are made for application to epidemiologic case-control studies. These are used to estimate relative risks for leukemia associated with radiation exposures. A final section provides some annotated references to the recent literature.
Introduction
Censored survival data arise in a wide variety of statistical investigations. In clinical trials one measures duration of response from start of treatment until relapse or death due to disease. Observations on response time are censored for those subjects still in remission at the study's end, as they are for patients lost to follow-up during the course of the study. Animal carcinogenesis studies, such as used by the United States Food and Drug Administration to determine the safety of food additives, provide another example. Here the endpoint is the age at diagnosis of a particular kind of cancer, censorship being imposed by death due to other causes, natural or artificial. In tests of the reliability of airplane components, failure times are measured from the start of testing until failure of the component, with censorship imposed by the failure of other components or the necessity of analyzing the data before all items have failed. Figure 1 illustrates the results for the control group in a clinical trial designed to investigate the effects of combined chemotherapy as an adjunct to surgery and radiation in the treatment of childhood rhabdomyosarcoma (1). The endpoint for analysis was the reappearance of tumor, whether at the site of original treatment or through distant metastasis. Children who remained disease-free at the time the data were analyzed had censored observations. In addition to the control arm IA, there were two groups of children who received the drugs actinomycin-D (AMD) and vincristine (VCR): group IB patients were concurrently randomized with the controls, both these groups having apparently had their tumors completely resected; group IIA consisted of patients with microscopic residual disease at the margin of surgical resection.
Interim data from all three arms are presented in Table 1. Note that the censored observations for arm IA, those in the column labeled "disease-free", are smaller in the table than they are in the figure. This is because the figure was drawn from data computed at a later point in time, when additional follow-up was available for patients who had not already died.
Analysis of such data has several goals. For each of the comparison groups one wants an estimate of the survival curve, the probability of surviving t units of time. Statistical tests are required to determine whether the observed differences between the curves are real, or are simply chance effects. If real, a method of quantifying the nature of the differences is desirable. Finally there may be available concomitant observations, including continuous measurements such as age at diagnosis, whose joint effects on survival are important to determine.
Estimation of Survival Curves
When analyzing several groups of survival times the first step is to form a series of 2 × r tables as illustrated in Table 2, one for each of the distinct times tk at which deaths occur. The entries nik refer to the total number of subjects in the i-th group who remain "at risk", i.e., alive and under observation, just prior to time tk. The tabular entries dik and sik denote the numbers of those who die at tk, and survive tk, respectively. Table 3 illustrates the calculation of the first three such tables for the data in Table 1. Here r = 3 and t1 = 2, t2 = 3, and t3 = 9 months. Note that the tables for increasing tk refer to a constantly diminishing population "at risk" as additional subjects die or are withdrawn (censored) from further observation. Kaplan and Meier (2) derived the maximum likelihood nonparametric estimate of the survival curve based on censored data. This may be calculated recursively, starting from P̂(t0) = 1, and by using the formula (1): P̂(tk) = P̂(tk−1) × (nk − dk)/nk for k = 1, 2, . . ., K. (The group index i has been suppressed for clarity.) In other words, the probability of surviving past tk is estimated as the probability of surviving past tk−1 times the conditional probability of surviving past tk given survival to tk. Because of this multiplicative structure, Kaplan and Meier refer to their estimate as the product limit (PL) estimate. In case there is no censorship in the data, it reduces to the familiar empirical distribution function. Table 4 shows the calculation of the relapse-free survival curve from the interim data in Table 1 for treatment group IA. The corresponding curves calculated from final study data for all three treatment groups are shown in Figure 2. Numbers above each curve at annual intervals in this figure refer to numbers of patients still at risk in each group. These are an important means of judging the stability of the estimates, which can in fact be quite unstable in the "tail" of the survival distribution where few subjects remain at risk.
The variance of the PL estimate may also be calculated from the same tabled quantities, using Greenwood's formula V̂{P̂(t)} = P̂(t)² Σ_{tk ≤ t} dk/[nk(nk − dk)], with the understanding that it is applied sequentially to tied observations. In large samples, P̂(t) is approximately normally distributed with mean equal to the true survival function P(t) and a variance estimated as shown above (2,3). Note that neither P̂(t) nor V̂{P̂(t)} will change after the last uncensored response time in each group, even though additional subjects continue to be withdrawn from observation. In this region the estimated variance often does not accurately reflect the true variability in the survival curve, which will be substantial unless large numbers of subjects remain on study.
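To make the recursion and the variance calculation concrete, here is a minimal NumPy sketch of the product-limit estimate with Greenwood's variance; the event/censoring indicator convention and the toy data are assumptions for illustration, not values from the paper.

```python
import numpy as np

def product_limit(times, events):
    """Kaplan-Meier PL estimate with Greenwood's variance.
    times  : observed follow-up times
    events : 1 = death/relapse observed, 0 = censored
    Returns (distinct death time, survival estimate, variance) tuples."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    death_times = np.unique(times[events == 1])
    surv, var_term = 1.0, 0.0
    out = []
    for tk in death_times:
        n_k = np.sum(times >= tk)                      # still at risk just before tk
        d_k = np.sum((times == tk) & (events == 1))    # deaths at tk
        surv *= (n_k - d_k) / n_k                      # recursive PL update
        var_term += d_k / (n_k * (n_k - d_k))          # Greenwood accumulation
        out.append((tk, surv, surv**2 * var_term))
    return out

# Toy data (hypothetical): 9 and 12 are observed relapses, 15 and 20 are censored
for t, s, v in product_limit([9, 12, 15, 20], [1, 1, 0, 0]):
    print(f"t={t:>4.0f}  S(t)={s:.3f}  var={v:.4f}")
```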
Comparison of Survival Curves
A simple but powerful non-parametric test for the comparison of r survival curves with censored data may also be calculated from the basic data shown in Table 2. This test exploits the fact that, under the null hypothesis of no difference in the underlying survival distributions and conditional upon fixed values for the marginal totals in each 2 × r table, the vector dk = (d1k, . . ., drk)' of observed deaths at tk has an r-dimensional hypergeometric distribution. Consequently the null expectation of the number of deaths in group i at tk is eik = E(dik) = nik (Dk/Nk) (3), i.e., the number at risk in the i-th group times the death rate for all r groups combined (see Table 3 for an illustration of this calculation). The covariance matrix Vk of dk has, under the null hypothesis, an (i,j) component equal to nik(δij Nk − njk)Dk(Nk − Dk)/[Nk²(Nk − 1)], where δij = 1 if i = j and 0 otherwise. The main idea behind the test is to sum up the statistics calculated from each of the K tables into a vector O = Σk dk of observed numbers of deaths in each group, a vector E = Σk ek of expected numbers of deaths, and a summary covariance matrix V = Σk Vk. Since the 2 × r tables refer to overlapping sets of subjects they are not, strictly speaking, statistically independent. Nevertheless Cox (4) has shown that V is an appropriate large sample covariance matrix for O − E. Since Σ Oi = Σ Ei, i.e. the totals of observed and expected deaths in all r groups agree, V is singular. However, defining O* and E* to be the first r − 1 components of O and E, and V* to be the (r − 1) × (r − 1) upper left hand corner of V, a test statistic for testing equality of the r survival curves is obtained as T1 = (O* − E*)'V*⁻¹(O* − E*). This is approximately distributed as chi-square on r − 1 degrees of freedom under the null hypothesis.
The test T1 was first proposed for survival data by Mantel (5). Cox (6) later derived it from likelihood theory under the proportional hazards (PH) model, in which the instantaneous death rates in the r groups are assumed to be in constant ratio throughout the follow-up period (see below). Peto and Peto (7) argued that it was an asymptotically efficient test under Cox's model and named it the "log rank" test.
A conservative approximation to T1 which requires no matrix inversion is given by the familiar chi-square formula T2 = Σi (Oi − Ei)²/Ei. While T2 ≤ T1, in fact the two statistics will be quite close, provided that there are few ties among the uncensored survival times (i.e., most of the Dk in Table 2 are unity) and that the patterns of censorship operating in the r groups are not grossly different (8,9). Note that the ½ continuity correction should not be used with survival data. Table 5 illustrates the manner of presentation of the summary and test statistics. Note the calculation of the ratio O/E of observed to expected numbers of deaths in each treatment group. These are very useful as measures of treatment effect, since their ratios, e.g., (O1/E1) ÷ (O2/E2), approximate the ratios of death rates in the respective treatment groups (10).
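The following short sketch assembles the O and E vectors from a set of death times and computes the conservative chi-square T2 = Σ(Oi − Ei)²/Ei described above; the two-group toy data are hypothetical and only meant to show the mechanics.

```python
import numpy as np

def logrank_OE(times, events, groups):
    """Observed and expected deaths per group, plus the conservative chi-square T2."""
    times, events, groups = map(np.asarray, (times, events, groups))
    labels = np.unique(groups)
    O = np.zeros(len(labels))
    E = np.zeros(len(labels))
    for tk in np.unique(times[events == 1]):           # each distinct death time
        at_risk = times >= tk
        N_k = at_risk.sum()
        D_k = ((times == tk) & (events == 1)).sum()     # total deaths at tk
        for i, g in enumerate(labels):
            n_ik = (at_risk & (groups == g)).sum()
            d_ik = ((times == tk) & (events == 1) & (groups == g)).sum()
            O[i] += d_ik
            E[i] += n_ik * D_k / N_k                    # e_ik = n_ik * (D_k / N_k)
    T2 = np.sum((O - E) ** 2 / E)
    return O, E, T2

# Hypothetical two-group example
t = [6, 7, 10, 13, 16, 22, 3, 4, 5, 8, 9, 11]
e = [1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1]
g = ["A"] * 6 + ["B"] * 6
print(logrank_OE(t, e, g))
```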
Alternate Weighting Schemes
The summary statistics O − E weight the observed differences dk − ek in each table in a manner which is appropriate to the PH model already mentioned. However this is not the only possible weighting scheme. Multiplying the observed differences by Nk, the total number of subjects in the k-th table, and then summing gives more weight to the earlier times tk when larger numbers are at risk. This leads to the scores W = Σk Nk(dk − ek) (7), with covariance matrix VW = Σk Nk²Vk (8), and the test statistic T3 = W*'VW*⁻¹W* (9), where asterisks (*) denote the corresponding r − 1 dimensional quantities. A conservative approximation T4 to T3, which requires no matrix inversion, may also be computed. The scores Wi may also be obtained from a pairwise comparison of the observations in the i-th treatment group with those in the remaining r − 1 groups. Each such pair is assigned the value +1 (or −1) according as the true survival time for the first pair member is known to be smaller than (or larger than) that for the second member. Ties or indeterminate comparisons due to censorship are assigned 0 values. Gehan (11) suggested the use of such scores for the comparison of two samples (r = 2). In this case T4 reduces to the familiar Wilcoxon rank sum test in the absence of ties and censorship. Breslow (12) extended this work to r > 2 samples, proposing the covariance matrix VW and the statistic T3. These latter statistics, like V and T1, are valid for situations where the patterns of censorship operative in the r treatment groups are unequal, as in animal carcinogenesis studies where there is differential toxic mortality. The conservative approximation T4, like T2, is strictly valid only where there is equality of censorship.
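A small extension of the previous sketch computes the Nk-weighted (Gehan-Breslow) scores W = Σk Nk(dk − ek) alongside the unweighted O − E differences; again the data layout is hypothetical.

```python
import numpy as np

def weighted_scores(times, events, groups):
    """Unweighted (O - E) and N_k-weighted (Gehan-Breslow) score vectors."""
    times, events, groups = map(np.asarray, (times, events, groups))
    labels = np.unique(groups)
    score_logrank = np.zeros(len(labels))    # sum_k (d_ik - e_ik)
    score_wilcoxon = np.zeros(len(labels))   # sum_k N_k (d_ik - e_ik)
    for tk in np.unique(times[events == 1]):
        at_risk = times >= tk
        N_k = at_risk.sum()
        D_k = ((times == tk) & (events == 1)).sum()
        for i, g in enumerate(labels):
            n_ik = (at_risk & (groups == g)).sum()
            d_ik = ((times == tk) & (events == 1) & (groups == g)).sum()
            diff = d_ik - n_ik * D_k / N_k
            score_logrank[i] += diff
            score_wilcoxon[i] += N_k * diff   # early tables (large N_k) get more weight
    return score_logrank, score_wilcoxon
```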
In practice, the tests T1 and T3 often yield rather similar numerical values (see Table 5). However, this is not always true, and some comment on the proper interpretation when only one statistic is significant is in order. Since T3 weights early values more heavily, it may achieve significance when there is an early separation between the survival curves which later come together or even cross over. T1 gives more weight to the later appearing deaths. A large discrepancy between T1 and T3 generally indicates an interaction between treatment and time on the instantaneous death rates, which is worthy of investigation in its own right.
Testing for Trend
Often the r comparison groups correspond to r levels x1 < x2 < . . . < xr of a quantitative variable such as dose. Global chi-square tests such as T1 through T4 lack statistical power in such situations since they take no account of the natural order of the groups. One needs a single degree of freedom test for trend in survival with increasing dose.
Fortunately such a test is readily calculated from the summary statistics already at hand. In the case of the log rank analysis, T5 = {x'(O − E)}²/(x'Vx) is a single degree of freedom chi-square for a linear trend of O − E with x. Tarone (13) has suggested using T6 = T1 − T5 as a chi-square on r − 2 degrees of freedom for deviations from linearity. An approximation T7 to T5, which only requires calculation of the O and E vectors, is also available. Similarly, when using the W scores, a corresponding statistic T8 provides a test for linear trend and T9 = T3 − T8 a test for deviations from linearity.
To illustrate these calculations by using the summary data in Table 5, make the fictitious assumption that the three treatment groups IA, IB, and IIA correspond to dose levels x1 = 0, x2 = 1, x3 = 2. The statistics for trend are T5 = 9.17 (p = 0.002), T7 = 8.72 (p = 0.003), and T8 = 10.02 (p = 0.002). The deviation chi-squares are T6 = 1.93 (NS) and T9 = 1.39 (NS). Hence, this would be a case where the already significant differences are largely explained on the basis of an apparent linear trend in survival with increasing dose.
Adjustment by Stratification
When the r comparison groups differ with respect to factors which influence survival, an analysis which corrects for their possible confounding effects is needed. This may be carried out very simply by dividing the population into strata which are more or less homogeneous internally with respect to the confounding factors. (Of course the number of confounders which may be accommodated simultaneously in this fashion is limited, since if strata become very large in number and small in size a substantial loss of comparative information may result.) Separate survival analyses are performed within each stratum by calculating the summary statistics O, E, and V defined earlier. These are cumulated by simple addition over strata and used to calculate adjusted test statistics T1, T2, T5, T6 and T7, in which the cumulated summary statistics replace the stratum specific ones. Likewise adjusted versions of T3, T8, and T9 use the cumulated W and VW statistics.
Such a stratified analysis was used for a trial of maintenance chemotherapy for children with acute lymphocytic leukemia (14). For this disease it is well known that the diagnostic white blood count (WBC) is an important prognostic factor. The treatment group, consisting of 152 children who received actinomycin (AMD) in addition to standard maintenance drugs, had a median WBC of 10,067; whereas the control group, consisting of 116 children who did not receive AMD, had a median WBC of 14,280. An analysis ignoring this difference in WBC compared the observed number of relapses in the treated and control groups, O1 = 100 and O2 = 81, with expected numbers of E1 = 113.00 and E2 = 68.00. This yielded an (unadjusted) chi-square of T2 = 3.98, which is just on the borderline of 5% statistical significance.
In order to determine whether the apparent effectiveness of AMD was due, at least in part, to the generally lower WBC's among treated patients, the entire sample of 268 children was divided into three strata as shown in Table 6. A separate calculation of the observed and expected numbers of relapses was made within each stratum, so the totals of O's and E's taken across each row of Table 6 agree. The adjusted expected numbers, El = 110.96 and E2 = 70.04, are now closer to the observed numbers and give an adjusted chi-square of T2 = 2.80, which is no longer statistically significant.
The estimated ratio of relapse rates in the treated vs. control group is (100/113.0) ÷ (81/68.0) = 0.74. When adjusted for WBC in three strata, it is (100/110.96) ÷ (81/70.04) = 0.78, again indicating less of a difference between treatment and control group.
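As a quick arithmetic check of the O/E rate-ratio summaries quoted above:

```python
# Ratio of relapse rates (treated vs. control) estimated from O/E summaries
unadjusted = (100 / 113.00) / (81 / 68.00)   # ~0.74
adjusted   = (100 / 110.96) / (81 / 70.04)   # ~0.78 after stratifying on WBC
print(round(unadjusted, 2), round(adjusted, 2))
```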
Regression Analysis of Survival Data
If the number of confounding variables is large, stratification breaks down since there will be many strata with just one or a few subjects. It may also be of interest to quantify the relationship between survival and several discrete or continuous concomitant variables. The usual regression model specifies that the survival times, or some transform such as their logarithm, are equal to a linear combination of the concomitant variables plus a random error term. Unfortunately, to generalize such models for use with censored data is awkward and computationally involved. Thus considerable interest was aroused by Cox (6) when he proposed a model formulated in terms of the effect of the regression variables on instantaneous death rates rather than on times of death per se. This model turned out to be quite tractable computationally and, as an added benefit, avoided any parametric assumptions about the shape of the underlying survival curve.
Cox's model is defined in terms of the time t specific death rate or hazard function λ(t|z) for an individual having a p-vector of covariates z. Specifically he assumes λ(t|z) = exp(β'z)λ0(t), where β is an unknown p-vector of parameters (regression coefficients), while λ0(t) is the unknown hazard or death rate function for an individual with a standard (z = 0) set of covariates. A consequence of this model is that the ratio of hazard functions for two individuals with different sets of covariates, λ(t|z1)/λ(t|z2) = exp{β'(z1 − z2)} (15), does not depend on time. Thus it is called the proportional hazards (PH) model.
Several authors (4, 6, 10, 15-17) have developed the likelihood analysis of the PH model from rather distinct points of view. Providing that there are no ties in the uncensored data, all derive for the log-likelihood function the expression L(β) = Σk [β'zk − ln Σ_{j∈R(tk)} exp(β'zj)], where R(tk) is the risk set of subjects still alive and under observation just prior to tk; zk is the covariate vector for the individual who dies at tk; and the outer summation is over all K true (uncensored) times of death. Different likelihoods arise in the case of ties.
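A minimal NumPy sketch of this log likelihood (no ties assumed) and its numerical maximization is given below; the use of scipy.optimize.minimize and the toy data are illustrative assumptions, not part of the original analysis.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_partial_likelihood(beta, times, events, Z):
    """Negative Cox log partial likelihood, assuming no tied death times."""
    beta = np.atleast_1d(beta)
    ll = 0.0
    for k in np.where(events == 1)[0]:
        risk_set = times >= times[k]                    # R(t_k): still at risk at t_k
        ll += Z[k] @ beta - np.log(np.exp(Z[risk_set] @ beta).sum())
    return -ll

# Hypothetical data: one covariate (e.g. log WBC), 6 subjects
times  = np.array([5.0, 8.0, 12.0, 14.0, 20.0, 25.0])
events = np.array([1, 1, 0, 1, 0, 1])
Z      = np.array([[1.2], [0.7], [0.3], [1.9], [0.1], [0.4]])

fit = minimize(neg_log_partial_likelihood, x0=np.zeros(1),
               args=(times, events, Z), method="BFGS")
print("beta_hat =", fit.x)   # exp(beta_hat) approximates the rate ratio per unit of z
```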
Taking the vector of first partial derivatives of L, setting it equal to 0 and solving the resulting nonlinear equations yields a maximum likelihood estimate β̂ for the regression coefficients. A covariance matrix for this estimate is obtained in the usual fashion by inversion of the negative of the matrix of second partials of L. An estimate of the underlying survival curve may also be obtained from the fitted model; notice that when β̂ = 0 this reduces to the PL estimate of Kaplan and Meier, calculated from the entire set of observations considered as one homogeneous sample. Table 7 illustrates the computer fitting of the PH model to the data on 268 leukemic children. Four regression variables were considered: z1 = log (WBC), z2 = age at diagnosis (years), z3 = z2², and z4 = 1 or 0, according as the patient was treated with AMD or was a control. These four variables were entered into the regression equation sequentially in order to demonstrate their effects on remission duration after adjustment for the preceding variables. A quadratic term in the age variable was required: children in the mid ranges from 2 to 6 years have a better prognosis than at either extreme. The multiplicative effect of treatment on the relapse rate is given by exp(β̂4) = exp(−0.220) = 0.80, which is quite comparable with the approximate value 0.78 obtained from the simpler stratified analysis. The likelihood ratio test for treatment effectiveness yields a chi-square of 2(−876.95 + 877.99) = 2.09; squaring the standardized regression coefficient gives a similar value (−1.45)² = 2.11. These are both even smaller than the value of 2.80 obtained after adjustment for WBC in three strata, so that the regression approach has in this case led to an even greater reduction in the statistical significance of the treatment comparison.
As a means of providing a graphical display of the fit of the model, the regression coefficients were used to calculate a prognostic score for each child using the formula S = z1β̂1 + z2β̂2 + z3β̂3 + z4β̂4 (21). Each of the covariates z was first normalized by subtracting off the mean, so that a score S = 0 represents a "typical" patient. The scores were then used to divide the sample into four groups within each of which PL estimates of the remission duration were calculated. These are plotted in Figure 3 together with predicted remission duration curves, estimated from the model, for specified values of S. Notice that the predicted curves lie further apart than do the "observed" curves for later days, while the reverse is true for earlier days. This behavior indicates a certain lack of fit of the model, namely that the baseline covariates have more of an effect on early rates of relapse than they do on later ones. The fit could be improved by use of time-dependent covariates z(t) as discussed by Cox (6).
The PH Model in Epidemiology: Applications to RERF Data
The methodology of survival analysis, especially the PH model developed in the last section, is also useful with epidemiologic studies of risk factors for chronic disease. In this context t often represents age, and the endpoint is diagnosis of, or death from, a particular disease in a previously disease-free individual. λ0(t) may then be interpreted as the age-specific incidence or mortality rate for a standard covariate set, while exp(β'z) represents the relative risk or rate ratio (RR) for a subject with covariates z. These are computed from the presumed risk factors under investigation and may themselves depend on age. Application of the previously discussed methodology based on the PH model is straightforward, at least in principle. A slight modification is that the risk sets R(tk) may change with age not only due to the loss of individuals from further observation, as in clinical trials, but also from the enrollment of new subjects in the study at later ages.
Prospective epidemiologic studies involve a cohort of disease-free persons who are enrolled at various ages and kept under continuous surveillance until they either develop the disease, or else are lost to further observation or die from another cause. In order to collect enough cases of rare chronic diseases, tens or even hundreds of thousands of persons may have to be followed over several years. For example, the Radiation Effects Research Foundation (RERF) has kept nearly 100,000 survivors of the Hiroshima and Nagasaki atomic bomb blasts under surveillance for nearly three decades, losing fewer than 0.1% to emigration. With such large cohorts the previously discussed iterative methods for estimation and testing of the parameters β and λ0 are neither feasible nor necessary.
One method of dealing with large cohorts is to partition the time or age axis into intervals, say ten or twenty, and to postulate a discrete time model for the conditional probabilities of failure within each one. Suppose there are K such intervals (0, t1], (t1, t2], . . ., (tK−1, tK], and set pk(z) = P[T ≤ tk | T > tk−1, z] (22) for k = 1, 2, . . ., K, where T denotes the random failure time (age at diagnosis or death) for a subject with covariates z. Such an individual contributes a term pk(z) to the likelihood if he develops the disease in the k-th interval, and a term {1 − pk(z)} if he survives it. If he dies of other causes (is censored) in the interval, Thompson (18) makes the sensible recommendation that he contribute a term √(1 − pk(z)), to reflect his survival disease-free over only a part of the interval.
Cox (6) suggested the use of the linear logistic model logit pk(z) = αk + β'z (23), where logit(p) = ln{p/(1 − p)} (24). This reduces to the PH model in the limit as K increases and the time or age intervals become infinitely small. The term exp(β'z) in Eq. (23) represents the odds ratio of disease occurrence in each interval, rather than the ratio of instantaneous failure rates.
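To illustrate how the interval contributions described above combine into a likelihood, here is a small sketch of the discrete-time logistic model, including the square-root contribution suggested for subjects censored within an interval; parameter values and data layout are hypothetical.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_likelihood(alpha, beta, intervals, outcomes, z):
    """Discrete-time logistic hazard model.
    alpha     : per-interval intercepts alpha_k
    beta      : covariate coefficients
    intervals : index k of the last interval each subject was observed in
    outcomes  : 'case' (failed in interval k), 'survived', or 'censored' within k
    z         : covariate matrix (subjects x covariates)"""
    ll = 0.0
    for k_last, outcome, zi in zip(intervals, outcomes, z):
        for k in range(k_last + 1):
            p_k = expit(alpha[k] + zi @ beta)      # logit p_k(z) = alpha_k + beta'z
            if k < k_last:
                ll += np.log(1.0 - p_k)            # survived the earlier intervals
            elif outcome == "case":
                ll += np.log(p_k)                  # failed in interval k
            elif outcome == "survived":
                ll += np.log(1.0 - p_k)
            else:                                  # censored mid-interval
                ll += 0.5 * np.log(1.0 - p_k)      # sqrt(1 - p_k) contribution
    return ll

# Hypothetical example: 3 intervals, one covariate
alpha = np.array([-3.0, -2.5, -2.0])
beta = np.array([0.8])
print(log_likelihood(alpha, beta,
                     intervals=[2, 1, 2],
                     outcomes=["case", "censored", "survived"],
                     z=np.array([[1.0], [0.0], [0.5]])))
```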
A similar model was one of several used by Otake (19) to explore the effects of radiation on cause-specific mortality using RERF data. He divided the sample into five groups according to age at the time of bomb (ATB) and six classes according to estimated total radiation doses, and considered a variety of causes of death as the endpoint. If nij denotes the number of subjects in age group i and radiation class j, while dij denotes the number of deaths due to the specific cause, then his model states logit E(dij/nij) = αi + βj, subject to appropriate constraints on the parameters. This is a linear logistic model for the unconditional probabilities of death over a defined period (in this case 1950-1972) and so does not explicitly account for competing causes of death or differential losses to further observation which may be taking place in the 30 age × dose cells.
Cox's model based on the conditional probabilities would further divide the follow-up period into K intervals, say 1950-1954, 1955-1959, etc. If nijk then denotes the number of subjects who were age i ATB, received radiation dose j, and who remained alive at the midpoint of the k-th interval, the model is logit E(dijk/nijk) = αi + βj + γk (27). One might include also an interaction term (αγ)ik so as to allow age and calendar time to have essentially arbitrary effects on disease incidence. The numbers of deaths dijk may be formally considered to have independent binomial distributions conditionally on the nijk (4).
A different approach to this problem, termed by Mantel a "synthetic retrospective study," is to draw a small sample of disease-free "controls" from each of the risk sets R(tk) for comparison with the diseased cases as they arise. The very same theory and methods may be applied also to actual case/control studies in which cases are ascertained as they occur, for example through a population based disease register. The controls, instead of being obtained from a computer file, are samples from the population in which the case arose. They may be patients with different diagnoses in the same hospital, chosen from the same family or neighborhood, or simply sampled at random from the population.
While it is impossible to estimate the incidence rates λ0(t) without studying the full cohort, the PH model assumed for the prospective study implies a probability structure for the sampled cases and controls which may be used to estimate the parameter β describing the RR associated with the covariates (20). Specifically, suppose m = m(t) cases having covariate vectors z1, . . ., zm are diagnosed (or die) with the particular disease at age t. Suppose also that n disease-free controls with covariates zm+1, . . ., zm+n are drawn at random from the risk set. The analysis is performed conditionally on fixed values for m, n and the combined set of n + m observed covariate vectors. Using Cox's argument, a conditional likelihood may then be derived for the sampled data. This expression is formally identical to that given earlier for the PH model in survival analysis. However now the risk sets R(tk), instead of containing everyone still alive and disease-free at tk, are replaced by the much smaller sets of mk + nk subjects actually sampled for the retrospective analysis.
In practice, the sets of m + n cases and controls may be matched on other variables x besides age, for example on risk factors already known to be associated with the disease under investigation. The PH model may then be generalized to λ(t|z, x) = exp{β'z}λ0(t|x) (31), which allows the underlying incidence function λ0 to depend in an arbitrary way on x as well as t. Interactions between the matching variables and the risk factors continue to be modelled in z. Retrospective sampling carried out within risk sets having similar x and t values leads to the same conditional likelihood given above (20).
A special case of the conditional likelihood occurs when each case is individually matched to exactly R controls, a situation found often in practice. Suppose there are K such sets and denote by z0k the covariate vector for the case and by z1k, . . ., zRk the covariates for the matched controls. This approach was taken with an RERF data file consisting of records on 7078 males who were 50+ years old ATB. Thirteen deaths from leukemia occurred (Table 8). For each of these a search was made to determine the risk sets of potential controls who had the same age ATB (exact year), were residing in the same city (Hiroshima or Nagasaki), and who were alive at the end of the same year in which the case died. These ranged in size from 30 to 508 individuals.
In order to illustrate the application of the case/control methodology, a series of 1, 2, 5, 10, and 20 controls was drawn at random from each of the risk sets. Two covariates were computed from the estimated radiation dose (in rads) for each subject: z1 = ln(rads + 1) and z2 = √rads. The choice of ln(rads + 1) as a covariate in the PH model, which implies there is a linear increase in ln RR with ln(rads + 1), was based on two facts: the highly skewed distribution of radiation doses; and Figure 2 of Otake (19), which shows a nonlinear increase in ln RR with dose after about 200 rad. The alternate transformation √rads was used for comparison.
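For the individually matched 1:R design just described, the conditional likelihood takes the standard conditional-logistic form in which each matched set contributes exp(β'z0k)/Σ_{r=0}^{R} exp(β'zrk); the short sketch below evaluates that log likelihood for hypothetical dose covariates (it is not a re-analysis of the RERF data).

```python
import numpy as np

def matched_set_log_likelihood(beta, case_z, control_z):
    """Conditional log likelihood for 1:R matched sets (standard form).
    case_z    : (K, p) covariates of the K cases
    control_z : (K, R, p) covariates of the R matched controls per case"""
    beta = np.atleast_1d(beta)
    ll = 0.0
    for z0, z_controls in zip(case_z, control_z):
        num = np.exp(z0 @ beta)
        denom = num + np.exp(z_controls @ beta).sum()
        ll += np.log(num / denom)
    return ll

# Hypothetical example: 3 matched sets, 2 controls each, covariate z = ln(rads + 1)
case_z = np.log(np.array([[150.0], [30.0], [400.0]]) + 1.0)
control_z = np.log(np.array([[[5.0], [0.0]],
                             [[12.0], [2.0]],
                             [[0.0], [60.0]]]) + 1.0)
print(matched_set_log_likelihood(np.array([0.45]), case_z, control_z))
```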
Table 8. Summary data on records used for matched case-control analysis of RERF files.
Results of the matched analyses based on the different numbers of controls are shown in Table 9. Note the decrease in the estimated standard errors as additional controls are used; however the gain afforded by using twenty rather than ten controls is not great. The regression coefficients of 0.4-0.5 for ln(rads + 1) indicate that, roughly speaking, the risk of leukemia increases by about 5% for every 10% increase in radiation dose. The disparity between the relative risks fitted under the model z = ln(rads + 1) vs. those fitted by z = √rads is quite noticeable, especially when one recalls that most doses fall in the 0-200 range. In order to try to understand better why this might occur, unmatched logistic regression analyses were carried out on the data consisting of all cases and the sets of twenty controls. The estimated slopes and standard errors were similar to those obtained with the matched analysis (Table 10). However the intercepts, corresponding to the estimated log odds of leukemia in the sample at a dose of zero rads, differed markedly between the two models: α = −3.97 for z = ln(rads + 1) vs. α = −3.33 for z = √rads.
Differences in the estimated relative risk between the two models were thus explained largely by the differences in the absolute risks estimated for the baseline value of the covariate.
A potential hazard of the regression modelling of relative risks is its sensitivity to the choice of scale on which the covariate is measured. Goodness of fit of the model to the data is essential for proper interpretation, and should be explored thoroughly. When both linear and quadratic covariate terms were fitted with the models above, for example, the agreement between the estimated values for α improved substantially. Moreover the fact that the quadratic term was highly significant for the √rads model, and not at all significant for the ln(rads + 1) model, showed that the latter gave a much better fit to these data (Table 10).
Further Reading
Much of the above material is presented in greater detail in a review article (10) on the PH model and its applications to survival data. Some additional applications of this model to epidemiologic studies are given by Breslow et al. (21,22). Peto et al. (23) present a thorough discussion of its use in the design and analysis of clinical trials.
A computer program for calculating the PL estimate and all the test statistics presented in sections 2-5 above is available (24).
Several authors have pointed out that the W scores defined do not lead to the most efficient generalization of Wilcoxon's test to censored data. They all propose essentially the same statistic as an alternate generalization (7,25,26).
A comparison of the efficiencies of the test statistics using Monte Carlo techniques is made by Lee et al. (27). Efron (17) discusses the efficiency of the likelihood function used with the PH model from a more abstract viewpoint; see also Kalbfleisch (28).
Additional extensions of the PH regression model for use with grouped or heavily tied data are discussed by Cox (6), Kalbfleisch and Prentice (15), Thompson (18), and Prentice and Gloeckler (29).
Development and Validation of a Language Screening for Implementation in Pre-School Settings
Background To prevent or mitigate long-lasting learning problems and emotional, behavioral, and social-adaption difficulties associated with language disorders, age-appropriate German language competence at school entry level is essential. Therefore, universal screening of children in their penultimate year of pre-school has been established in Upper Austria. So far, the screenings administered by speech and language pathologists to identify risk of language disorder (LD) were not based on standardized materials. Objective To develop a screening instrument to identify increased risk of LD and to evaluate its validity and feasibility within the constraints of regular universal pre-school language screening. Design A two-component screening instrument including direct assessment of expressive and receptive grammar was used in a sample of 374 children with German as their dominant language attending a public pre-school in their penultimate year (age 4-5 ½ years) in the state of Upper Austria. Assessment by use of standardized German language tests including a variety of linguistic domains was considered reference standard for diagnosing LD. Feasibility was assessed by a self-developed questionnaire completed by the administrators of the screening. Results The combination of the expressive and receptive grammar scales demonstrated excellent accuracy (area under the curve score 0.928). A cut-off of 18 resulted in a failing rate of 21.8% and showed good sensitivity (84.2%) and specificity (85.3%). Acceptance by children and testers, time-economy and sustainability of the screening were mostly rated as high.
INTRODUCTION
The international CATALISE consortium (1) recently addressed the issue of terminology and definition of problems with language development by defining, in the CATALISE consensus, diagnostic criteria for the newly termed Developmental Language Disorder (DLD). The new term DLD refers to a language disorder (LD) that emerges during development and is not associated with known biomedical conditions. DLD is a heterogeneous condition, which can affect language production and/or comprehension and different linguistic domains (lexical, morpho-syntactic, pragmatic). The new definition of DLD does not preclude the co-occurrence with other neurodevelopmental conditions or the presence of environmental risk factors, nor does it require a mismatch between verbal and non-verbal cognition. In addition, the consensus statement agreed on the serious nature of language problems, with a significant impact on everyday social interactions or educational progress, and on the poor prognosis of LD. What the consensus statement did not define is the extent of language difficulties in mode (receptive and/or expressive) and linguistic dimension (phonology, vocabulary, morphology, syntax, pragmatics). Therefore, DLD remains a clinical diagnosis, where professionals need to be able to recognize language deficits associated with functional impairment and the potential of these conditions to become chronic with an increased risk of learning and mental health problems.
With language abilities at least 1.5 SD under those of peers in at least two of the five linguistic domains, Norbury et al. (2) found a prevalence of LD of any origin of about 10% (7.58% specific with unknown origin and 2.34% non-specific with medical diagnosis), which makes LD one of the most common developmental problems in childhood. Similarly, earlier studies that assume language abilities around 1.25 SD below the norm in two linguistic domains, expect a prevalence rate of 5-8% of specific LD in children speaking English (3,4), English or French (5) or German (6).
Children with LD are at high risk of difficulties in academic and vocational qualification (7,8), mental health problems and social adaptation difficulties (9)(10)(11). Early identification of LD may help children to access specialized educational (9), therapeutic (12) and parent-implemented (13) intervention to support them to improve their language skills by school entry and to reduce the risk of neuropsychological sequelae. As a consequence, a system for a universal language checkup has been established in the State of Upper Austria since the mid 90's administered by speech and language pathologists. In Upper Austria, a federal state with a population of 1.45 million inhabitants, all children (about 14.000/year at the time of data collection) are assessed in their penultimate year of preschool for speech and language development every year. Up to this point, speech and language pathologists are faced with the challenge of accurately identifying the children with the highest risk of persisting language difficulties and need of language intervention. The challenges concern the lack of a generally accepted definition of what constitutes a LD and the lack of a standardized and feasible procedure for language screening. Another challenge concerns the high variability of language development during the early years with a high proportion of children with initially poor language catching up before school entry (14)(15)(16) and others manifesting deterioration in the trajectory of language development over time. Whereas some studies have demonstrated relatively stable trajectories of language development from the age of 5-6 years (2, 17), more recent population cohort studies have shown that the degree of variability in child language pathways even after the age of 4 or 5 years might have been underestimated suggesting the necessity of continuous surveillance of language development and environmental risk factors (18,19).
In 2006, Nelson et al. concluded their review for the US Preventive Services Task Force by advising against universal language screenings because of the many methodological problems they had identified in language intervention and outcome studies (20). A later systematic review (21) provided an update, reporting sufficient accuracy of some screening tools for the identification of children with LD but highlighting a lack of studies demonstrating their feasibility in primary care settings. It also reported that some treatments for children 5 years and younger might be effective but criticized the lack of well-conducted studies. As a consequence, the US Preventive Services Task Force continued not to recommend universal language screenings for language delay (22). For the German-speaking community, the German Institute for Quality and Efficiency in Health Care (IQWIG; Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen) also criticized the lack of evidence for long-term outcomes of language therapy. Following international systematic reviews (21, 23), the implementation of universal language screenings in Germany was not recommended (24).
So far, no standardized language screening instrument validated for use in Austrian pre-schools has been available. In Germany, several federal states commissioned research institutes to generate standardized language measures for the identification of language delayed children [i.e., Sismik & Seldak in Bavaria from (25,26); HASE in Baden-Wuerttemberg from (27); KiSS in Hessia from (28) or Delfin 4 in North Rhine-Westfalia from (29) to name some]. Nevertheless, an analysis of the German Mercator-Institute for language promotion and German as second language ascertained insufficient quality and efficacy for all the screenings, mainly because of lack of sufficient validity and objectivity and the exclusion of multilingualism (30).
In Upper Austria, the request for a standardized procedure to be used within the regular universal check-ups in pre-schools led to the LOGiK-S (Logopädie im Kindergarten-Screening) project. The new measure assesses language skills in Standard Austrian German, the variety of Standard German spoken in Austria in more formal situations (e.g., in schools and in the media) and with the highest sociolinguistic prestige. In less formal situations most Austrians use dialectal variations of German (Bavarian and Alemannic). The minor differences between Austrian German and Standard German spoken in Germany relate particularly to vocabulary and idiomatic expressions and less to language structure.
Our aim was to develop an accurate screening tool for the identification of high risk of LD (of unknown origin or associated with other biomedical conditions) in Austrian children and to evaluate its feasibility in the pre-school community setting.
Participant Recruitment
In summer 2012 and summer 2013, the public pre-schools in the city of Linz and in the whole state of Upper Austria were invited to participate in the project LOGiK-S (logopedics in kindergarten screening) with the aim to develop a standardized instrument for language screening. In total, 31 pre-schools (14 of them well spread over different districts of Linz and 17 in the districts of Upper Austria) agreed to participate in the study. The recruitment of pre-schools in two consecutive years was due to limited human resources in the research team and to avoid overburdening the collaborating pre-schools. The managers of the pre-schools disseminated information about the project to all parents of children in their penultimate year of pre-school (age 4-5 ½ years; children attending their penultimate year of pre-school in the school year 2012/2013 are hereafter labeled Cohort A, and children attending their penultimate year of pre-school in the school year 2013/2014 are labeled Cohort B) and asked for written consent for their children's participation. Overall, 423 monolingual children with German as their only language (as reported by the pre-school teachers) were eligible to participate. 97.9% of the parents (total n = 414, n = 208 in Cohort A and n = 206 in Cohort B) gave their written permission for inclusion in the research study. Testing was conducted in the first half of the school year (October 2012 to April 2013 for Cohort A and September 2013 to March 2014 for Cohort B). We excluded children with incomplete data on the screening and reference tests (n = 13 in Cohort A and n = 16 in Cohort B), children with a time interval between screening and reference test of more than 60 days (n = 7 in Cohort B) and children outside the target age range (n = 1 in Cohort A and n = 3 in Cohort B). The remaining n = 374 children (n = 194 in Cohort A and n = 180 in Cohort B) were included in this study. Table 1 provides an overview of the sample characteristics. Half of the children were girls (50.0%). The mean age was 55.66 months (SD = 4.01), with Cohort B about 1 month older than Cohort A (t = 2.100, p < 0.05). Compared to the Upper Austrian parent population (29), the share of parents with a university degree was overrepresented in the sample [36.1% vs. 25%; χ²(3) = 28.725, p < 0.001], which can probably be explained in part by the exclusion of children with first languages other than German, whose parents are less likely to have a university degree [(29); population data on parental education are not available for German-speaking children]. Moreover, there were some differences in parental education between Cohort A and Cohort B (see Table 1), most likely due to the different catchment areas of the pre-schools. However, these differences were not significant [χ²(3) = 7.604, p > 0.05]. For the analyses of this paper, we used pooled data (i.e., we analyzed Cohort A and Cohort B together) to maximize statistical power. Data pooling would also increase external validity, as the pooled sample is likely to be more heterogeneous in terms of individual characteristics than the single cohorts.
The study project (cohorts A and B) was approved by the hospital's ethic commission "Ethikkommission Barmherzige Schwestern und Barmherzige Brüder".
Construction of the Screening Measures
At the age range relevant for the current study (4 ½ to 5 years) the primary markers of LD in German are deficits in morphosyntax, such as lacking or incorrect inflection of verbs (31), subject-verb-agreement (30) or use of function words (31). In addition, clinical experience shows that the valid assessment of grammatical skills is less time-consuming than the assessment of vocabulary. An expressive and receptive screening scale was developed because LD can affect the production and comprehension of language structures. In addition, assessments of language reception do not require the child's active production of language and therefore, higher acceptance of the receptive language assessment was anticipated. For both screening scales, grammatical structures that are usually acquired at pre-school age were selected. Based on the available literature on acquisition of German grammar (32)(33)(34)(35)(36)(37), morphosyntactic structures with different degrees of complexity were selected. Children in their penultimate year of pre-school were chosen as the target group by request of the public authorities, following the tradition of universal language screening before the final year of pre-school, when-if necessary-intervention can be implemented before school entry.
Expressive Grammar Screening
The expressive grammar (EG) scale includes sentence completion tasks eliciting spoken phrases from the child with the help of predetermined sentence patterns. The scale includes 17 items. The tester successively presents two pictures, separated by a dividing line. The grammatical pattern structure is introduced with reference to the first picture (e.g., "Look! This is Tobias. He drinks juice."). After that, the child completes the sentence presented along with the second picture eliciting the same grammatical target structure (e.g. "And this is Maria. She . . . "target structure: verb second position). Child utterances are scored as correct, when the child is able to produce the target grammatical structure. Errors beyond the targeted grammatical structure are negligible. To facilitate the scoring (0/1 points) of the expressive language items, a collection of correct and incorrect answers is provided. Notably, in cohort A, the screening scale comprised a total of 27 items. The final set of 17 items for measuring EG was selected based on the item statistics (difficulties, item-scale correlation) and the feedback of speech therapists who administered the screenings.
Receptive Grammar Screening
The receptive grammar (RG) scale includes 14 items, again ranked by anticipated increase in complexity, following German language acquisition research. Single sentences are read aloud by the administrator of the test and the child is asked to point to the corresponding picture from a selection of four with well-chosen semantic and grammatical distractors. The test items assess comprehension of different syntactic (e.g. "The boy slides and the girl swings"-coordination) or morphological structures (e.g. "He gives her the book."pronouns). Similar to the EG scale development, an initial number of 20 items was reduced to 14 items based on results of cohort A.
Reference Language Tests
Without an accurately defined gold standard for LD in the literature, LD was operationalized by significant deficits (≤ −1.25 SD) in at least two standardized reference tests. (2) The TROG-D [German version of the Test for Reception of Grammar; (40)] assesses the understanding of German grammar. Although the TROG-D provides norm values for German-speaking children, these norms are based on a substantially smaller number of children than contained in this study and do not include children speaking Austrian varieties of German. Therefore, we used the sample percentiles to identify the bottom 10% (−1.25 SD) of the TROG-D scores. Based on three age groups (48-50 months, 51-56 months, and 57-62 months), percentiles were estimated using a continuous norming approach as implemented in the Cnormj package (41) in jamovi 1.6 (42).
(3) The AWST-R [Revised Active Vocabulary Test for 3-to 5-year-old children, Aktiver Wortschatztest für 3-bis 5-Jährige, Revision; (43)] is a standardized picture-naming test for the age range from 3;0 to 5;5 years. The items are ordered by increasing difficulty. To reduce the length of the assessment, we only used the first of the two picture folders (35 items) for the assessment of expressive vocabulary. As the AWST-R lacks norm values for the reduced version of 35 items, we again estimated norm values based on the study data. We once more applied a continuous norming approach. Screening scores in the bottom 10% were considered atypical.
Based on our definition, children with atypical scores (≤−1.25 SD) in at least two of the reference tests were classified as LD. This applies to 38 children (10.2%).
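A small sketch of this classification rule (purely illustrative; the z-scores below are hypothetical, not study data):

```python
import numpy as np

def classify_ld(z_scores, cutoff=-1.25, min_atypical=2):
    """Flag LD when at least `min_atypical` reference-test z-scores are <= cutoff."""
    z = np.asarray(z_scores, dtype=float)          # one row per child, one column per test
    atypical = (z <= cutoff).sum(axis=1)
    return atypical >= min_atypical

# Hypothetical z-scores on three reference tests for four children
z = [[-1.5, -1.3, -0.2],   # LD: two tests at or below -1.25 SD
     [-1.4,  0.1,  0.3],   # not LD: only one atypical score
     [ 0.5,  0.2, -0.1],   # not LD
     [-2.0, -1.6, -1.3]]   # LD
print(classify_ld(z))      # [ True False False  True]
```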
Feasibility
A short questionnaire (7 items) was developed for screeners to assess time economy, acceptance of the screening materials by children and test administrators, practicability of LOGiK-S within the constraints of the universal screening procedure in the pre-school setting, ease of administration and estimated sensitivity. Finally, testers were asked whether they would recommend the screening to others. All items were coded on three-point Likert scales, except the last one (yes-no answer). Due to the high similarity of the materials and procedures for children with German as their dominant language and children with a first language other than German, no separate versions of the feasibility questionnaire were completed by the screeners. Only for screening time was information collected relating exclusively to the LOGiK-S version for children speaking dominantly German.
Procedures
The screening procedures for both cohorts (A and B) were carried out by the speech and language pathologists who usually conduct the annual universal language screening for children in their penultimate year in pre-school. The assessments were performed with each child individually in a separate room of their pre-school. The RG scale was introduced by a practice item to ensure the child's comprehension of the task, and it was administered first because it is usually perceived as less demanding or threatening, as no language production by the child is required. Within a maximum of 90 days, language development of the children was tested by use of standardized reference tests. The tests were administered in the pre-schools by experienced language experts from the Institute of Neurology of Senses and Language, who were blinded to the screening results.
Statistical Analyses
First, we report descriptive statistics for the subscales. Second, we report reliability estimates (Kuder-Richardson KR-20) for the screening scales. Third, to evaluate construct validity of the screening scales, we applied confirmatory factor analysis (CFA) for binary items using a weighted least squares estimation (WLSMV) in Mplus 8 (44). Following the guidelines proposed by (41,45), a good model fit is indicated by χ²/df ≤ 2, CFI ≥ 0.97, RMSEA ≤ 0.05; an acceptable fit is indicated by χ²/df ≤ 3, CFI ≥ 0.95, RMSEA ≤ 0.08. Fourth, to evaluate criterion validity, receiver operating characteristic (ROC) analyses were used to evaluate the diagnostic accuracy of the subscales. Following Swets (46), AUCs ≥ 0.9 are regarded as excellent, AUCs ≥ 0.8 and < 0.9 as good, AUCs ≥ 0.7 and < 0.8 as fair, and tests with AUCs < 0.7 as poor. To compare AUCs of the subtests, we used a bootstrapped test for paired ROC curves as implemented in the pROC package (47) in R. Fifth, we applied logistic regression using jamovi 1.6 (42) to investigate whether both subscales independently contribute to the prediction of LD. Sixth, to evaluate the generalizability of the screening results we compared ROC curves between subsamples (Cohort A vs. Cohort B, boys vs. girls, age groups). As noted by Youngstrom (48), significant differences between subsamples would indicate variations in the diagnostic accuracy and thus limit the generalizability of the screening results. A bootstrapped test for unpaired ROC curves was used to compare the AUCs between subgroups. Additionally, the Venkatraman permutation test (49) was used, which compares actual ROC curves rather than AUCs. If two ROC curves do not differ significantly, each cutoff value would result in the same sensitivity and specificity for the subsamples and therefore a single cutoff would be appropriate for both subsamples. Finally, we used the R OptimalCutpoints package (50) to determine appropriate cutoff scores and to evaluate them with standard diagnostic accuracy statistics (sensitivity, specificity and related indices).
RESULTS
Figure 1 shows the distribution of the RG and EG screening subscales. As expected for an LD screening, items are rather easy and thus just a few children score in the bottom range of the screening scales. Consequently, the empirical means (RG: M = 10.5, SD = 2.01; EG: M = 11.2, SD = 3.57) are higher than the midpoints of the scales (RG = 6.5, EG = 8.5).
Reliability
The internal consistency (KR-20) for the RG scale was rather low at .60. The internal consistency of the EG scale was of moderate size (KR-20 = .74).
Construct Validity
We performed separate CFAs for the screening subscales. Overall, the CFAs for RG and EG yielded an acceptable fit; for EG, however, the CFI was quite low but still near the cutoff of 0.90, which is also sometimes considered acceptable [e.g., (53)]. Next, we compared a two-factor model (EG and RG) with a one-factor model (i.e., all EG and RG items load on a single factor). The two-factor model yielded an acceptable to good fit (χ²(404) = 528.532, p < 0.001, RMSEA = 0.029, CFI = 0.921). The fit for the one-factor model was somewhat worse (χ²(405) = 568.600, p < 0.001, RMSEA = 0.033, CFI = 0.896). Notably, a χ²-difference test indicated that the two-factor model fits the data significantly better than the one-factor model [Δχ²(1) = 18.406, p < 0.001]. Overall, these results indicate that EG and RG are distinct but highly correlated constructs (latent correlation = 0.740, p < 0.001).
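The one- versus two-factor comparison can be reproduced, approximately, outside Mplus; the sketch below uses the R package lavaan with WLSMV for ordered (binary) items. The item names, and the counts of 13 RG and 17 EG items (inferred from the reported scale midpoints), are assumptions for illustration only.

```r
# Hedged lavaan sketch of the CFA model comparison (original analysis: Mplus 8, WLSMV)
library(lavaan)

rg_items <- paste0("rg", 1:13)   # hypothetical binary RG item names
eg_items <- paste0("eg", 1:17)   # hypothetical binary EG item names

two_factor <- paste(
  paste("RG =~", paste(rg_items, collapse = " + ")),
  paste("EG =~", paste(eg_items, collapse = " + ")),
  sep = "\n")
one_factor <- paste("G =~", paste(c(rg_items, eg_items), collapse = " + "))

fit2 <- cfa(two_factor, data = dat, ordered = c(rg_items, eg_items), estimator = "WLSMV")
fit1 <- cfa(one_factor, data = dat, ordered = c(rg_items, eg_items), estimator = "WLSMV")

fitMeasures(fit2, c("chisq.scaled", "df.scaled", "cfi.scaled", "rmsea.scaled"))
anova(fit1, fit2)   # scaled chi-square difference test for the nested models
```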
Logistic Regression
A logistic regression showed that both subscales independently contribute to the prediction of LD (EG: b = −0.430, p < 0.001, OR = 0.650; RG: b = −0.412, p < 0.001, OR = 0.662). McFadden's R² was .433. Notably, as the coefficients (bs and odds ratios) for EG and RG were nearly equal, an increase of 1 point in either subscale is associated with a similar decrease in the risk for LD. Thus, a simple sum of RG and EG is an appropriate and easy-to-calculate (and thus feasible) total screening score. The AUC for the total screening score was excellent (AUC = 0.928, DeLong 95%-CI = [0.888, 0.976]).
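A minimal sketch of this regression step is given below; the original analysis was run in Jamovi (which wraps R), and the variable names are illustrative assumptions.

```r
# Hedged sketch of the logistic regression and McFadden's pseudo-R^2
fit <- glm(LD ~ EG + RG, data = dat, family = binomial)
exp(coef(fit))                  # odds ratios per 1-point increase in each subscale

# McFadden's pseudo-R^2 relative to an intercept-only model
null <- glm(LD ~ 1, data = dat, family = binomial)
1 - as.numeric(logLik(fit)) / as.numeric(logLik(null))

# The near-equal coefficients motivate the simple sum score used as the total score
dat$total <- dat$RG + dat$EG
```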
Diagnostic Accuracy Differences Between Subgroups
The results of the comparisons of unpaired ROC curves (based on the total screening score) between subsamples are shown in Table 2. AUCs were generally excellent in all subsamples; only in the group of children younger than 56 months did the AUC fall just short of the excellent range.
Cut-Off Estimation
Finally, to determine an optimal cut-off, we used the "SpEqualSe" criterion (i.e., specificity equals sensitivity) in the OptimalCutpoints R package (50). This yielded a cut-off of 18 on the total screening score, with a sensitivity of 0.842 and a specificity of 0.853, corresponding to a screening-fail rate of 21.8%.
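A hedged sketch of this cut-off estimation with OptimalCutpoints is shown below; the data frame and column names are illustrative placeholders.

```r
# Hedged sketch of cut-off estimation with the "SpEqualSe" criterion
library(OptimalCutpoints)

cp <- optimal.cutpoints(X = "total",        # total screening score (RG + EG)
                        status = "LD",      # 1 = language disorder, 0 = typical
                        tag.healthy = 0,
                        methods = "SpEqualSe",
                        data = dat)
summary(cp)   # reports the cut-off plus sensitivity, specificity, PPV and NPV
```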
Feasibility
The 7-item questionnaire on the feasibility of the LOGiK-S language screening, covering both versions for children with German and non-German as their dominant language as well as a phonology scale, was returned by 39 (93%) of a total of 42 speech-language therapists. The average screening time was 9.49 min (SD 3.49). Screening materials were rated as very appealing by 44% and as appealing by 54%. Similarly, practicability within the constraints of universal language screening in the pre-school setting was rated as very good by 49% and as good by 46% of the respondents. Sensitivity (i.e., correct identification of children with LD) of LOGiK-S was assessed as very good by 15% and good by 80%. Thirty-nine percent described no personal effort in administering LOGiK-S, and another 90% stated low effort. As compared with the former screening without standardized measures, 74% did not feel stressed at all by the new procedure, whereas the rest reported minimal strain. Ninety-two percent would recommend the new measure to others.
DISCUSSION
This study investigated the performance (accuracy and feasibility) of the new screening measure LOGiK-S in a sample of two cohorts of 374 children in total, having German as their only or dominant language and attending the penultimate year of a public pre-school in Upper Austria. To avoid bias, the whole study sample that had been screened underwent testing with standardized language tests administered by speech-language experts blinded to the screening results. Screening results of the first cohort and the practical experiences of the screeners were used to systematically reduce the number of screening items. Finally, all available data for the final selection of screening items were analyzed.
The EG scale of LOGiK-S demonstrated excellent accuracy (AUC = 0.918). The AUC of the RG scale was significantly smaller but still good (0.826). As indicated by logistic regression, both scales independently predict LD. A total screening score (combining EG and RG) showed excellent accuracy (AUC = 0.928). Using a cut-off of 18, the rate of screening fails was 21.8%. Sensitivity (0.842) and specificity (0.853) were found to be good. As predictive values depend on the prevalence of the disorder under investigation (48), the rather low PPV (0.395) is not surprising given the prevalence of only 10.2% of LD in our sample. Diagnostic likelihood ratios of moderate size were found for positive and negative screening results (DLR+ and DLR-). Even though a higher PPV would be desirable, because a low PPV leads to over-referral of children, the dimensional nature of LDs must be taken into account. Children with false-positive screening scores have been shown to perform significantly lower on subsequent standardized measures than children with true-negative results (54), and are thus at higher risk for language, psycho-social and cognitive delay. Therefore, follow-up diagnostic testing should be regarded as an opportunity to identify children with unmet needs for interventions (educational language and social support).
Tests comparing unpaired ROC curves demonstrated no significant difference in screening accuracy (AUCs and actual ROC curves) between the two cohorts, despite some diversity in the study characteristics. Similarly, AUCs and ROC curves did not differ significantly between boys and girls or between younger and older children. Therefore, age-related or sex-related norms are not required. Overall, the independence of screening accuracy from group membership (cohort, sex, age) can be regarded as a strength of the screening instrument, as it supports its generalizability; implementation with a variety of children and pre-schools can therefore be recommended.
Feasibility of the new screening procedure was mostly rated as good or very good. Average screening time was below 10 min, materials were reported to be appealing to the children. Practicability within the constraints of the universal pre-school screenings was rated as very good and good. No or minimal personal effort involved in the administration of the new standardized instrument was described, and more than 90% of the screeners, who had to adapt their screening procedure to the new instrument, would recommend LOGiK-S to others.
Due to the lack of an accurately defined gold standard for LD in the literature, we operationalized LD as language skills of at least 1.25 standard deviations below the norm in at least two of three linguistic dimensions, following common practice in the field. Nevertheless, uncertainties in the definition of the reference criterion must be considered a limitation. Moreover, a slight overrepresentation of children of parents with a university degree cannot be ruled out, since population data for the specific target group (i.e., parents of children growing up monolingually in Upper Austria) are not available. Finally, the socioeconomic description of the sample is limited to parental education, because it was not possible to collect data on family income.
CONCLUSION
LOGiK-S is the first validated language screening measure that identifies increased risk of LD in children with Austrian German as their first or dominant language in their penultimate year of pre-school. The accuracy of LOGiK-S was found to be high. Implementation with a variety of screeners and in a variety of pre-schools confirms the high feasibility of the new measure. Consequently, the implementation of LOGiK-S for universal language screening can be recommended in Austria.
DATA AVAILABILITY STATEMENT
The dataset presented in this article is not readily available because parents have not given their consent to data sharing. Requests to access the dataset should be directed to daniel.holzinger@bblinz.at.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethikkommission Barmherzige Schwestern und Barmherzige Brüder. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
DH: conceptualization, funding acquisition, and supervision. DH and CW: methodology. BD: validation, investigation, and data curation. CW: formal analysis. DH, BD, and CW: writing-review and editing. DH and BD: project administration and writing-original draft preparation. All authors have read and agreed to the published version of the manuscript.
The MHV Lagrangian for a spontaneously broken gauge theory
Starting from the standard Lagrangian for a SU(2) × U(1) gauge theory plus a Higgs field we derive the corresponding "maximal helicity violating" (MHV) Lagrangian. From this MHV Lagrangian one deduces simple diagrammatic rules for the calculation of multi-particle scattering amplitudes. We arrive at the MHV Lagrangian by a canonical change of the field variables in the light-cone gauge. We comment on the modifications which occur in a spontaneously broken gauge theory as compared to a pure (unbroken) Yang-Mills theory.
Introduction
The efficient calculation of scattering amplitudes with many external legs is a challenging task and needed for phenomenological studies at TeV colliders. Of particular interest are processes which involve electro-weak gauge bosons. These processes often lead to the same signatures in the detector as signals of new physics.
In the past years, various new methods for efficient calculations in a gauge theory have been introduced, motivated by the relation of gluon amplitudes to twistor string theory [1]. In particular these methods include the diagrammatic rules of Cachazo, Svrček and Witten (CSW) [2], where tree level QCD amplitudes are constructed from vertices that are off-shell continuations of maximal helicity violating (MHV) amplitudes [3], and the recursion relations of Britto, Cachazo, Feng and Witten (BCFW) [4,5] that construct scattering amplitudes from on-shell amplitudes with external momenta shifted into the complex plane. These methods have found numerous applications in tree level and one-loop calculations in QCD. The diagrammatic methods have also been applied to include additional non-QCD-type particles, like vector bosons or the Higgs boson [58][59][60][61].
The BCFW recursion relations have first been proven with the help of Cauchy's theorem and the vanishing of the amplitudes at infinity [5,13,25]. From the BCFW recursion relations one can then deduce the MHV rules [12]. Given the simplicity of the MHV rules it is natural to ask if there is a direct way to transform the conventional Lagrangian of Yang-Mills theory into an effective Lagrangian such that the MHV rules can be read off directly from this effective Lagrangian. This is indeed possible and has been shown for pure Yang-Mills theory with two different approaches. The first approach makes use of a canonical transformation in the field variables [62][63][64][65][66][67]. In the second approach one starts from an action in twistor space [68][69][70][71][72][73][74]. The action in twistor space has an extended gauge symmetry. The conventional Lagrangian and the MHV Lagrangian are then obtained from the action in twistor space for different gauge choices.
The interest in the major part of the literature has been focused up to now on an unbroken gauge theory. Equipped with the knowledge and experience from the case of an unbroken gauge theory it is then natural to ask if these methods can be carried over to the case of a spontaneously broken gauge theory. This is the question which we want to address in this paper. We start from the conventional Lagrangian for a SU (2) ×U (1) gauge theory plus a Higgs field and derive the corresponding MHV Lagrangian. From this MHV Lagrangian one obtains simple diagrammatic rules for the calculation of scattering amplitudes involving several electro-weak gauge bosons and/or scalar fields. In this first paper on the MHV formulation of a spontaneously broken gauge theory we try to focus on the essentials. Therefore we do not include fermions nor do we include QCD. With the methods presented in this paper the inclusion of these two sectors is in principle straightforward, but leads to longer formulae.
The motivation for deriving the MHV Lagrangian for a spontaneously broken gauge theory is two-fold: First of all the diagrammatic rules are helpful in phenomenological applications. Scattering amplitudes with many external particles involving electro-weak gauge bosons are notoriously cumbersome to calculate with traditional methods based on Feynman diagrams. The MHV rules offer here an alternative. Secondly, we are also motivated from a more formal perspective: Reformulating the part of the Lagrangian responsible for the electro-weak symmetry breaking into a different -and in certain aspects simpler -form will shed some light on the origin of the symmetry breaking itself.
In order to arrive at the MHV Lagrangian for a spontaneously broken gauge theory we follow the approach based on a canonical transformation. On a technical level we profited from the papers by Boels and Schwinn [75][76][77], in which they derived the MHV Lagrangian for a pure U(N) Yang-Mills theory plus a massive scalar (without scalar self-interactions). In these papers the authors treat the mass term for the scalar particle as a perturbation. This perturbation does not enter the equation which determines the canonical transformation. We will proceed similarly and treat the Higgs potential as a perturbation. Our results are also relevant in the case of an unbroken SU(N) gauge theory or an unbroken SU(N) × U(1) gauge theory with unequal couplings, both coupled to a scalar field. In these cases the canonical transformation induces an additional tower of vertices involving four scalar fields. In the latter case each vertex of this tower is proportional to the difference of the squares of the couplings (and therefore vanishes for a U(N) theory, but not for SU(N) or SU(N) × U(1)). The inclusion of a λΦ⁴ term in the Higgs potential leads straightforwardly to a further tower of vertices with four scalar fields and proportional to λ.
In the case of a spontaneously broken gauge theory there are a few additional complications related to the non-vanishing of the scalar field at infinity and to inverse differential operators. We will discuss these in detail in the main part of the paper. As a final result we find that the MHV formulation of a spontaneously broken gauge theory is the one of an unbroken gauge theory coupled to a scalar field plus additional towers of vertices all proportional to the vacuum expectation value v of the scalar field. This paper is organised as follows: In the next section we start with a short summary of the notation which we use throughout the paper. Section 3 is the main part of this article and gives the derivation of the MHV formulation for a spontaneously broken gauge theory. This section is sub-divided into five steps. Section 4 contains a summary and the conclusions. We have included two appendices: Appendix A is devoted to inverse differential operators. In Appendix B we have collected useful information on how the system of integro-differential equations arising from the canonical transformation is solved.
Notation
The derivation of the MHV Lagrangian for the electro-weak theory is simplified by an appropriate notation. In order to help the reader to follow our arguments in the main part of this article we give in this section a summary on the notation used throughout this article.
The electro-weak part of the Standard Model is described by a SU (2) ×U (1) gauge theory. We will denote the gauge fields in the unbroken sector by W j µ (for the SU (2)-gauge fields) and by B µ (for the U (1) field). The conventional Lagrange density for the electro-weak sector is given by where Φ denotes the Higgs doublet. The gauge indices of the Higgs doublet are not shown explicitly. The field strengths are as usual The covariant derivative acting on the Higgs field is given by g and g ′ are the couplings of SU (2) and U (1), respectively. The Higgs doublet has hyper-charge Y = 1. The SU (2)-matrices are given by I j = 1 2 σ j , where σ j are the Pauli matrices. These matrices satisfy Tr It is convenient to introduce a fourth matrix I 0 = 1 2 1 and to combine B µ and W j µ into a fourdimensional vector In this paper we use the convention that gauge indices from the beginning of the alphabet are in the range [0, 1, 2, 3] and refer to a four-dimensional vector like in eq. (5), while gauge indices from the middle of the alphabet are in the range [1,2,3] and refer only to the SU (2) part. We denote by A µ , W ± µ and Z µ the eigenstates of the mass matrix. Again we combine them into a four-dimensional vector The mass eigenstates X a µ are linear combinations of the states V a µ : The matrix R ab is given by It is also convenient to introduce the Lie-algebra valued fields together with the corresponding field strengths We will also write With this notation we can write the covariant derivative simply as We will work in light-cone gauge. We define the light-cone coordinates by With this definition the Minkowski scalar product is given by The contra-variant version of the light-cone coordinates is defined analogously Then For the vector We define the spinors as This definition applies to all four-vectors p µ . If the four-vector p µ is light-like, the spinors are the eigenstates of the Dirac equation with eigenvalue zero. If the four-vector p µ is not light-like, eq. (18) defines the off-shell continuation of the spinors. Spinor products are denoted as Multiple Fourier integrals will occur frequently and for these integrals we introduce the shorthand notation (1,...,n) dP(x) = d 4 p 1 (2π) 4 ...
Derivation of the MHV Lagrangian
In this section we derive the MHV Lagrangian for a spontaneously broken gauge theory, which is the main result of this paper. We organise the derivation in five steps. In the first step we simply choose the light-cone gauge for the SU (2) and the U (1) gauge fields. In step two we integrate out one component for each gauge field and obtain a Lagrange density which depends only on the two transverse degrees of freedom for each gauge field. This Lagrangian is not yet in the MHV form, as it contains both a MHV three-vertex and an anti-MHV three-vertex. Integrating out one component for each gauge field introduces additional terms which are quartic in the scalar field.
In step three we analyse the vacuum state of the scalar field and expand the scalar field around a minimum of the theory. In step four we eliminate the anti-MHV three vertex with the help of a canonical transformation. Finally, in step five we assemble all pieces and give the Lagrangian of a spontaneously broken gauge theory in the MHV form.
Step 1: Light-cone gauge
Our starting point is the Lagrangian of the electro-weak sector of the Standard Model as given in eq. (1). We can re-write this Lagrangian as We choose the light-cone gauge In this gauge we can re-order the Lagrangian as follows: such that L 2 contains all terms bilinear in the gauge fields. Terms with three or four gauge fields are collected in L 3 and L 4 , respectively. L Φ contains the terms bilinear in the scalars as well as couplings of the scalars to the gauge fields. The Higgs potential is denoted by L V . The explicit expressions read
Step 2: Integrating out W + and B +
We observe that the fields W + and B + occur only quadratically or linearly in eq. (24). We can therefore integrate these fields out. To see how this is done we first consider the case of integrating out a single field ψ. As an example we consider the path integral We assume that P is a differential operator of even degree and independent of the other fields. In the case at hand we will have that P is proportional to ∂ 2 − . K(φ) on the other hand may depend on the other fields, which are collectively denoted by φ. We would now like to proceed as in the case of an unbroken gauge theory and we would like to make the substitution Here the inverse differential operator P −1 appears. In the case of a spontaneously broken gauge theory we have to be careful with this inverse differential operator. Let us first consider the case of an unbroken theory. In the appendix A we define the space of functions F −m,0 , where m is a positive integer. A field belongs to F −m,0 if the field and its first m inverse derivatives vanish at infinity. The function spaces F −m,0 have the property that for sufficiently large m we may use partial integration without boundary terms also for the inverse differential operators, see eq. (95) and eq. (96). The space F = F −m,0 with a suitable m is appropriate for an unbroken gauge theory. Within perturbation theory we may assume that all fields lie within this space F , and that is what is done in the derivation of the MHV Lagrangian for an unbroken gauge theory. Now let us turn to the case of a broken gauge theory. We first note that by definition F does not include any function, which does not vanish at infinity. In particular all functions which go to a constant non-zero value at infinity are not included. This is clearly insufficient for a broken gauge theory. There the Higgs doublet acquires a vacuum expectation value and goes to a constant at infinity. Let us therefore denote by F 1 the space of functions, which consists of F and the constant functions. If we now consider the differential operator ∂ − , we first note that the kernel of ∂ − are just the functions which are constant in x − . Therefore we may invert ∂ − on F , but the application of ∂ −1 − on a field of F 1 is ambiguous. We may write any field φ ∈ F 1 as the sum of a constant field φ 0 and a field φ ′ ∈ F : We then set and therefore As a consequence we have for all fields φ ′ ∈ F the expected relation but for fields φ ∈ F 1 we have With these words of warning we now proceed with the substitution given in eq. (26). We anticipate that K may go to a constant K 0 at infinity and we write where K ′ now falls off at infinity. We then obtain for the expression in eq. (25) We can neglect the irrelevant factor D ψ exp d 4 x Tr and obtain The result in eq. (35) is identical to the unbroken case, only in eq. (34) we have picked up an extra (irrelevant) term Tr ψK 0 . We remark that eq. (35) can equally be written as i.e. term proportional to K 0 do not contribute. This will be important later, when we expand around the minimum of the Higgs potential.
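Schematically, the integration over a field that appears at most quadratically proceeds by completing the square. Up to field-independent normalization factors, and with the precise sign and factor conventions of eq. (25) left open, the pattern is the following.

```latex
\int \mathcal{D}\psi \,
\exp\!\left( i \int d^{4}x \; \mathrm{Tr}\left[ \psi \, P \, \psi + 2\, \psi \, K(\phi) \right] \right)
\;\propto\;
\exp\!\left( - i \int d^{4}x \; \mathrm{Tr}\left[ K(\phi) \, P^{-1} K(\phi) \right] \right)
```

This is obtained by the shift ψ → ψ − P⁻¹K(φ), assuming P is self-adjoint so that the square can be completed; it is precisely this substitution, with its inverse operator P⁻¹, that requires the care discussed above for a spontaneously broken theory.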
Let us now return to W + and B + . For W + we have P = 2 After integrating out W + and B + we can write the Lagrange density of the electro-weak sector as with and L V is given as in eq. (24). The Lagrange density in eq. (39) and eq. (40) contains now only the transverse degrees of freedom for the fields W and B. In associating terms with a scalar field Φ to the individual pieces in eq. (40) we have counted a field Φ as "+" and a field Φ † as "-".
Step 3: Expansion around the minimum
We are interested in a spontaneously broken gauge theory. Up to now we parametrised the fields as in an unbroken gauge theory. We now expand the fields around a minimum of the theory. To find the minimum we look at the self-interactions of the scalar field. If we ignore the gauge fields the Lagrangian reduces to The first line is just the standard Lagrange density for the Higgs field. The terms in the second and third line originate from L ++−− in eq. (40). These terms are quartic in the scalar fields and involve derivatives. The attentative reader might now fear that these additional terms modify the position of the minimum, maybe even in a momentum dependent way. This is not the case as we will show now. To find the minimum we write the scalar field Φ (x) as the sum of a constant field Φ 0 and a new field Φ ′ (x): Inserting this splitting into the Lagrangian of eq. (41) we determine the minimum (and therefore Φ 0 ) from the requirement that the terms linear in Φ ′ (x) vanish. Let us first discuss the additional terms in eq. (41). We examine the combination This combination has a term linear in Φ ′ (x) and a term which is quadratic in Φ ′ (x). In the second and third line of eq. (41) this combination occurs squared. Therefore these terms are at least quadratic in Φ ′ (x) and do not contribute to the position of the minimum. Therefore the minimum is given as usual by the solution of the equation We set The components of the new field Φ ′ (x) are written as Let us examine closer the terms L ++− and L +−− in eq. (40). These terms are invariant under the shift of the scalar field given in eq. (42) as can be seen as follows: If we look at L ++− we find that the combination transforms invariantly under a shift of the scalar field. A similar relation holds if we replace ∂ ⊥ * by ∂ ⊥ , which in turn can be applied to L +−− .
After parametrising the fields around the minimum we can write down the Lagrange density in terms of the new scalar field Φ ′ (x). In order to economise on the notational side we relabel the new scalar field Φ ′ (x) by Φ(x). Ignoring a constant term the Lagrange density is then given by The new terms L ′′ ++−− and L ′′ V are proportional to v 2 and given by L ′′
Step 4: Canonical transformation
In the fourth step we eliminate the non-MHV vertices contained in L ++− by a canonical change of the field variables. This step is similar to what has been done in the case of an unbroken gauge theory. We can rely on the results obtained for a pure gauge theory [63][64][65][66][67] and for a gauge theory coupled to scalar fields [75][76][77]. The only modification which we have to make is to include an additional U (1) field.
To motivate the canonical transformation we treat the variable x + as a time variable and collect the remaining three variables in a vector x = (x − , x ⊥ , x ⊥ * ). In order to simplify the notation we will suppress the dependence of the fields on x + and write φ( x) instead of φ(x + , x). We will denote the new fields after the canonical transformation with a tilde, e.g.
Now let us look again at eq. (39) and eq. (40). The "momenta" conjugate to W j ⊥ , B ⊥ and Φ are δL EW We look for a canonical transformation, where the generating function of the transformation depends on the new "coordinates"W ⊥ ,B ⊥ ,Φ and the old "momenta" The new "momenta" are then given by The transformation should eliminate the unwanted L ++− term, therefore we require The fact that the transformation is canonical implies We then plug the expressions in eq. (54) into eq. (55) and use eq. (56). It is convenient to introduce the following two differential operators From the coefficients of ∂ − W ⊥ * , ∂ − B ⊥ * and ∂ − Φ † we find three integro-differential equations To solve these equations it is simplest to combine the U (1)-field B µ and the SU (2) where the index a takes values from 0 to 3. If the two couplings g and g ′ would be equal, we would have a perfect U (2)-gauge theory coupled to a scalar field. The fact that the two couplings are not equal leads only to minor complication which we can deal with by adjusting in the appropriate places the coupling factors. To this aim we define by n 0 (a 1 , ..., a n ) the number of times a zero occurs in the list a 1 , ..., a n . We observe that the gauge fields occur in eq. (40) in L +−− and L ++−− either in a combination like 60) or in commutators to which only the SU (2)-gauge field give a non-vanishing contribution. An example is given by the term The inclusion of the factor which adjusts the couplings has no effect here: In all cases where n 0 (a, b, c) is non-zero the accompanying trace is zero. We can summarise these observations in the rule that the U (2)-gauge field V a µ is always accompanied by a factor (g ′ /g) n 0 (a) . In appendix B we have collected detailed information how the equations of the canonical transformation are solved. The solution to the integro-differential equations (58) is given by 2 Tr(I a I a 1 ...I a n ) g ′ g n 0 (a 1 ,...,a n )−n 0 (a) (I a 1 ...I a n−1 ) i 1 i 2 g ′ g n 0 (a 1 ,...,a n−1 ) (I a 1 ...I a n−1 ) i 1 i 2 g ′ g n 0 (a 2 ,...,a n ) 2 Tr(I a I a 1 ...I a n ) g ′ g n 0 (a 1 ,...,a n )−n 0 (a) (I a r+2 ...I a n I a I a 1 ...I a r−1 ) i 1 i 2 g ′ g n 0 (a 1 ,...,a r−1 ,a r+2 ,...,a n )−n 0 (a) The coefficient functions are given by , We remark that the field V a ⊥ ( x) is expressed in terms of the fieldsṼ a ⊥ ( p) alone, while the field V a ⊥ * ( x) involves not onlyṼ a ⊥ * ( p) andṼ a ⊥ ( p), but also the scalar fieldsΦ † i 1 ( p) andΦ i 2 ( p). In all cases the new fields agree with the old fields to leading order in g and g ′ :
Step 5: Assembling the pieces
We are now in a position to put all the pieces together. Inserting the solutions (61) of the canonical transformation into the Lagrange density (48) one finds that the Lagrange density can be written in the following form: The first term L kin is rather simple and contains the kinetic terms: All other terms contain each an ascending tower of interaction vertices. Each interaction vertex is most conveniently expressed with the help of the Fourier transforms. The series of interaction vertices contained in L (n) involves only gauge fields. One finds The vertex function α j (p 1 , ..., p n ) is given .. p n−1 p n p n p 1 (67) and corresponds exactly to the MHV formula. Each vertex contains two fields V ⊥ * with indices 1 and j and an arbitrary number of fields V ⊥ . Since the trace is cyclic, we have Tr (I a 1 ...I a j−1 I a j ...I a n ) = Tr (I a j ...I a n I a 1 ...I a j−1 ) .
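For orientation, the vertex function has the familiar Parke-Taylor structure; schematically, suppressing coupling constants and the overall normalization (which are not reproduced here),

```latex
\alpha_{j}(p_1,\dots,p_n) \;\propto\;
\frac{\langle p_1 \, p_j \rangle^{4}}
     {\langle p_1 \, p_2 \rangle \langle p_2 \, p_3 \rangle \cdots
      \langle p_{n-1} \, p_n \rangle \langle p_n \, p_1 \rangle }
```

with the two "minus" legs (the fields V⊥*) carrying the labels 1 and j and all remaining legs of "plus" type, exactly as described in the text.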
The factor 1/2 takes into account that we are summing twice over identical traces. The third term L (n) ΦΦ contains two scalar fields and an arbitrary number of gauge fields. This term reads The coefficient function β j (p 1 , ..., p n ) is given by Each vertex contains exactly one fieldΦ † and one fieldṼ ⊥ * . These fields are counted as "-". The remaining fields of the vertex are one fieldΦ and an arbitrary number of fields V ⊥ , which are all counted as "+". The vertices correspond therefore to MHV vertices.
The term L (n) ΦΦΦΦ contains four scalar fields plus an arbitrary number of gauge fields. It is given by (1,..,n) dP(x) γ j (p 1 , ..., p n ) + δ j (p 1 , ..., p n ) + λ j (p 1 , ..., p n ) The vertices are again MHV vertices, the twoΦ † -fields are counted as "-", all other fields are of the type "+". We have written explicitly a factor 1/2 in front, since we sum twice over identical strings of generators of the gauge group. We have here three vertex functions γ j (p 1 , ..., p n ), δ j (p 1 , ..., p n ) and λ j (p 1 , ..., p n ). The explicit form of these functions is given by .. p n−1 p n p n p 1 1 + p 1 p j p j−1 p n p 1 p j−1 p j p n , λ j (p 1 , ..., p n ) = − 1 2 λ i √ 2 n−4 p 1 p j−1 p 1 p n p j p j−1 p j p n p 1 p 2 p 2 p 3 ... p n−1 p n p n p 1 .
Here we used the short-hand notation γ j (p 1 , ..., p n ) arises from the minimal coupling of the scalar field to a U (2) gauge theory. The vertex function δ j (p 1 , ..., p n ) is proportional to (g ′2 − g 2 ) and arises from the last term of L ++−− in eq. (40). Finally, λ j (p 1 , ..., p n ) results from the (Φ † Φ) 2 -term in the Higgs potential.
The piece L (n) µ of eq. (64) is obtained from the quadratic term in the Higgs potential. It reads The coefficient function is given by Note that the n = 2 contribution is the standard mass term for the scalar field: Up to now all expressions would equally apply to an unbroken gauge theory coupled to a scalar field with a quartic self-interaction. The theory is unbroken if m 2 = −µ 2 > 0. The remaining pieces in the Lagrangian of eq. (64) are all related to the spontaneously symmetry breaking and The coefficient functions are given by The term L (4) provides the masses for the electro-weak gauge bosons. Using momentum conservation the corresponding coefficient functions simplify to The remaining terms in the second line of eq. (64) read j (p 2 , ..., p n ) + δ We notice that there are no mixing terms between scalars and gauge fields. This is related to the fact that the term L +−− in eq. (40) transforms invariantly under the transformation given in eq. (42). The terms bilinear in the fields are most conveniently expressed in terms of the mass eigenstates. We change to a basis of mass eigenstates with a transformation analogously of eq. (7). In terms of the mass eigenstates we find The masses are given by We note that the pseudo-Goldstone fieldsφ 1 ,φ 2 andχ have exactly the same mass as the corresponding gauge bosons. In the MHV approach each gauge field has two transverse degrees of freedom. For each gauge field which acquires a mass there is an additional scalar pseudo-Goldstone field with the same mass, which provides the third degree of freedom.
Conclusions
In this article we considered a SU (2) ×U (1) gauge theory coupled to a scalar field with a potential which leads to a spontaneous symmetry breakdown. Starting from the standard Lagrangian of such a theory we derived an equivalent Lagrangian in the MHV formulation. Our main results are given in the formulae (64) to (83). These results describe the theory in terms of simple scalar propagators and towers of interaction vertices with an increasing number of gauge bosons. The list of the formulae might look at a first sight rather long, but one should keep in mind that these formulae are valid for an arbitrary number of gauge bosons. Therefore in processes with a high number of external gauge bosons these formulae lead to a simplification compared to a standard Feynman diagram approach.
A Inverse differential operators
In this appendix we discuss inverse differential operators. For simplicity we do this for functions of one variable. The generalisation to several variables is straightforward. Let f (x) be a function with the Fourier representation f (p) denotes here the Fourier transform of f (x). The ordinary derivative ∂ acts on the Fourier representation as The action of the inverse differential operator ∂ −1 on f (x) is defined through the Fourier representation As an example we have for From this example it follows that there is no product rule for inverse differential operators. If f 1 (x) = e −iq 1 x /(2π) and f 2 (x) = e −iq 2 x /(2π) then but We are interested in function spaces such that the function together with its generalised derivatives (ordinary derivatives and inverse derivatives) vanishes at infinity. We define the space F m,n as the space of functions f (x) such that Obviously we have for m ′ ≤ m and n ≤ n ′ F m ′ ,n ′ ⊂ F m,n . (94) If f , g ∈ F −1,0 we may use for the inverse differential operator partial integration without boundary terms:
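For f, g ∈ F−1,0 the announced integration-by-parts identity presumably reads ∫ dx f (∂⁻¹g) = − ∫ dx (∂⁻¹f) g. To make the action of ∂⁻¹ concrete, adopt the Fourier convention f(x) = ∫ dp/(2π) f̃(p) e^{-ipx} (the paper's sign conventions may differ), so that ∂ acts in momentum space as multiplication by −ip and ∂⁻¹ as multiplication by 1/(−ip). For the plane waves f₁(x) = e^{-iq₁x}/(2π) and f₂(x) = e^{-iq₂x}/(2π) one then finds

```latex
\partial^{-1}\!\left( f_1 f_2 \right) = \frac{1}{-i\,(q_1 + q_2)} \, f_1 f_2 ,
\qquad\text{but}\qquad
\left( \partial^{-1} f_1 \right) f_2 + f_1 \left( \partial^{-1} f_2 \right)
 = \left( \frac{1}{-i\,q_1} + \frac{1}{-i\,q_2} \right) f_1 f_2 ,
```

which illustrates that no Leibniz (product) rule holds for the inverse derivative.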
B Solution for the coefficients of the canonical transformation
In this appendix we give detailed information on how the solution for the canonical transformation is determined. We have to solve the integro-differential equations (58). This is most elegantly done by first solving the special case of equal couplings g ′ = g. The correct couplings are then restored in the final result. For equal couplings we combine the SU (2)and the U (1)field into a U (2)-field, which we denote by V a µ , where the index takes values from 0 to 3. The integro-differential equations which need to be solved read then Let us start with the equation for V a ⊥ ( x). We make the ansatz 2 Tr(I a I a 1 ...I a n ) d 3 x 1 ...d 3 x n ϒ ( x, x 1 , ..., x n )Ṽ a 1 ⊥ ( x 1 ) ...Ṽ a n ⊥ ( x n ) .
In addition we have to express the "old" conjugated fields V a ⊥ * ( x) and Φ † i 1 ( x) in terms of the "new" fields. The relevant equations to be solved are given in eq. (54). Adapted to the U (2)-case these equations read It is technically simpler to start with the scalar field Φ † i 2 ( x). We make the ansatz (I a 1 ...I a n−1 ) i 1 i 2 d 3 x 1 ...d 3 x n X ( x, x 1 , ..., x n )Φ † i 1 ( x 1 )Ṽ a 2 ⊥ ( x 2 ) ...Ṽ a n ⊥ ( x n ) .
Invasive Nontuberculous Mycobacterial Infections among Cardiothoracic Surgical Patients Exposed to Heater–Cooler Devices
Invasive nontuberculous mycobacteria (NTM) infections may result from a previously unrecognized source of transmission, heater–cooler devices (HCDs) used during cardiac surgery. In July 2015, the Pennsylvania Department of Health notified the Centers for Disease Control and Prevention (CDC) about a cluster of NTM infections among cardiothoracic surgical patients at 1 hospital. We conducted a case–control study to identify exposures causing infection, examining 11 case-patients and 48 control-patients. Eight (73%) case-patients had a clinical specimen identified as Mycobacterium avium complex (MAC). HCD exposure was associated with increased odds of invasive NTM infection; laboratory testing identified patient isolates and HCD samples as closely related strains of M. chimaera, a MAC species. This investigation confirmed a large US outbreak of invasive MAC infections in a previously unaffected patient population and suggested transmission occurred by aerosolization from HCDs. Recommendations have been issued for enhanced surveillance to identify potential infections associated with HCDs and measures to mitigate transmission risk.
Nontuberculous mycobacteria (NTM) typically cause infection in patients who are immunocompromised or have chronic lung disease (1)(2)(3)(4)(5) but have also caused healthcare-associated infections related to water sources such as showers and ice machines (6)(7)(8). Outbreaks of NTM infections have occurred among patients undergoing cardiac surgery; these typically involve surgical site infections or infections associated with contaminated products, such as prosthetic implants and cardioplegia solutions (6,7,9). Pulmonary infections are the most common disease manifestation of NTM, but 10% of NTM infections are extrapulmonary (2). Disseminated infections are uncommon among immunocompetent patients (10)(11)(12)(13)(14)(15) but are often serious and require treatment with a long, complicated regimen of antibiotic drugs (2).
During spring 2015, investigators in Switzerland reported an outbreak of invasive infections with Mycobacterium chimaera, a distinct species within the NTM category M. avium complex (MAC), associated with contaminated heater-cooler devices (HCDs) used during cardiopulmonary bypass for cardiac surgery (16). HCDs regulate the temperature of patient blood, cardioplegia solution, and warming/cooling blankets through a water circuit not intended to have contact with patients or their blood. Given this outbreak and similar outbreaks reported in other countries in Europe, European public health authorities have issued a warning regarding the risk for M. chimaera infections associated with HCDs (17).
In July 2015, a cluster of invasive NTM infections was identified among patients who underwent cardiothoracic surgery at Wellspan York Hospital in York, Pennsylvania, USA. The Pennsylvania Department of Health (PADOH) and the Centers for Disease Control and Prevention (CDC) conducted a field investigation to identify the extent of infections and determine associated risk factors and exposures to prevent further infections.
Setting
Wellspan York Hospital is a 585-bed community teaching hospital at which ≈650 cardiac surgeries are performed annually. Of these, ≈400 require cardiopulmonary bypass, which involves use of a HCD. Three operating rooms are used for cardiothoracic surgery.
Initial Case Finding
We searched a database of microbiology results to identify all NTM-positive blood, sputum, pleural fluid, and tissue specimens at this hospital during the previous 5.5 years (January 1, 2010, to July 1, 2015). We cross-referenced patients with an NTM-positive specimen with the hospital's surgical database to determine whether they underwent surgical procedures during an exposure period 30 days to 3.5 years preceding the NTM-positive specimen collection date. Surgical procedures occurring <30 days before an NTM-positive specimen was collected were excluded because of the likelihood that they were either diagnostic or therapeutic procedures for a suspected NTM infection (and therefore not responsible for NTM transmission). Surgical procedures occurring >3.5 years before an NTM-positive specimen was collected were excluded because available published reports suggested that most NTM infections were diagnosed within 3.5 years after cardiac surgery (16). To explore whether NTM infection rates differed by surgery category, we calculated the rate of NTM-positive patients (per 10,000 operations performed) for the 3 most common surgical categories (cardiothoracic, general surgery, or orthopedic) and compared these rates using the Fisher exact test.
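As a hedged illustration of this comparison, the Fisher exact test can be run on a 2×2 table of NTM-positive patients versus operations performed; the counts below are placeholders, not the actual study numbers.

```r
# Hedged illustration of the rate comparison between surgical categories
ntm_table <- matrix(c(20, 10000 - 20,   # cardiothoracic: NTM-positive vs. not
                       8, 10000 - 8),   # general surgery: NTM-positive vs. not
                    nrow = 2, byrow = TRUE,
                    dimnames = list(c("cardiothoracic", "general"),
                                    c("NTM_positive", "NTM_negative")))
fisher.test(ntm_table)   # two-sided Fisher exact test on the 2x2 table
```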
Case-Control Study
We found that NTM-positive specimens occurred at a higher rate among cardiothoracic surgical patients than among patients in other major surgical categories. Given this finding and recent reports suggesting that HCDs are a potential risk factor for NTM infection, we conducted a case-control study to identify risk factors associated with invasive extrapulmonary NTM infections among patients who underwent cardiothoracic surgery at Wellspan York Hospital.
Case Definition
Inclusion criteria for case-patients were an extrapulmonary NTM-positive specimen collected during 2010-2015 and a cardiothoracic surgery during 2009-2014 occurring during the exposure period (30 days-3.5 years before collection of the NTM-positive specimen). We excluded patients with NTM-positive specimens collected before 2010 because acid-fast bacillus tissue cultures before 2010 were not included in the microbiology database; patients with a history of MAC infection (or a MAC-positive specimen) before cardiothoracic surgery, which suggests that their infection could not be temporally attributed to their cardiac surgery; patients whose cardiothoracic surgeries occurred before 2009, because surgical documentation in the electronic medical record at that time was less standardized and reliable; and patients with only NTM-positive pulmonary specimens, because patients with pulmonary infections have been shown to differ epidemiologically from patients with other types of NTM infections (11,18). All patients with NTM-positive specimens from an extrapulmonary sterile body site were included.
Control Selection
We selected 48 unmatched controls at random from a list of all patients who underwent cardiothoracic surgery at this hospital during 2009-2014 and who had no history of MAC infection. Because controls did not have an NTM-positive specimen date to determine the surgical exposure period (30 days-3.5 years before the NTM-positive specimen collection date), we assigned an index date based on the median incubation period of all patients with NTM infection (397 days from cardiothoracic surgery to NTM-positive specimen) and used this date to determine a comparable exposure period.
Data Collection
We abstracted patient demographic characteristics, medical history or risk factors, outcomes, and NTM specimen information (for case-patients only) from patients' electronic medical records. We also collected perioperative and hospital exposures for every surgery that patients underwent during the exposure period. For the 1 control-patient who had 2 cardiothoracic surgical procedures that required a cardiopulmonary bypass machine to be operational in the room, we summed time of surgery and time connected to the bypass machine to reflect cumulative exposure for both operations.
Infection Control, Environmental, and Laboratory Assessment
We conducted interviews with healthcare personnel and directly observed operating room practices during cardiac surgery. We reviewed the facility's perioperative protocols and the HCD manufacturer's instructions for use. Before the field team's arrival, all 3 HCDs used for cardiothoracic surgery at Wellspan York Hospital had been removed from service and replaced with new ones. During the field investigation, we collected water samples from the decommissioned HCDs, the new HCDs introduced during the investigation, the nearest scrub sink, and 2 ice machines that supplied nonsterile ice to the operating rooms. We collected swab samples from the internal water reservoirs of the 3 HCDs. We disassembled 1 HCD to permit a more thorough inspection. Another HCD was operated in an empty cardiothoracic operating room during a simulation in which we collected water samples from the HCD and air samples from various locations within the operating room (18 inches from the HCD exhaust vent and next to the operating room exhaust vents located in the room corners) in 200-L and 500-L volumes using an impaction air sampler (SAS 90; Bioscience International, Rockville, MD, USA) before starting the HCD and then intermittently over 5 hours after starting the HCD.
Three case-patient isolates (2 from blood and 1 from bone marrow) were available for further characterization. Patient isolates and environmental samples were sent to CDC for testing, including culture isolation, acid-fast bacillus staining, identification by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry and 16S and rpoB sequencing, as well as molecular typing by pulsed-field gel electrophoresis (PFGE) and whole-genome sequencing.
Statistical Analyses
We compared patient demographic and clinical characteristics between case-patients and control-patients using the Fisher exact test for categorical variables and 2-sample t test and Wilcoxon 2-sample test for continuous variables. To assess the association between case status and surgical exposures, we initially conducted univariable logistic regression with Firth's penalized maximum likelihood method that accounts for small sample size, issues of data separability, and bias of the parameter estimates, and obtained crude odds ratios (ORs), 95% CIs, and p values. Several surgical exposure variables associated with increased odds of case status were further tested for association with exposure to cardiopulmonary bypass with HCD using the Fisher exact test. Because undergoing a surgery requiring cardiopulmonary bypass with HCD was associated with major cardiothoracic surgery (p<0.0001), presence of a central line (p<0.0001), and implantation of an artificial valve or graft (p<0.0001), we did not include these variables in the multivariable regression analysis that examines the association between case status and length of HCD exposure.
To examine the association between case status and length of HCD exposure, we used 2 different, but related, exposure-length variables: surgical time with HCD (time a patient is in the operating room while an HCD is operated) and time on cardiopulmonary bypass (time when a patient's blood is routed through the bypass machine). We conducted multivariable logistic regression with Firth's penalized maximum likelihood method to evaluate such relationships and examined collinearity between factors considered in the multivariable models. Because of the notable correlation detected between surgical time with HCD and time on bypass (Pearson ρ = 0.9, variance decomposition proportion for surgical time with HCD = 0.92, and variance decomposition proportion for pump time = 0.88 at a given condition index of 10), we analyzed these 2 exposure-length variables in separate models. For each exposure-length variable, we began with a full saturated model that also included several potential patient factors, all of which were removed by using a backward elimination method owing to nonsignificance at an α of 0.05, except for immunocompromised status, which was retained because of the biological plausibility of this medical condition affecting the risk for invasive NTM infections. We used SAS statistical software version 9.3 (SAS Institute, Inc., Cary, NC, USA) for all analyses.
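A minimal sketch of the penalized-likelihood approach is shown below in R using the logistf package; the original analysis was performed in SAS 9.3, and the data frame, variable names, and dichotomization shown here are illustrative assumptions only.

```r
# Hedged sketch of Firth's penalized-likelihood logistic regression in R
library(logistf)

# Univariable screen of a surgical exposure (placeholder variable names)
fit_uni <- logistf(case ~ bypass_hcd, data = dat)
summary(fit_uni)                  # penalized estimates with profile-likelihood CIs
exp(coef(fit_uni))                # crude odds ratios

# Multivariable model: dichotomized HCD exposure time, adjusted for immune status
fit_multi <- logistf(case ~ I(hcd_time_hr > 5) + immunocompromised, data = dat)
summary(fit_multi)
```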
Initial Case Finding
Among 144 patients with an NTM-positive specimen collected during January 1, 2010, to July 1, 2015, 48 (33%) underwent >1 surgery during the exposure period. The rate of NTM-positive specimens was noticeably higher among cardiothoracic surgical patients (20 patients/10,000 surgeries) than the rates among patients in other common surgical categories, including general surgery (8 patients/10,000 surgeries; p = 0.04) and orthopedic surgery (5 patients/10,000 surgeries; p = 0.004). Approximately 2,276 surgical procedures with HCDs were performed during this period.
Of the 20 NTM-positive patients who had >1 prior cardiothoracic surgery, we excluded 10 patients based on the case definition (Figure 1), which left 10 patients with invasive extrapulmonary infections, all of whom demonstrated clinical signs of infection at the time of specimen collection. One of the patients who was excluded because of a previous cardiothoracic surgery before 2009, when surgical documentation in the electronic medical record was less standardized and reliable, was later included in the analysis to increase the number of cases, given the low sample size. Demographic and clinical details of these 11 case-patients are shown in Table 1. Most case-patients (8, 73%) had specimens positive for MAC. Five (45%) casepatients were considered to have a thoracic infection, with only specimens from sterile thoracic sites testing positive for NTM. The remaining 6 (55%) case-patients had extrathoracic infections with NTM-positive specimens obtained from sterile body sites outside the thoracic cavity, which likely represented disseminated infections. A large proportion (63%) of patients died, although the cause of death was not necessarily attributable to NTM infection. Of the 11 patients who met the case criteria, 0-3 cases occurred each year during 2010-2015. The median infection latency (length of time between patients' most invasive cardiothoracic surgery and NTM-positive specimen collection date) was 1.2 years (range 0.1-2.3 years).
Case-Control Study
Case-patients and control-patients did not differ noticeably on demographics (age, gender, or race) or predisposing medical conditions (chronic lung disease, diabetes, or immunocompromised state) ( Table 2). However, sarcoidosis was more likely to have been diagnosed in case-patients than in control-patients (27% versus 0%; p = 0.005).
Overall, the number of surgical exposures, and specifically cardiothoracic surgical exposures, was similar among case-patients and control-patients (Table 3). Case-patients had 1-3 cardiothoracic surgeries during the exposure period, but none had >1 cardiothoracic surgery requiring cardiopulmonary bypass. Case-patients had greater odds of major cardiothoracic surgery compared with control-patients; all case-patients had undergone either major cardiac surgery or major thoracic surgery. Having major cardiac surgery was associated with increased odds of invasive NTM infection (OR 4.9, 95% CI 1.2-27.5; p = 0.04). Specifically, having aortic surgery increased the odds of being a case-patient 82-fold (95% CI 3.2->999.9; p = 0.008). Whereas 45% of case-patients had aortic surgery, none of the control-patients had undergone this procedure, likely reflecting its rarity (n = 154, 1.6% of cardiothoracic surgeries). Nine (82%) case-patients had cardiothoracic surgery requiring cardiopulmonary bypass, and most (8, 89%) also had a specimen positive for MAC (Table 1). Undergoing cardiopulmonary bypass with an HCD was also associated with significantly higher odds of invasive NTM infection (OR 5.3, 95% CI 1.3-29.2; p = 0.03). The use of cardiopulmonary bypass with an HCD was correlated with undergoing major cardiothoracic surgery (p<0.0001), placement of a central line (p<0.0001), and implantation of an artificial valve or graft (p<0.0001); these factors were not independently associated with case status. The mean surgical time when an HCD was required and mean time on bypass machine were both significantly longer for case-patients than for controls (surgical time, p = 0.003; time on bypass, p = 0.002).
No patient characteristics were significantly associated with increased odds of case status ( Table 4). Odds of invasive NTM infection increased for progressively longer surgery times with HCD and longer time on bypass, although this reached statistical significance only for surgery time with HCD >5 hours and time on bypass >2 hours. Using the final logistic regression model in which surgery time with HCD was a dichotomous variable (models 1.1 and 1.2), the odds of NTM infection for surgical time with HCD >5 hours was 13.2 times and 13.6 times higher than for surgical time with HCD <5 hours, respectively, both without and with adjustment for immunocompromised status. Similarly, time on bypass for >2 hours (models 2.1 and 2.2) was associated with significantly higher odds of NTM infection both without (OR 16.5, 95% CI 3.8-84.0; p = 0.0004) and with (OR 16.6, 95% CI 3.8-88.4; p = 0.0006) adjustment for immunocompromised status.
Infection Control and Environmental Assessment
An infection control assessment focusing on perioperative practices did not identify any breaches related to operating room ventilation, water use, storage, operating practices, and patient care. The hospital had 3 HCDs, all LivaNova (formerly Sorin Group Deutschland GmbH, Munich, Germany) Stöckert Heater-Cooler 3T Systems (referred to as 3T HCDs); 1 was acquired in 2009 and 2 were acquired in 2012. The manufacturer revised its instructions for use in February 2015 and made additional revisions in a June 2015 field safety notice, including recommendations for more frequent and higher potency disinfection and for positioning the HCD so that its exhaust vent is directed away from the surgical field. Periodic updates to the manufacturer's disinfection recommendations may have resulted in inconsistencies between the hospital's HCD cleaning and disinfection practices before June 2015 and the manufacturer's recommendations at that time. Before the onsite investigation, sterilized water had been used in HCDs at the hospital, but all 3 HCDs were replaced and appropriate changes made to ensure compliance with the most recent manufacturer's operating instructions. When a decommissioned HCD was disassembled for further inspection, biofilm was visible on tubing and surfaces submerged in an internal water reservoir (Figure 2).
Laboratory Assessment
Water samples from all 3 decommissioned HCDs and swab specimens of the biofilm from the 1 disassembled HCD tested positive for M. chimaera (Table 5). Water samples from a scrub sink near the cardiothoracic operating rooms and ice machines used for nonsterile purposes in the operating room tested positive for rapid-growing NTM species but not M. chimaera. Culture results from water samples collected from the new HCD before installation were also negative. However, the concentrations of non-NTM bacteria detected in HCD water samples were higher after operating the device than before (150,000 vs. 116 colony-forming units/mL). Air samples collected 18 inches from the HCD exhaust vent during the operating room simulations were found to be positive for M. chimaera after 2, 3, and 4 hours of HCD operation (Table 5). All remaining air samples, including those collected before starting the HCD, were negative for all NTM.
All 3 available case-patient isolates were identified as M. chimaera; these and the environmental M. chimaera isolates (obtained from air and HCD samples) were found to be highly related by PFGE (Table 5; Figure 3). Subsequent whole-genome sequencing results confirmed the PFGE analysis; M. chimaera sequences from clinical isolates, the HCDs, and air samples were highly related (19).
Discussion
This investigation confirmed a prolonged outbreak of invasive MAC infections associated with cardiac surgery requiring cardiopulmonary bypass with exposure to 3T HCDs, similar to reports from Europe (16). The infection rate was low (8 cases/2,276 surgeries with HCDs) among those who were exposed, making recognition of this outbreak difficult. We describe a case-control study in which 8 case-patients likely had HCD-related M. chimaera and MAC infections. Laboratory testing suggested a common source of M. chimaera transmission from the HCDs through aerosolization, which is consistent with studies demonstrating NTM's high propensity for aerosolization (20).
Our investigation had several limitations. Presentation with nonspecific symptoms and the low clinical suspicion by providers make diagnosis of invasive NTM infections difficult in this patient population. Challenges in diagnosis and clinical follow-up may have resulted in decreased sensitivity of case detection and misclassification bias. Because of the retrospective nature of this study and the prolonged period of the outbreak, it was not practical to obtain all potential health records from any healthcare facility at which patients may have been treated. Many clinical isolates from case-patients were unavailable, preventing further species identification of MAC specimens as M. chimaera. We used a maximum exposure window of ≈3.5 years for our analysis based on published reports available during the time of the investigation (16), but subsequent reports have suggested that the time between exposure and diagnosis can be as long as 6 years (21). However, preliminary review did not identify any additional patients with exposures >3.5 years before specimen collection who would have qualified for inclusion in this analysis.
Our statistical analysis was limited by the small sample size and high correlation between various surgical exposures. Laboratory isolation of certain NTM species is hindered by their slow growth. Additionally, many clinical laboratories are unable to perform species-level identification of MAC to M. chimaera and would not be able to identify a cluster of infections caused by this organism.

Based on this investigation, CDC and the PADOH made several recommendations to the hospital to enhance detection and surveillance of HCD-related NTM infections and to mitigate risk during future cardiac surgeries. These recommendations included increasing awareness among patients and providers to facilitate earlier diagnosis and treatment. Wellspan York Hospital notified 1,300 cardiac surgery patients and their providers of the potential exposure and established an NTM clinic to evaluate and monitor these patients. Subsequently, an additional 4 patients with likely HCD-associated infections were identified. CDC and the PADOH also issued recommendations to mitigate the risk for HCD-related NTM infections prospectively by ensuring compliance with the manufacturer's cleaning and disinfection instructions, positioning the HCD to minimize aerosolized particles from reaching patients, and using filter-sterilized water to decrease HCD contamination.
PADOH has been proactive in raising awareness of MAC infections related to the 3T HCDs, publicly reporting and investigating these initial cases and raising awareness of the issue among healthcare facilities and clinicians in Pennsylvania by issuing a statewide health advisory (22). Since October 2015, the Food and Drug Administration (FDA) and CDC have issued multiple nationwide communications to raise awareness, improve identification of contaminated HCDs and HCD-related infections, encourage notification of potentially exposed patients, and mitigate risk (23)(24)(25)(26)(27)(28)(29), including broad outreach to professional societies of providers caring for this patient population. CDC and FDA have continued to receive reports of NTM-contaminated devices and related infections with M. chimaera, and an FDA advisory panel was convened (30). Recent evidence suggests that contamination likely occurred during the manufacturing of the 3T HCDs; nearly identical strains of M. chimaera have been detected among samples from infected patients, HCDs from 3 different countries, and the 3T manufacturing plant (31). Whole-genome sequencing analysis of clinical and 3T HCD M. chimaera isolates from geographically distinct areas of the United States have demonstrated closely related strains, also suggesting point-source contamination of the 3T HCDs (19).
Additional areas for research include the effectiveness of disinfection practices given NTM's propensity to form biofilm, as well as device design issues to decrease NTM growth and aerosolization (30). Hospitals and public health services should continue to raise provider and patient awareness about the risks of the 3T HCD (25). Short-term solutions to minimize risk, such as passing device exhaust through a HEPA filter or moving the device outside the operating room, may affect the design and functionality of the HCD and require careful examination (28). In addition, because clusters of extrapulmonary NTM infections may be a frequently underrecognized sentinel of medical device or environmental contamination that decreases the safety of surgical procedures, reporting of such infections to public health is a key patient safety measure.
In conclusion, our investigation confirmed an outbreak of MAC infections in which undergoing cardiac surgery requiring cardiopulmonary bypass with an HCD was associated with increased odds of infection, even in immunocompetent patients. Environmental sampling results suggest that airborne transmission occurred through aerosolization and dispersal of MAC while an HCD was operational. These findings highlight the need for increasing awareness of invasive NTM infection risk among cardiac surgery patients exposed to 3T HCDs; identifying best practices for notifying, evaluating, and managing potentially infected patients; and identifying options for mitigating infection risk from these devices.
TiO2−x films for bolometer applications: recent progress and perspectives
The bolometer is widely used in military and civilian infrared imaging because it requires no cooling and is small and portable. The thermosensitive material strongly affects bolometer performance. As a heat-sensitive material, TiO2−x offers good thermal stability, large-area preparation, and compatibility with the complementary metal-oxide semiconductor (CMOS) process. However, there is almost no review on the application of titanium oxide in bolometers. In this paper, we introduce the bolometer's main thermal and photoelectric performance parameters and the critical technologies used to manufacture bolometers. Finally, we particularly emphasize the effects of the TiO2−x preparation process parameters on performance parameters such as the temperature coefficient of resistance (TCR) and 1/f noise.
Introduction
A bolometer is a thermal detector in which the absorption of thermal radiation changes the resistance of a sensitive material. The first bolometer was built for radiation metrology as early as 1880 [1], and its application to infrared imaging dates only from recent decades. In 1966, Putley fully described the infrared detection principle of the thin-film bolometer and established a theoretical model of its response and noise limits [2]. However, bolometers at that time were too large to be made into array devices. In 1987, Johnson proposed using anisotropic silicon processing to create a silicon nitride thin-film microbridge as the thermal isolation structure for thermal detectors [3]. In 1992, the Honeywell Research Center successfully manufactured an uncooled microbolometer infrared imaging device [4]. Since then, microbolometers have been widely used in night-driving assistance systems for vehicles and ships, security monitoring, building energy-efficiency inspection, industrial temperature measurement, environmental monitoring, and other applications. A bolometer must address three main tasks: absorbing radiation energy with an absorber structure, retaining the absorbed energy with a thermal isolation structure, and converting the energy into an electrical signal with the heat-sensitive material. To achieve a high-performance bolometer, a heat-sensitive material with a high temperature coefficient of resistance (TCR), low noise, and suitable resistance is necessary.
In recent years, a few kinds of heat-sensitive materials have been developed. Vanadium oxide (VO x ), amorphous silicon (a-Si), and carbon nanotubes (CNTs) are widely accepted heat-sensitive materials due to their high TCR and acceptable noise levels. These materials have high TCR (approximately 2∼3%/K for VO x [5], 1∼4%/K for a-Si [6], and 0.26%/K for CNTs [7]) and suitable noise characteristics for bolometers (1/f noise parameter: low 10⁻¹³ for VO x , low 10⁻¹¹ for a-Si). However, VO x has the drawback of poor reproducibility because it undergoes a phase change caused by the heating that occurs after the oxide film is formed [8]. The a-Si is known for its high 1/f noise, which is detrimental to the bolometer [9,10]. The diameter of CNTs essentially determines the width of the material's band gap [7]; if CNTs with an inappropriate diameter are used as the surface adsorption material of the detector element, the infrared absorption of the detector will be affected. Refer to table 1 for the relevant parameters.
As a metal oxide semiconductor, titanium dioxide (TiO 2 ) has attracted much attention because it is nontoxic, chemically stable, has a high dielectric coefficient, and can be prepared by diverse and straightforward methods [16]. TiO 2 is widely used in sensitized solar cells, gas sensors, resistive switching memory, photocatalysis, and memristors [17][18][19][20][21]; its suitability for these fields is related to its structure and properties [16]. Non-stoichiometric titanium oxide (TiO 2−x ) means that Ti is present in different chemical states, Ti 3+ and Ti 4+ , corresponding to Ti 2 O 3 and TiO 2 . Ti 2 O 3 is a conductive material, while TiO 2 is an insulator with a high resistivity of about 10⁸ Ω·cm. In comparison, TiO 2−x with excess titanium is an n-type semiconductor with unique electrical properties [22]. Recently, it has been reported that non-stoichiometric TiO 2−x films can be used as thermistor materials for uncooled bolometers [23,24], and research has shown that TiO 2−x also performs well in radiometers [25]. Researchers worldwide have therefore made notable progress in improving the electrical performance of titanium oxide materials. Reddy et al [8] studied the influence of various factors on the bolometric properties of TiO 2−x films. They reported that thermal annealing at 300°C slightly decreased the bandgap of the TiO 2−x sample, and that the 1/f noise parameter, resistivity, and TCR also decreased; the TCR of the TiO 2−x film samples reached up to 3.66%/K with a 1/f noise parameter of 1.89 × 10⁻¹¹. They also investigated the effect of oxygen partial pressure and Nb doping on the TiO 2−x film properties, finding that as the oxygen partial pressure increased, the resistivity increased, which enhanced the TCR value to about 3.6%/K [22]. The Nb-doped TiO 2−x films showed controllable resistivity, a low 1/f noise parameter of about 10⁻¹², and a relatively high TCR value of 3.1%/K [25]. Tanrikulu et al [26] synthesized TiO 2−x thin films by atomic layer deposition grown at 150°C and annealed at 300°C, which have a very high TCR of 9%/K. Ju et al [27,28] fabricated amorphous TiO 2−x thin films at room temperature by controlling the substrate temperature and oxygen partial pressure; the films' electrical properties could be adjusted as the O/Ti ratio changed from 1.73 to 1.97, and the TCR of the samples varied from 1.2 to 2.3%/K. Building on this materials research, Kwon et al [29] reported the first TiO 2−x film based 50 μm pitch microbolometer, with an NETD of 34 mK. Jeong et al [30] reported TiO 2−x based focal plane arrays with array sizes of 640 × 480 pixels and 1024 × 768 pixels in 2018; the noise equivalent temperature difference (NETD) and time constant of the VGA detector were 40.5 mK and 8.5 ms, respectively. TiO 2−x has been widely studied in recent years because of its easy availability, environmental friendliness, and good stability in harsh environments, and many researchers have carried out in-depth research on TiO 2−x films. TiO 2−x is therefore a potential substitute for mainstream bolometer heat-sensitive materials at this stage.
Bolometers: main characteristics and tradeoffs

Heat flow equation

All thermal IR detectors exhibit a change in some measurable property that accompanies a change in the temperature of the sensitive element (the picture element, or pixel) caused by the absorption of IR radiation. The basic principle of a bolometer infrared detector is to use a thermally sensitive detection material whose resistance value changes correspondingly as the temperature changes.
The infrared detector has measurable characteristics that change with the temperature of the sensitive element. The focal plane array is composed of a two-dimensional arrangement of thermal pixels, each connected to the supporting substrate; its structural diagram is shown in figure 1. The thermally sensitive material covering each pixel area absorbs infrared radiation and its temperature increases, so that heat flows from the sensitive area to the surrounding environment.
There are three common heat transfer mechanisms: conduction, convection, and radiation. Heat can flow by conduction from the sensitive area and its support to the substrate. If the sensitive area is continuous, heat also flows from the sensitive area of one pixel to adjacent pixels, which is called transverse heat flow; this should be avoided in the design of a thermal detector because it reduces image resolution. If the array is not installed in a vacuum package, heat will also flow through the surrounding atmosphere. In general, this effect is not considered because the array package is usually evacuated; even if the package is not evacuated, heat loss from the sensitive element through the gas occurs mainly by conduction rather than convection. When the main heat-loss mechanism is radiation, the array operates at the background limit, which sets the fundamental limit of its performance. The thermally sensitive material absorbs the infrared radiation in the detector's absorption layer. Part of the absorbed radiation power is dissipated by thermal radiation and heat conduction, and the rest is stored, raising the temperature of the thermal element. Heat transfer between the heated array and the surrounding air also causes heat loss, which is usually avoided by vacuum packaging. A final thermal balance is established between the energy stored as a temperature rise of the thermal array and the energy dissipated by heat conduction and infrared radiation.
Let the sensitive area of a pixel have a heat capacity C. Assume that the thermal conductivity of the primary heat loss mechanism, which is usually the thermal conductivity of the support structure, is G. Time-modulated IR radiation with a power amplitude φ 0 falls on the pixel; the fraction of incident radiation that is absorbed is η, the angular modulation frequency of the radiation is ω, and the temperature rise of the pixel's sensitive area is ΔT. When infrared radiation of amplitude power φ 0 , sinusoidally modulated at angular frequency ω, falls on the pixel, the heat flow equation describing the pixel is

C·d(ΔT)/dt + G·ΔT = ηφ 0 e^(iωt), (1)

and its steady-state solution is

ΔT = ηφ 0 / √(G² + ω²C²).

Obviously, for a given infrared radiation, the higher the ΔT of the thermal element, the more sensitive the detector material. As the modulation frequency ω increases, the heat-capacity term ωC becomes more significant than the thermal conductivity G, so continuing to increase the frequency reduces ΔT. The ratio of these two parameters reflects the response time of the thermal detector: τ is the thermal response time, defined as

τ = C/G,

so that the steady-state solution of equation (1) can be expressed as

ΔT = ηφ 0 / [G·√(1 + ω²τ²)].
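As a rough numerical illustration of this relation, the Python sketch below evaluates the steady-state temperature rise for a few modulation frequencies. All parameter values (η, φ 0 , G, C) are assumptions chosen only to show typical orders of magnitude for a microbolometer pixel, not data from any cited device.

```python
# Minimal numerical sketch of the steady-state temperature rise of a bolometer pixel,
# using the heat-flow solution above; the parameter values are illustrative only.
import math

eta = 0.8          # absorbed fraction of incident radiation (assumed)
phi0 = 1e-9        # incident radiation power amplitude, W (assumed)
G = 1e-7           # thermal conductivity to the substrate, W/K (assumed)
C = 1e-9           # heat capacity of the pixel, J/K (assumed)
tau = C / G        # thermal response time, s

for f in (1, 10, 30, 100):                  # modulation frequencies, Hz
    omega = 2 * math.pi * f
    dT = eta * phi0 / (G * math.sqrt(1 + (omega * tau) ** 2))
    print(f"f = {f:4d} Hz  ->  dT = {dT:.3e} K (tau = {tau*1e3:.1f} ms)")
```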
Bolometer performance parameters
The main performance parameters of the bolometer are the voltage response rate (R), the noise equivalent power (NEP), and the detectivity (D*). The temperature change of the bolometer causes a change in a physical quantity, and the final measurement is usually a voltage signal, so the detector's voltage response rate is an essential performance parameter. The bolometer's voltage response rate is the ratio of its output voltage signal (V) to the incident radiation power (P). It is generally expressed by

R = V/P.

The bolometer's noise equivalent power is defined by the condition that the output voltage generated by the infrared radiation projected onto the bolometer is exactly equal to the bolometer's noise voltage; the radiation power of this infrared radiation is then called the noise equivalent power.
NEP = V T /R,

where V T is the total noise voltage, described in detail in later sections, and R is the voltage response rate. In order to characterize the performance of detectors with different areas and noise bandwidths, the detectivity D* is introduced:

D* = √(A·Δf) / NEP,

where A is the area of the sensitive element and Δf is the amplifier bandwidth. The detectivity is the signal-to-noise ratio obtained when unit-power radiation is incident on a unit area of the sensitive element within a unit amplifier bandwidth.
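The short Python sketch below illustrates how these three figures of merit are linked. The signal, noise, area, and bandwidth values are assumptions chosen only to produce plausible orders of magnitude and are not measured values from any of the cited devices.

```python
# Hedged sketch of how responsivity, NEP, and detectivity D* relate; all numbers
# below are illustrative assumptions.
import math

V_signal = 1e-3    # output voltage signal, V (assumed)
P_in = 1e-7        # incident radiant power, W (assumed)
V_noise = 1e-7     # total noise voltage, V (assumed)
A = (25e-4) ** 2   # pixel area, cm^2 (25 um pitch, assumed)
df = 30.0          # noise bandwidth, Hz (assumed)

R = V_signal / P_in                 # voltage response rate, V/W
NEP = V_noise / R                   # noise equivalent power, W
D_star = math.sqrt(A * df) / NEP    # detectivity, cm Hz^(1/2) W^(-1)
print(f"R = {R:.2e} V/W, NEP = {NEP:.2e} W, D* = {D_star:.2e} cm Hz^1/2 / W")
```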
Temperature coefficient of resistance
TCR is an essential parameter for measuring thermal sensitive materials, and its value is also related to the temperature environment. Most objects will change their resistance with the change of surrounding temperature [32]. Therefore, the TCR value of the thermal material can be calculated by measuring the resistance of the thermal material.
Assuming that the temperature increase ΔT of the heat-sensitive material due to the absorption of IR radiation is small enough that the resistance change ΔR is linear in ΔT, that is,

ΔR = (dR/dT)·ΔT,

the parameter that indicates the relationship between material resistance and temperature is the TCR, defined as the relative rate of change of resistance with temperature and expressed by α:

α = (1/R)·(dR/dT).
It can be seen that a larger α means the material is more sensitive to temperature: for a given temperature change, the output signal of the detector will be larger. Therefore, TCR is an important criterion for assessing the thermal properties of thermally sensitive materials.
All heat-sensitive materials can be divided into three categories: metals, semiconductors, and superconductors, and the TCR can be either positive or negative. For metals at room temperature it is positive, meaning that the resistance increases with increasing temperature; such a material is called a positive temperature coefficient (PTC) heat-sensitive material. Common metal materials are nickel, bismuth, platinum, and antimony, generally used at low temperatures. Their typical α is 0.03%/K, with a specific detectivity of 1 × 10⁸ cm Hz^(1/2) W^(-1) and a response time of about 10 ms. These materials are quite brittle at low temperatures, making it challenging to form arrays for imaging. For semiconductors at room temperature the TCR is usually negative, called a negative temperature coefficient (NTC). Most transition metal oxide semiconductors have an NTC. Among them, a class of semiconductor materials represented by vanadium oxide has a large NTC: a slight temperature rise within a specific temperature range causes a resistance drop of 3∼4 orders of magnitude. TiO 2−x is this kind of semiconductor material.
Superconductors include nickel-tin, lead-tin, NbN, etc., and their TCR is positive. Near the superconducting transition temperature, a tiny temperature change causes a significant change in resistance, and the response time can reach the order of microseconds or even nanoseconds. Therefore, the superconducting bolometer is a highly sensitive system. Superconducting materials, however, need to work at low temperatures close to that of liquid helium, and the working temperature range is relatively narrow. With further study of high-temperature superconducting materials of the YBaCuO system, the operating temperature of superconducting bolometers has risen rapidly. Phong et al [33] prepared room-temperature YBaCuO microbolometers with TCR values up to 4%/K, and the optical responsivity and detectivity of the bolometers were 7 × 10⁴ V W^(-1) and 3 × 10⁹ cm Hz^(1/2) W^(-1) at low frequencies, respectively.
The temperature dependence of the resistance is expressed as

R = R 0 exp(E a /kT),

where E a is the activation energy, k is the Boltzmann constant, T is the absolute temperature, and R 0 is a constant. Combining this expression with the definition of the TCR gives

α = (1/R)·(dR/dT) = −E a /(kT²).
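For a sense of scale, the following Python sketch evaluates this relation at room temperature. The activation energy is an assumed value chosen only to illustrate the resulting magnitude of the TCR; it is not a measured value for any specific film.

```python
# Sketch relating activation energy to TCR via R(T) = R0*exp(Ea/kT) and
# alpha = -Ea/(k*T^2); Ea below is an assumption for illustration.
k_B = 8.617e-5      # Boltzmann constant, eV/K
Ea = 0.25           # activation energy, eV (assumed)
T = 300.0           # operating temperature, K

alpha = -Ea / (k_B * T ** 2)              # TCR, 1/K
print(f"TCR = {alpha * 100:.2f} %/K")     # about -3.2 %/K for Ea = 0.25 eV
```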
Noise mechanism
There are several noise sources, including readout integrated circuit (ROIC) related noise, thermal fluctuation noise, Johnson noise, and flicker (1/f) noise. Among them, Johnson noise and flicker noise are electrical noise, while thermal fluctuation noise is thermal noise caused mainly by fluctuations of the heat exchange between the device and the environment. They are all random noise, and the thermal noise is ultimately also expressed as electrical-signal fluctuations in the detector's output.
Thermal fluctuation noise and Johnson noise
When the temperature is T, the radiant flux of the heat-sensitive material is Φ = AεσT⁴, where A is the area, ε is the emissivity (absorptivity), and σ is the Stefan-Boltzmann constant. When the temperature rises by a small dT to T+dT, ignoring higher-order terms, the increment dΦ of the radiant flux is 4AεσT³dT, and the radiative thermal conductivity corresponding to this increment is

G rad = dΦ/dT = 4AεσT³.

The mean square thermal fluctuation noise power of the heat exchange between the bolometer and the environment within the frequency bandwidth is

⟨ΔP²⟩ = 4kT²GΔf,

and the corresponding root mean square voltage is expressed as

V tf = R·√(4kT²GΔf),

where R is the response rate, i.e. the ratio between the output electrical signal and the input infrared signal, Δf is the thermal noise bandwidth, and G is the effective thermal conductivity. Johnson noise, also known as Nyquist noise, is thermal noise caused by the random movement of carriers inside the resistive device colliding with the lattice atoms, so that the voltage across the resistance fluctuates irregularly about its mean value. Its value is closely related to temperature: when the temperature rises, the carrier movement intensifies and the Johnson noise becomes larger. The root mean square voltage of Johnson noise is

V J = √(4kTRΔf),

where k is the Boltzmann constant, T is the temperature of the resistive device, R is its resistance, and Δf is the bandwidth. From this equation we can conclude that the Johnson noise does not depend on the detector's bias voltage or frequency; it is a kind of 'white noise' whose value depends on the resistance, the temperature T, and the electronic bandwidth Δf.
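The Python sketch below evaluates both noise voltages for one set of assumed pixel parameters (resistance, thermal conductivity, responsivity, bandwidth). The values are placeholders chosen to give typical orders of magnitude, not a noise model of any particular device.

```python
# Illustrative sketch of the Johnson-noise and thermal-fluctuation-noise voltages
# for assumed pixel parameters.
import math

k_B = 1.38e-23     # Boltzmann constant, J/K
T = 300.0          # temperature, K
R_bolo = 1e5       # bolometer resistance, Ohm (assumed)
G = 1e-7           # effective thermal conductivity, W/K (assumed)
df = 30.0          # noise bandwidth, Hz (assumed)
resp = 1e4         # voltage response rate, V/W (assumed)

v_johnson = math.sqrt(4 * k_B * T * R_bolo * df)          # V
v_thermal = resp * math.sqrt(4 * k_B * T ** 2 * G * df)   # V
print(f"Johnson noise: {v_johnson*1e9:.1f} nV, "
      f"thermal fluctuation noise: {v_thermal*1e9:.1f} nV")
```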
1/f noise

1/f noise, also called flicker noise or modulation noise, is a frequency-dependent noise. When the bolometer's operating frequency is in the low-frequency range below 1 kHz, the 1/f noise is relatively large; when the operating frequency is greater than about 1 kHz, it becomes roughly constant. The generation of 1/f noise is closely related to intrinsic defects of the crystal. The large low-frequency noise arises when the material of the photosensitive layer of the detector pixel is not uniformly distributed, or when there are impurities and defects in the heat-sensitive material of the bolometer; micro-spark discharges occur between material particles when current flows through the bolometer, generating micro-electric burst pulses. The voltage noise spectral density related to 1/f noise can be defined by the following equation:

S v (f) = k·v bias ²/f,

where k is the 1/f noise parameter, strongly influenced by the bolometer materials, and v bias is the bias voltage.
Of all the noise sources, thermal fluctuation noise is sufficiently low. Therefore, we consider only the Johnson and flicker noise components as essential contributors to the bolometer's performance. The Johnson noise measured under zero-bias conditions has an almost constant power spectral density over the entire frequency range, and its level lies below the 1/f noise, so it can often be ignored. In general, 1/f noise is the dominant type among the noise sources mentioned earlier. Because 1/f noise is low-frequency noise that mainly depends on the material, contact quality, and other factors, it can be reduced by improving the heat-sensitive material.
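As a short illustration of how the spectral density above translates into an rms noise voltage over a measurement band, the sketch below integrates S v (f) = k·v bias ²/f between two frequencies. The noise parameter and bias voltage are assumed values of a typical order of magnitude, not results from the cited films.

```python
# Minimal sketch of the rms 1/f (flicker) noise voltage obtained by integrating
# S_v(f) = k * V_bias^2 / f over a frequency band; parameter values are assumed.
import math

k_noise = 1e-12    # 1/f noise parameter (assumed, order of magnitude typical of oxides)
V_bias = 2.0       # bias voltage, V (assumed)
f_low, f_high = 1.0, 1000.0   # integration band, Hz

# Integral of k*V_bias^2/f from f_low to f_high is k*V_bias^2*ln(f_high/f_low)
v_flicker = math.sqrt(k_noise * V_bias ** 2 * math.log(f_high / f_low))
print(f"rms 1/f noise over {f_low:.0f}-{f_high:.0f} Hz: {v_flicker*1e6:.2f} uV")
```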
Universal bolometric parameter (β)
The universal bolometric parameter (β) is the ratio between the TCR (α) value and the square root of the 1/f noise parameter (k). β is an important parameter for evaluating the bolometer's thermal materials, since it combines the TCR and the 1/f noise. It is given by

β = α/√k, with k = α H /(n·Ω) = K/Ω,

where α H is the Hooge parameter, n is the charge carrier density, K is the normalized Hooge parameter, and Ω is the volume of the material. TCR, 1/f noise, and β are considered essential evaluation indicators of a thermal material's performance. Therefore, research on heat-sensitive materials mainly focuses on reducing the 1/f noise and improving the TCR in order to improve the universal bolometric parameter (β).
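The sketch below simply evaluates β for one assumed pair of TCR and 1/f noise parameter values, to show how the figure of merit is computed; the numbers are illustrative only.

```python
# Sketch of the universal bolometric parameter beta = TCR / sqrt(k);
# the TCR and 1/f noise parameter below are assumed values.
import math

alpha = 0.03       # |TCR| of 3 %/K expressed as 0.03 per kelvin (assumed)
k_noise = 1e-12    # 1/f noise parameter (assumed)

beta = alpha / math.sqrt(k_noise)
print(f"beta = {beta:.2e} K^-1")
```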
Structural design of bolometer
Many design features and tradeoffs should be considered to design an array of uncooled infrared bolometers with high sensitivity. Some of the most critical bolometer design parameters have been described in detail above, including high absorption rate of infrared radiation in a large area, bolometer temperature sensing material with a high TCR, low 1/f noise characteristics, and a sufficiently low thermal time constant of the bolometer. Furthermore, it is vital for commercial bolometer applications that the bolometer pixels are small enough. The reduction of the bolometer pixels would greatly increase the fill factor in a small area to reduce the cost and improve the focal plane array's resolution. By reducing the effective size of the focal plane array (FPA), the cost of FPA chips and infrared optics can be decreased.
The key to obtaining a high-performance bolometer is to design a thermal isolation structure with low thermal conductivity. The micro-bridge structure has the advantages of small size, simple processing technology, and low thermal conductivity, making it the first candidate for thermal insulation structures. Microbridge structures can be divided into single-layer and double-layer micro-bridge structures.
Single-layer micro-bridge structure
The schematic diagram of a traditional single-layer bolometer micro-bridge structure is shown in figure 2. The micro-bridge structure comprises three parts: supporting bridge legs, bridge piers, and the bridge deck, with the bridge legs and bridge deck in the same plane. The entire micro-bridge structure is suspended above the substrate using surface sacrificial-layer technology; the readout circuit of the microbolometer is integrated in the substrate. The heat-sensitive film is deposited on the bridge surface. When exposed to infrared radiation, the bridge surface absorbs the radiation and its temperature rises, which induces a change in the infrared-sensitive film. The electrical channel in the bridge legs transfers the electrical signal to the substrate's readout circuit to detect the infrared radiation.
The bridge legs' main functions in the microbridge structure are to provide mechanical support for the bridge deck, electrical channels to the readout circuit, and thermal insulation for the microbridge structure. The bridge deck's primary function is to absorb as much infrared radiation as possible so that the entire microbridge obtains a higher temperature change. Therefore, to achieve high infrared detection performance of the microbolometer while ensuring that the bridge legs provide sufficient mechanical support for the bridge deck, the thermal insulation of the bridge legs should be made as good as possible.
Once the bridge legs' material is determined, increasing the length-to-width ratio of the bridge legs is the principal means to improve the thermal insulation of the bridge legs. On the other hand, the bridge deck area should be increased as much as possible to improve the infrared absorption capacity of the bridge deck.
Therefore, we introduce the concept of the microbolometer unit fill factor. The fill factor of a microbolometer is defined as the portion of the bolometer pixel area used to absorb the incident infrared radiation. A higher fill factor means a stronger infrared absorption capability of the device. The fill factor of a traditional single-layer infrared bolometer is usually between 60% and 70%.
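As a simple numerical illustration of this definition, the sketch below computes the fill factor for an assumed 25 μm pixel pitch and a 20 μm square absorber; both dimensions are hypothetical and were chosen only so that the result falls in the 60%–70% range quoted above.

```python
# Rough sketch of the fill-factor definition for a single-layer pixel; the pitch and
# absorber dimensions are assumed values, not those of a specific design.
pitch = 25.0          # pixel pitch, um (assumed)
absorber = 20.0       # absorber (bridge deck) side length, um (assumed)

fill_factor = absorber ** 2 / pitch ** 2   # absorbing area / total pixel area
print(f"fill factor = {fill_factor:.0%}")  # 64% for these assumed dimensions
```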
To obtain higher thermal isolation and a larger fill factor in the same cell size, various shapes of micro-bridge legs have been widely reported, including I-type bridge legs [34], L-type bridge legs, U-shaped bridge legs [35], and snake-shaped bridge legs [36]. The I-shaped leg is the most typical bridge-leg shape, with the two supporting bridge legs forming a simple I shape, as shown in figure 3(a). This design ensures good thermal insulation performance while maintaining the fill factor of the bolometer. The L-shaped leg is an improved design with two supporting L-shaped bridge legs and roughly double the length-to-width ratio, as exhibited in figure 3(b), which further improves the thermal isolation of the microbridge while preserving the fill factor. The U-shaped leg's length-to-width ratio is also about twice that of the I-shaped leg, as displayed in figure 3(c); although its thermal insulation performance is not improved compared with the L-shaped leg, the U-shaped leg can effectively reduce the stress in the bridge. The serpentine leg can be seen as a combination of multiple U-shaped legs, as shown in figure 3(d); this bridge-leg structure achieves very high thermal insulation performance, but its cell fill factor is reduced at the same time.
Double-layer micro-bridge structure
To increase the bolometer pixel fill factor, the double-layer micro-bridge structure was created. The double-layer micro-bridge structure can take three forms: the umbrella-shaped, the eaves-shaped, and the hidden-bridge double-layer micro-bridge structures. The fill factor of double-layer infrared bolometers is up to 90%.
A typical umbrella-shaped double-layer microbridge structure is shown in figure 4. The umbrella-shaped micro-bridge structure includes an umbrella-shaped absorption layer, an optical cavity, a heat-sensitive film, bridge legs, bridge piers, and a substrate with integrated readout circuits. The bridge legs provide the electrical channels connecting the thermosensitive film and the substrate readout circuit. The umbrella-shaped absorption layer is at the top of the entire micro-bridge structure, which effectively improves the microbolometer unit's fill factor and enhances the infrared absorption rate of the device. The heat-sensitive film and the bridge legs are under the umbrella-shaped absorption layer. Besides, the dual optical cavity design plays an essential role as an optical resonant cavity: by adjusting the height of a specific cavity, the device's responsivity can be further improved.
The second structure is the eaves-shaped double-layered micro-structure, which can also be regarded as building an eaves structure above the traditional single-layer micro-bridge structure to increase the device's filling factor further and improve the infrared absorption capacity (figure 5). Moreover, there is ample space under the eaves structure to ensure the length-to-width ratio of the bridge legs and enhance the micro-bridge structure's thermal insulation performance. Thus, the infrared response rate of the bolometer can be improved.
The effective utilization of infrared radiation can also be enhanced by increasing the infrared absorption rate of the microbridge structure. Resonant optical cavity (Fabry-Perot) structure is the most popular method for uncooled microbolometer to enhance the absorption rate of a specific waveband, designed according to the principle of multi-layer thin film interference filtering. The typical structure is shown in figure 6.
Manufacturing techniques of bolometer focal plane arrays
The most commonly used manufacturing approaches for uncooled infrared bolometer FPAs are bulk silicon micromachining and surface micromachining. In bulk silicon micromachining, a chemical isotropic or anisotropic etching process in an etching solution is used to form micro-channels and micro-cavities in the silicon substrate so as to suspend the micro-bridge structure. Figure 7 shows the selective etching of the substrate under the bolometer to achieve an excellent thermal insulation effect.
The advantage of the bulk silicon process is its compatibility with the CMOS process, which means a low-cost bolometer. The disadvantages are that the shape of the cavity sidewall depends on the crystal plane and that a considerable part of the substrate material is consumed. In addition, bulk silicon micromachining is more challenging to integrate with the integrated circuit (IC) process.
The surface micromachining process is another manufacturing technique, shown in figure 8. It makes full use of the existing IC process and allows the microstructure to be controlled to a certain degree. The microbridge structure can be made without damaging the underlying readout circuit, which is very suitable for large-scale area array devices.
First, an electrical contact pad and a reflective layer are formed on the ROIC. Then the sacrificial layer, typically a high-temperature-stable polyimide, is spin-coated. Next, the bottom membrane is deposited by magnetron sputtering or a plasma-enhanced chemical vapor deposition process, and the thermally sensitive material TiO 2−x is deposited on the bottom film. After the passivation layer is formed, a photoresist is patterned to selectively remove the TiO 2−x material at the microbridge's connecting legs (figure 8(a)). Figure 8(b) displays the removal of the TiO 2−x layer. Then a hole is etched in the sacrificial layer to form an electrical connection between the electrical contact pad and the micro-bridge structure (figures 8(c) and (d)). Afterward, the infrared absorption layer is deposited on the film, and the micro-bridge structure's fabrication is completed (figure 8(e)), followed by the removal of the sacrificial layer to form a resonant cavity (figure 8(f)).
Sol-gel
The sol-gel method is an inexpensive and straightforward film preparation method. The necessary steps are as follows: first, organometallic compounds are synthesized into a sol in the liquid phase at low temperature, using inorganic materials or metal alkoxides as precursors. The sol is then applied to the substrate by dip coating, spin coating, spraying, brush coating, etc. Finally, after drying, sintering, fixing of the gradient composition, gel solidification, and heat treatment to remove organic matter, the film is obtained. The advantages are that the precursor can be purified beforehand, the sol-gel process can be carried out at room temperature, the preparation equipment, process, and operation are relatively simple and are not limited by the substrate shape and size, doping is easy, and films can be prepared over large areas. The disadvantages are that the refractive index gradient, size, and thickness of the prepared film are difficult to control, the process control requirements are high, and the film is prone to cracking and bubbles [41].
Pulsed laser deposition
PLD is a method in which a target is bombarded with a laser and the ablated material is then deposited on a substrate to obtain precipitates or films. The advantage is that the composition is easy to control, since the composition of the deposited film closely matches that of the target material; the process parameters can be adjusted freely, and there is no limit on the type of target. The disadvantage is that the uniformity of the deposited film is low and it cannot be deposited over a large substrate area, which limits the application of PLD to bolometers and infrared focal planes [42].
Atomic layer deposition
ALD is a method that deposits material layer by layer on the substrate surface as monatomic films. ALD is similar to ordinary chemical deposition; however, in ALD the chemical reaction forming a new atomic layer is directly coupled to the previous layer, so only one layer of atoms is deposited in each reaction cycle. With the development of science and technology, more and more applications will be found in the near future. According to the reaction principle and characteristics of this technology, many different materials can be deposited, including metals, oxides, carbides (as well as nitrides, sulfides, and silicides), various semiconductor materials, and superconducting materials [43].
Electron beam evaporation
EBE is a physical vapor deposition method. It uses a high-energy electron beam to bombard the target in a crucible precisely, converting the kinetic energy of the electrons into heat that melts the target material, which is then deposited on the substrate. Films with a compact structure and stable performance can be grown [44]. At the same time, the heated area of the target is small, heat radiation losses are reduced, and the thermal efficiency is high. In addition, EBE coating is more suitable for single-material coatings because most composite films decompose under high-energy electron bombardment [45].
Magnetron sputtering
MS is a kind of physical vapor deposition (PVD) widely used for the preparation of titanium oxide films. The main sputtering methods are RF sputtering, ion beam sputtering, and DC magnetron sputtering [46].
Usually, high-purity metallic titanium is used as the target material, and the substrate can be glass, c-Si, SiO 2 /Si, Maria glass, single-crystal sapphire, etc. The substrate surface is first cleaned by Ar+ sputtering. The substrate is placed at an angle of 30°∼60° from the target. The substrate heating temperature is generally 300°C∼550°C, and the vacuum is about 10⁻³ Pa. The chamber is typically filled with an inert gas, and titanium oxide films with different structures and performances can be obtained by changing the oxygen partial pressure and substrate temperature [22].
The thickness of films prepared by magnetron sputtering is easy to control and can be kept within the required range during preparation. The deposition rate of vacuum sputtering coating is governed by the working current; as long as the working current is strictly controlled, the deposition rate, and therefore the film thickness, can be well controlled. Films prepared by magnetron sputtering also show good repeatability. At the same time, almost all solid films can be prepared by sputtering, such as metal, alloy, dielectric, oxide, semiconductor, and insulator films; as long as a target can be made, the film can be prepared. In addition, compound films can be prepared by reactive sputtering from elemental targets [47,48].
The structure and properties of TiO 2−x films
In general, TiO 2−x films prepared at room temperature show no prominent diffraction peaks in the x-ray diffraction pattern, which means that they have an amorphous structure at room temperature. The composition and microstructure of the TiO 2−x films therefore depend mainly on the preparation conditions (figure 9(a)). For example, variation of the oxygen partial pressure affects the composition of the material, and a high reaction temperature usually induces a more ordered film structure.
TiO 2−x has a wide bandgap (3.2 eV for anatase, 3.0 eV for rutile). Mardare et al [49] studied the temperature dependence of the electrical conductivity and pointed out that above 300 K the measured conductivity of TiO 2−x films can be explained by a simple thermally activated conduction mechanism, whereas below 300 K conduction occurs by variable range hopping (VRH) of carriers between localized states, with a hopping activation energy much smaller than that of simple activated conduction. The electron transport mechanism in a-TiO 2−x obeys the Meyer-Neldel rule (MNR), an empirical relation in which the pre-exponential factor of the TiO 2−x films depends exponentially on the activation energy (figure 9(b)).
The equation is

σ 0 = σ a0 exp(E a /E MN ),

where σ a0 is the Meyer-Neldel pre-exponential factor and E MN is the characteristic energy; both are constants.
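As a small illustration of the exponential dependence expressed by this rule, the sketch below evaluates the pre-exponential factor for a few activation energies. The values of σ a0 and E MN are assumptions chosen only to show the trend, not fitted constants from [49].

```python
# Hedged sketch of the Meyer-Neldel rule: the conduction pre-exponential factor grows
# exponentially with activation energy; sigma_a0 and E_MN are assumed constants.
import math

sigma_a0 = 1e-3    # Meyer-Neldel pre-exponential factor, S/cm (assumed)
E_MN = 0.04        # Meyer-Neldel characteristic energy, eV (assumed)

for Ea in (0.15, 0.25, 0.35):                  # activation energies, eV
    sigma0 = sigma_a0 * math.exp(Ea / E_MN)    # pre-exponential factor for this Ea
    print(f"Ea = {Ea:.2f} eV -> sigma_0 = {sigma0:.2e} S/cm")
```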
The effect of preparation conditions for TiO 2−x

Deposition temperature
Deposition temperature is a vital deposition parameter that influences the bolometric properties of TiO 2−x films. The deposition rate of the film decreases with increasing temperature: when the sputtering temperature gradually increases, the kinetic energy of the sputtered particles arriving at the film surface becomes larger and their diffusion ability stronger, so the particles are harder to retain on the substrate, resulting in a relatively low deposition rate and a thinner film. We can conclude from table 2 that the TCR and activation energy (E a ) decreased as the deposition temperature increased from 25°C to 200°C due to increased resistivity. The sample deposited at 200°C has a low TCR but also a low 1/f noise parameter, indicating better bolometer performance for the TiO 2−x film deposited at 200°C.
The oxygen pressure and thermal annealing
Annealing is a metal heat treatment process in which the material is heated slowly to a specific temperature, held for a sufficient time, and then cooled at a reasonable rate [50]. Thermal annealing of TiO 2−x films mainly refines the grains and improves the structure of the films. The oxygen pressure affects the ratio of trivalent titanium (Ti 3+ ) to tetravalent titanium (Ti 4+ ) in the TiO 2−x films; as the oxygen pressure (pO 2 ) increases, the content of Ti 3+ decreases because of the reduction of oxygen vacancies. There are many oxygen vacancies in TiO 2−x films obtained by magnetron sputtering; the films are unstable in air and oxidize easily, and after annealing treatment the oxygen vacancies in the TiO 2−x film can be compensated. The structure and electrical properties of TiO 2−x films change significantly with annealing temperature. Therefore, the resistance and temperature coefficient of titanium oxide films can be changed by many orders of magnitude [8].
TiO 2−x films were deposited by RF reactive magnetron sputtering on 4-inch Si/SiO 2 substrates using pure titanium (99.99%) targets at different relative oxygen gas mass flow (R O2 ) levels (3.4%∼3.7%) in mixed-gas (Ar+O 2 ) atmospheres [8]. Oxygen and argon were used as the reactive and sputter gases, respectively, and were adjusted discretely by mass flow controllers. The deposition was performed at room temperature for 18 min at a process pressure of 2 mTorr and an RF power of 300 W. The sample thickness varied from 90.1, 89.4, 88.9, to 69.8 nm as R O2 increased from 3.4, 3.5, 3.6, to 3.7%, respectively [8]. To improve the bolometer performance, the samples were annealed at 300°C in air.
The effect of pO 2 on the surface morphology of TiO 2−x films was observed through field emission scanning electron microscope (FESEM) images. Figure 10(a) shows that the deposited film has dispersed grains at lower pO 2 . The surface of TiO 2−x films becomes dense and smooth with the increase of pO 2 . This may be due to the lower surface mobility of the particles deposited at higher pO 2 . The deposited film was air annealed at 300°C for one hour, as shown in figure 10(b). At lower pO 2 , the annealed films show refined grains. However, the films exhibit a relatively dense and smooth surface with no grain characteristics at higher pO 2 . Figure 11 shows the X-ray diffraction and Raman spectra of the TiO 2−x film sample after deposition and annealing. In figure 11(a), the deposited sample appears amorphous due to insufficient heat energy on the sample substrate. The annealed sample changes from amorphous to crystalline (rutile/anatase) [8]. In figure 11(b), no TiO 2−x bands were observed in the deposited samples, indicating that they had an amorphous structure, while the annealed samples showed anatase/rutile TiO 2−x bands [51].
Due to the oxygen vacancy decrease, the bandgap increases with the R O2 level, which leads to the number of defect states decreasing near the edge of the conduction band (figures 12(a) and (b)). The decrease of the bandgap in annealed samples may be due to the decrease of atomic spacing and crystallinity change [52]. It can be seen that annealing might reduce the material's bandgap because the crystallinity will increase when the atomic distance is reduced [53]. As shown in table 3, TCR increases with the R O2 because the oxygen vacancy can be offset with ascending R O2 , and the electron concentration will be reduced [54]. The TCR of annealed samples is slightly lower than that of as-deposited samples except for the samples with higher R O2 parameters. Annealed samples show lower 1/f noise parameters than deposited samples, as concluded from figures 12(c) and (d). Samples with higher R O2 parameters show higher resistivity, lower carrier density, and higher activation energy. Samples with high resistance usually have higher noise, while samples with low resistivity are more suitable for better device performance because 1/f noise is dominant at low frequency and depends on temperature, contact quality, surface treatment, and carrier density [55].
XRD and Raman results show that the TiO 2−x film exhibits better crystallinity, a narrower bandgap, and lower resistivity after annealing. Thus, TiO 2−x after thermal annealing has the advantages of low resistivity, low 1/f noise, and high bolometric parameters, which ensure effective thermal radiation measurement performance.
Nb doping
During the preparation of TiO 2−x films, titanium is very sensitive to oxygen: even if the amount of pO 2 in the reaction chamber changes slightly, the resistivity of the TiO 2−x film changes greatly [23]. The oxygen flow in the reaction chamber must therefore be carefully controlled to obtain a given film resistivity, and it is a challenge to ensure high accuracy in this process. To control the resistivity of the TiO 2−x film, researchers have tried to incorporate Nb metal atoms into the TiO 2−x film. Nb atoms act as an extrinsic donor in the TiO 2−x films: the Nb 5+ state replaces the Ti 4+ state because of their similar ionic radii, which reduces the oxygen sensitivity and resistivity of the material [56,57]. Before each deposition step, the target is pre-sputtered in an Ar environment for 15 min to remove the oxide layer or any other possible surface contamination. In a mixed-gas atmosphere, Nb-doped titanium is used as the metal target, and doped titanium oxide films are prepared on 4-inch SiO 2 /Si substrates using RF reactive magnetron sputtering. The target was placed in the sputtering chamber at a distance of 140 mm from the substrate, inclined at 45°. The pressure in the sputtering chamber was kept below 3 × 10⁻⁷ Torr. The films were deposited at room temperature at a constant RF power of 300 W for 14 min. With a total flow rate of 50 sccm, the O 2 /(O 2 +Ar) ratio in the chamber was varied from 4.3% to 4.8% [25].
When pO 2 reaches 4.7%, the TiO 2−x film shows a stable state, and the deposition rate decreases significantly when pO 2 rises to 4.8%. When the reaction rate of the titanium target with oxygen is greater than the sputtering rate, the target surface is completely oxidized and the deposition mode changes from the metal mode to the oxide mode [25]. A metallic titanium peak is observed in figure 13(a) when the pO 2 level is 4.3%, whereas no rutile or anatase crystal structure appears when the pO 2 level is higher than 4.3%, because enough oxygen reacts with the titanium atoms. The resistivity and TCR of the Nb-doped TiO 2−x film are 0.05 Ω·cm and 1.88%/K, which are lower than those of the other films [25]. The resistivity of the Nb-doped and pure TiO 2−x samples varies from 0.39 to 2.48 Ω·cm and from 0.82 to 42.65 Ω·cm, respectively, with increasing pO 2 . The resistivity of Nb-doped TiO 2−x films is much lower than that of pure TiO 2−x films; as the pO 2 level increases, the oxygen vacancies and the carrier concentration decrease, which leads to the increase in resistivity [58]. The Nb concentration in Nb-doped TiO 2−x films does not change with the pO 2 level, so their resistivity does not increase significantly.
As shown in figure 13(b), as the number of oxygen vacancies decreases, the activation energy of carrier hopping increases. Compared with the pure titanium oxide film, the Nb-doped TiO 2−x film has a higher TCR value.
The 1/f noise voltage spectral density was measured over the range 1∼1000 Hz. As displayed in figure 13(c), the resistivity of the thin film has a significant influence on the value of the 1/f noise parameter: when Nb doping reduces the resistivity of the TiO 2−x film, a lower 1/f noise parameter is obtained.
The XRD results show that the amorphous phase is formed when the content of pO 2 is between 4.4% and 4.7%. XPS analysis confirms that the increase of resistivity is due to increased pO 2 and the decrease of oxygen vacancies. The resistivity of Nb-doped TiO 2−x films decreases significantly. Nb-doped TiO 2−x films have lower resistivity, lower 1/f noise parameter, higher TCR value, and better radiation calorimetry characteristics.
Thermal stability of Nb-TiO 2−x
Infrared equipment can encounter high-temperature environments in use, in which the infrared detector may exhibit a ghost-image phenomenon, so the burn-in effect at high temperatures must be reduced. Franken et al [59] reduced the resistance-variation characteristics of a VO x -based uncooled microbolometer sample with an annealing treatment. They annealed the samples at a high current to remove the ghost image, but the resistivity of the annealed samples decreased. Therefore, increasing the current for annealing is not a fundamental solution to the ghost-image problem [60].
TiO 2−x has made significant progress in device manufacturing and bolometric performance [8,23,61]. TiO 2−x is highly sensitive to oxygen, and the oxygen vacancies formed in TiO 2−x during preparation make the resistivity challenging to adjust. To control the resistivity of TiO 2−x effectively, Nb doping is a common way to regulate its electrical properties [62,63]: Nb ions substitute on Ti lattice sites and reduce the number of oxygen vacancies. High-temperature annealing was carried out in an oxygen environment to further reduce the oxygen vacancies in Nb:TiO 2−x [32,64], since oxygen atoms can diffuse into the film at high temperature and occupy oxygen vacancies [65][66][67]. The diffusion mechanism of oxygen during annealing in an oxygen atmosphere is shown in figure 14(a); it is clearly seen that oxygen vacancies are compensated by diffusing oxygen atoms during high-temperature annealing [60]. Figure 14(b) shows the XRD patterns of the as-deposited and annealed samples. The absence of prominent diffraction peaks proves the amorphous structure of the as-deposited film, whereas clear polycrystalline (101) and (200) orientations of the rutile TiO 2−x phase were observed for the film annealed in an oxygen atmosphere. There are no Nb 2 O 5 peaks in the XRD pattern, indicating that Nb was completely dissolved in TiO 2−x during deposition. The radii of Nb 5+ and Ti 4+ ions are 70 pm and 68 pm, respectively, so Nb 5+ ions can easily occupy Ti lattice positions and form a stable solid solution.
As shown in figure 14(c), the sample deposited at room temperature does not show the Raman spectra of TiO 2−x , indicating that the film is amorphous. In the annealed sample, the Raman spectra of 230 cm −1 (broadband), 436 cm −1 (E g ), and 614 cm −1 (A1 g ) are observed, which correspond to rutile structure [68,69].
The strong photoelectron peaks of Ti, O, and Nb can be seen in figure 15(a). As shown in figure 15(b), the relative area of each sub-peak in the annealed sample decreases with the reduction of oxygen vacancies. The ratio of O a to O s in the samples after annealing decreases slightly, as shown in figure 15(c), indicating that the oxygen vacancy concentration decreases after annealing in an oxygen atmosphere [70]. The relative peak area of the O s spectrum in the annealed sample increases slightly, which indicates that oxygen atoms can diffuse into the film during annealing and reduce the oxygen vacancy concentration [71]. No additional peak was observed in figure 15(d) at the lower binding energy side of Nb 3d 5/2. When annealed in an oxygen-rich environment, external oxygen atoms diffuse into the film and occupy the oxygen vacancies. In addition, thermal stability tests at different exposure temperatures show that samples with fewer oxygen vacancies have higher thermal stability. Although the bolometric performance of the annealed samples decreased slightly, their thermal stability was significantly improved compared with the as-deposited samples. This type of oxygen-vacancy compensation method opens a new opportunity to enhance the thermal stability of titanium oxide films and of other oxide materials annealed in an oxygen atmosphere [60].
Atomic layer deposition
ALD is a deposition technique in which different precursors are introduced in alternation, separated by intermittent evacuation or purging. The method has the great advantage of self-limited growth, which makes it possible to deposit uniform films over large areas with monolayer control of the thickness. Nanofilms prepared by ALD have good thermal conductivity and near-ideal optical properties, which greatly improves bolometer performance [72].
At present, there are few reports characterizing TiO 2−x films by TCR; such films are mainly formed by RF reactive magnetron sputtering and DC sputtering deposition [26]. Kwon et al [29] studied reactively sputtered TiO 2−x thin films and obtained TCR values up to 2.8%/K. Reddy et al [8,23] grew TiO 2−x films under the same conditions [50]. The substrate was fixed in the deposition chamber after cleaning, and the ALD process was started. Due to the low deposition temperature, it is necessary to extend the purging time to improve the film quality. Tetrakis(dimethylamino)titanium(IV) (TDMAT) was used as the titanium precursor, and milli-Q water (H 2 O) was used as the oxygen precursor. The TDMAT precursor was kept at 75°C, and within one cycle the TDMAT pulse, N 2 purge, H 2 O pulse, and N 2 purge lasted 100 ms, 1 min, 15 ms, and 1 min, respectively. The deposition rate of the TiO 2−x film is 0.4 Å/cycle. N 2 was used as the carrier gas with a flow of 20 sccm. The samples were deposited at temperatures of 150, 200, and 250°C, respectively, then annealed at different temperatures (300, 330, 475, 550 and 600°C) chosen on the basis of thermogravimetric analysis (TGA); annealing was carried out in a conventional furnace in air for 1 h [26]. TCR measurements use temperature control during the heating phase, where the temperature varies between 15°C and 40°C, while the voltage on the resistor is recorded by applying a current of 1 to 10 μA. Noise measurements are made by applying a 3 μA current to the resistor and measuring the voltage across it with the help of an amplifier and a dynamic signal analyzer. After the measurement, the noise power spectral density of the resistance is obtained, and the corner frequency of the 1/f noise is calculated [26]. Figure 16(a) shows the grazing-incidence X-ray diffraction patterns of the deposited TiO 2−x films annealed at different temperatures. It can be deduced from figure 16(a) that the as-deposited TiO 2−x film is amorphous, and anatase appears when the temperature rises above 300°C. Both the intensity of the anatase peaks and the crystallinity of the thin film increase with annealing temperature. The film annealed at 600°C shows low-intensity diffraction of the rutile phase. According to TGA and XPS analysis, the phase transition from anatase to rutile appears to occur at 475°C. Hanaor et al [73] report that the onset temperature of the thermally activated anatase-to-rutile transition depends on experimental parameters such as the deposition method, deposition temperature, and substrate [26]. As shown in table 5, the crystal structure becomes more ordered after annealing, reducing the oxygen vacancies and the resistivity. It can be seen that the deposition temperature has little influence on the resistivity.
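As an illustration of how a TCR value can be extracted from such resistance-temperature measurements, the sketch below fits ln(R) against T over roughly the 15°C–35°C range mentioned above. The resistance values are synthetic and merely mimic an NTC film; they are not measured data from [26].

```python
# Minimal sketch of extracting TCR from a resistance-temperature curve by fitting
# ln(R) against T; the resistance values below are synthetic, not measured data.
import numpy as np

T = np.array([288.0, 293.0, 298.0, 303.0, 308.0])    # temperatures, K (about 15-35 C)
R = 1e5 * np.exp(-0.025 * (T - 298.0))                # synthetic NTC resistance, Ohm

slope, _ = np.polyfit(T, np.log(R), 1)    # d(lnR)/dT = (1/R)(dR/dT) = TCR
print(f"TCR = {slope * 100:.2f} %/K")     # about -2.5 %/K for this synthetic curve
```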
As shown in figure 17(a), the thin film's resistance changes significantly with growth temperature, and the results show that the TCR value of the film is strongly dependent on temperature. The 1/f-noise corner frequencies of the TiO 2−x films annealed at 300 and 475°C are 1.8 and 1.2 kHz, respectively, which are consistent with the corner frequencies of most microbolometer materials, as shown in figure 17(b). Due to the increase in crystallinity and the decrease in defects, the flicker (1/f) noise is lower at higher annealing temperatures [26].
As shown in table 6, the maximum TCR values of the TiO 2−x films are obtained between 20 and 30°C. By controlling the annealing temperature, a higher TCR value can be obtained. The samples annealed at low temperature (300°C and 330°C) have mixed phases (anatase and rutile), which can produce metastable films, while the properties of the samples annealed at high temperature (475°C and above) change with the annealing temperature. The results show that the highest TCR value, for the TiO 2 film grown at 150°C and annealed at 300°C, is −9%/K, which is much higher in magnitude than that of the active layer used in commercial microbolometers [26].
In summary, compared with the films without annealing, the films deposited and annealed at different temperatures have higher TCR values (up to −9%/K), and the results of the electrical noise tests also verify that annealing is a feasible method to improve the properties of the films [26].
Microwave plasma etching
At present, methods for preparing dense, uniform and high-quality films are very mature; the open question is whether one or more post-deposition optimization methods can quickly treat the prepared TiO 2−x film so that the resistance of the titanium oxide film changes and the TCR value increases. Qiming et al [78] prepared 300 nm titanium oxide thin films by EBE. The EBE method is well suited to evaporating high-melting-point materials, and the high flux density produced by electron-beam heating allows the evaporation rate to be increased to a certain extent. In microwave plasma etching (MPE), active species generated under high voltage and low-power RF interact with atoms on the crystal surface. The 300 nm thick TiO 2−x films were etched by microwave plasma for 2∼6 min in an oxygen and argon atmosphere. The resistance of the titanium oxide film after 4 min of MPE shows a linearly decreasing trend, and its TCR magnitude increases from −0.32429%/K to −1.65751%/K [78]. Figure 18(a) shows the XRD analysis of TiO 2−x after MPE for different times. EBE coating is a purely physical film-preparation method; the prepared film has high purity, and the film quality is well controlled. With the increase of microwave plasma etching time from 2 min to 6 min, two obvious diffraction peaks appear at 25.281° and 27.446°, corresponding to the anatase (101) and rutile (110) reflections of TiO 2−x . In figure 18(b), three Raman peaks appear at 144 cm −1 , 448 cm −1 and 608 cm −1 , and they become more pronounced with increasing MPE time. The results show that the crystal form of the TiO 2−x film after MPE changes from amorphous to crystalline, and the crystal quality is also improved [78].
In figures 19(a) and (c), the obvious binding-energy peaks of Ti at 464.3 ± 0.2 eV and 458.5 ± 0.2 eV correspond to the Ti 2p 1/2 and Ti 2p 3/2 orbitals, respectively [74,79]. After MPE for 4 min, the Ti 3+ peak increased relatively. In figures 19(b) and (d), the O S peak decreased significantly in the TiO 2−x film after MPE. From the increase of the Ti 3+ peak and the decrease of the O S peak, it can be concluded that oxygen vacancies are present in the TiO 2−x film [59,78].
We can see from figure 20 that with increasing MPE time, the surface of the TiO 2−x film becomes rough due to the impact of the microwave plasma. At the same time, hydrogen, as a reaction gas, may react with oxygen in the film [78]. With increasing temperature, the resistance of the films decreased, confirming that MPE can change the conductivity of the TiO 2−x film. There is a strictly linear relationship between resistance and temperature for the TiO 2−x films treated by MPE for 4 and 5 min, which is favorable for bolometer applications. The temperature dependence of the resistance of the 4 min TiO 2−x film is greater than that of the 5 min film, giving better thermal sensitivity. The TCR value of the TiO 2−x film after MPE for 4 min is calculated to be −1.65751%/K [78], as shown in figure 21. In this work, the TiO 2−x film was prepared by EBE and optimized by MPE; XRD, Raman, XPS, SEM and other analysis methods were used for characterization, and it was concluded that MPE can effectively change the characteristics of the TiO 2−x film. Finally, the preparation and optimization methods of TiO 2−x films are summarized in table 7.
Summary and conclusion
This paper has discussed the bolometer's main performance parameters and manufacturing technology, which can guide the design of bolometers with better performance. We also explored the application of TiO 2−x films in bolometers and summarized the methods reported to date for improving the bolometric properties of TiO 2−x films.
TiO 2−x is a promising candidate heat-sensitive material for bolometers. To achieve high bolometer performance, we have reviewed the effect of the deposition process on the film structure, composition, and electrical properties of this material, such as resistivity, TCR, and activation energy. These factors are crucial for the detectivity of thermal IR detectors. The TCR of TiO 2−x films can reach −3.6%/K with a relatively low 1/f-noise parameter. A titanium oxide-based uncooled bolometer with a 12 μm pixel pitch has been fabricated and has shown good infrared imaging performance.
The performance and resolution of bolometers have improved significantly in the past five years. Bolometer performance is approaching the theoretical limit, and the gap to photon detectors is narrowing. As an uncooled technology, the bolometer has advantages in weight, power consumption, and cost, so its production volume exceeds that of all other infrared array technologies. Bolometer arrays have become the technology of choice for low-cost infrared imaging systems used in civilian and military applications. Future development will move toward bolometers with smaller pixels. At the same time, titanium oxide has good thermal stability and is compatible with the CMOS process, and these advantages will make it a focus of bolometer material research.
This review shows that TiO 2−x films can be used effectively as the heat-sensitive material of a bolometer and that they have good thermosensitive properties. In the future, researchers may be able to select thermal materials according to external factors such as the application scenario and temperature range. At the same time, choosing appropriate preparation methods, exploring suitable experimental parameters, and developing effective film optimization methods while ensuring film uniformity and quality will be the way to advance the application of TiO 2−x in bolometers. Bolometers using TiO 2−x films as the thermosensitive material have good compatibility with CMOS. TiO 2−x has various phases in nature, and its oxygen content can be tuned flexibly by external means. TiO 2−x is chemically stable, non-toxic, and has good photocatalytic activity. Therefore, TiO 2−x deserves further exploration as a thermistor material.
ASSESSMENT OF THE QUALITY OF EDUCATIONAL ACTIVITIES IN THE CONTEXT OF DIGITAL TRANSFORMATION
The relevance of studying how to assess the quality of educational activities stems from the digital transformation of education, the increasing requirements for the quality of training, and the increasingly complex conditions for organizing the educational process, which together create the need to search for new approaches and technologies. Assessment of the quality of educational activities under digital transformation requires the creation of the necessary infrastructure and the formation of an appropriate regulatory, educational, and methodological base. In considering this problem, methods of systematization and generalization were used. The practical significance of the research results lies in the fact that the conclusions drawn and the proposed scientific and theoretical provisions can be useful to the heads of educational organizations and can be adapted to the conditions of the educational process.
Introduction
Quality education today is one of the goals of the educational policy of the Russian Federation. Digital technologies are rapidly spreading and updating, opening up unlimited opportunities for access to digital tools, materials, and services. Students and teachers gain extensive control over their information space and the prospects for its joint use. Their opportunities for self- and mutual control, and for forming an interest in academic work, have expanded.
Russian President V. V. Putin notes the need for effective use of educational and other infrastructure, as well as the capabilities of modern technologies (RUSSIA, 2020). In addition, in the Address to the Federal Assembly for 2021, attention was focused on the transition to the digital transformation of the national school, as well as on the need to update the teaching and laboratory base and training programs of educational institutions of higher education (Message of the President to the Federal Assembly, 2021).
Thus, the digital transformation of education is a direction of work for a long period that affects all levels and all subjects of education and involves updating approaches to assessing the quality of educational activities.
Literature review
The study of scientific approaches to the definition of the essence of the "quality of education" and "quality of educational activity" concepts has shown that the research reflects several points of view on these concepts. Different definitions are given to them in legislative acts.
The law "On education in the Russian Federation" defines "quality of education" as "a complex characteristic of educational activities and training of a student, expressing the degree of their compliance with federal state educational standards, federal-state requirements and (or) the needs of an individual or legal entity in whose interests educational activities are carried out, including the degree of achievement of the planned results of the educational program".
The assessment of the quality of education implies not only an assessment of the quality of educational results of students but also an assessment of the quality of educational activities and educational programs. Educational activities in the Law "On Education in the Russian Federation" are defined as activities for the implementation of educational programs and are carried out by educational organizations and, in the cases established by this Federal Law, by organizations providing training, as well as individual entrepreneurs.
The educational activity of the university is a complex structured system and its quality should be determined by the totality of all processes: the development of educational programs; pre-university training; selection of applicants; educational and methodological work; educational process; employment of graduates, as well as their support by providing personnel management processes, document management, financial activities etc. (RUDENKO, 2008).
One of the main aspects of quality assessment is the compliance of the results of educational activities with the existing and prospective needs of their direct consumers: students who expect to successfully find a job or continue their education at the next level after completing their studies. It is also important to evaluate quality directly within the educational process, since assessing quality only "at the output" (for example, at the stage of the state final certification) increases the percentage of "defects" and does not allow the current situation to be corrected.
An independent assessment of the quality of education is an evaluation procedure based on information about the educational activities of organizations engaged in educational activities. An independent assessment of the quality of education has been represented by an external and internal assessment.
External evaluation is carried out by public experts (public accreditation, including professional and public and international accreditation).
The internal assessment is carried out directly by the university itself, and each educational organization has its internal system of education quality (KIRYUSHKIN, 2020).
Professional and public assessment of the quality of professional education programs is the recognition of the quality and level of training of future specialists who have mastered this educational program in a specific organization that carries out educational activities that meet the requirements of professional standards, the requirements of the labor market for specialists, employees and workers of the corresponding profile.
The Russian higher education system has a more developed external quality assessment focused on standards and performance indicators. The main elements of this system are standardization and licensing, certification and accreditation procedures, as well as a comprehensive assessment of educational institutions as a whole and individual specialties based on a rating system. All these procedures include conducting an internal audit. The basis for an objective assessment of the quality of education is the federal state educational standards and federal state requirements, as well as educational standards established by universities.
The study of this question shows that the following main approaches are used to understand the essence of the concept of quality as applied to the quality of education:
− the traditional approach, in which the quality of education is considered as compliance with the Federal State Educational Standard and/or the needs of an individual or legal entity in whose interests educational activities are carried out;
− an effective (result-oriented) approach, which evaluates the correspondence between various parameters in assessing the result of a particular person's education (the quality of knowledge, the level of competence formation).
Thus, we can conclude that these two approaches are focused on evaluating different indicators: educational activity is evaluated with the traditional approach, and its result is evaluated with the effective approach.
Therewith, in European and Russian educational practice, the point of view of competence as a category that is primarily understandable to the employer and characterizes the professional activity of a student after graduation, directly at the workplace, is increasingly spreading. For the formation of the competencies required by employers in university graduates, it is important to ensure not only the quality of the result of the educational program but also how it was obtained, i.e., the quality of educational activities (KURKINA, 2017).
Methods
When studying the topic, a set of methods was used: a systematic analysis of scientific and methodological works, generalization of experience, observation, analysis of educational activities of universities, which allow considering this problem considering many factors that affect the assessment of the quality of educational activities in the conditions of digital transformation. The analysis of the main approaches to the process of assessing the quality of educational activities using the electronic information and educational environment implemented by universities has been carried out.
At the initial stage of the study, the main theoretical and methodological grounds for determining the problem have been considered. Further, the essence of the digital transformation of education has been revealed, the factors influencing this process have been indicated, the main approaches to assessing the quality of educational activities using the electronic information and educational environment, implemented by universities, have been highlighted. At the final stage, the promising directions of the university's activities to improve the effectiveness of assessing the quality of educational activities in the conditions of digital transformation have been determined.
Results
The digital transformation of education is an update of the planned educational results, the content of education, the methods and organizational forms of educational work, and the assessment of the achieved results in a dynamically changing digital environment, aimed at a fundamental improvement of the educational results of each student. The main goal is to unite the following in the educational process: the learner's mastery of the defined content; the students' achievement of the selected goals; and support and development of the learner's ability to learn, including the formation of their educational independence.
In a broad interpretation, "digital transformation" as a concept is considered in three contexts:
− Internal factors are caused by processes that are largely developing within the framework of the education system and are associated with problems within the education system, its ability to respond to social requests, to perceive and master new technologies, and to use tools for working with information to solve urgent problems. These factors are characterized by: the existing scientific and methodological groundwork in the development and use of all types of digital educational resources; the achieved level of professional training of teachers in digital literacy; and the flexibility of the management system, its readiness for change, its ability to recognize and master new things, to disseminate effective organizational forms and methods of conducting educational activities, and to improve the methods of management of an educational organization.
In practice, the following approaches to the process of assessing the quality of educational activities using the electronic information and educational environment, implemented by universities, are most often used (Table 1).
Table 1 – Approaches to assessing the quality of educational activities using the electronic information and educational environment (EIEE):
− Synchronous approach: the simultaneous presence of teachers and students in the EIEE; the procedures for completing tasks and evaluating them are carried out simultaneously. Tools: Zoom, cloud services, the university's online learning platforms.
− Asynchronous approach: the possibility of delayed assessment without the simultaneous presence of subjects of the educational process in the EIEE. Tools: electronic systems based on MOODLE, 1C University, etc.
− Combinatorial approach: a combination of procedures involving the simultaneous presence of subjects of the educational process in the EIEE and the possibility of delayed assessment.

Employers and students, acting as consumers of educational services, evaluate the quality of the university's activities and the quality of training according to other indicators and criteria. The graduate considers a high-quality education to be one that allows him/her to be a competitive specialist, get a high-paying job in the profile of training, and successfully build his/her career in the future.
Employers are primarily interested in hiring young specialists with a high level of professional knowledge, capable of taking responsibility for the results of their professional activities, working in a team, solving non-standard tasks, and navigating the production environment, possessing leadership qualities, capable of creativity and continuous professional growth.
Therefore, in the future, the analysis of public opinion in social networks will be used to assess the quality of universities' educational activities. Studying public opinion and the factors influencing it will make it possible to identify problem areas and, on this basis, to improve the quality of educational activities.
Meanwhile, it is necessary to identify some problems in the digital transformation of assessing the quality of educational activities, which in general can be divided into content and organizational ones.
The content aspect includes the following:
− the incompleteness and one-sidedness of the evaluation tools and, as a result, the difficulty of a comprehensive assessment of the results;
− inadequate efficiency of the systematic content filling of the EIEE;
− lack of EIEE resources for timely and qualitative assessment of the results of educational activities;
− evaluation funds that do not involve the use of the EIEE;
− a large number of tasks for evaluation.
The organizational aspect includes the following:
− unjustified transfer of traditional assessment technologies to an online format;
− low involvement of students in the assessment process;
− lack of feedback, which prevents an adequate assessment of the quality of assimilation of the material;
− inability to track the immediate reaction of students;
− formalism in monitoring and evaluating students' activity in the classroom.
As a measure that contributes to the effective solution of these problems, it is possible to continue developing a comprehensive roadmap for assessing the quality of educational activities of the university in the context of digital transformation, which will ensure the adaptation of the university to the challenges of the future.
Conclusion
Thus, the conducted research allows drawing the conclusions proposed below.
One of the important tasks for universities in the context of digital transformation is to determine the effectiveness of the digital educational environment, to check the validity of changes in the content and organization of the assessment of the quality of educational activities. The digital transformation of education should contribute to solving such problems of effective assessment of the quality of educational activities as the development of a unified set of digital solutions and formats of evaluation funds, unified platforms or requirements for the compatibility of individual services, as well as the boundaries of copyright protection.
The following can be identified as promising areas of the university's activities to improve the effectiveness of assessing the quality of educational activities in the context of digital transformation:
− improving the necessary infrastructure;
− selection of variable pedagogical technologies for monitoring educational results;
− improving the efficiency of assessment with the preservation of students' digital data;
− increasing the involvement of the main subjects of education in the assessment of the quality of educational activities and training;
− improvement of the regulatory framework and organizational procedures for assessing the quality of educational activities and training.
Effect of tardive dyskinesia on quality of life in patients with bipolar disorder, major depressive disorder, and schizophrenia
Purpose Tardive dyskinesia (TD) is a common but serious hyperkinetic movement disorder and side effect of antipsychotic medications used to treat bipolar disorder (BD), major depressive disorder (MDD), and schizophrenia (SZ). The purpose of this study was to evaluate health-related quality of life (HRQoL) in a population with diagnoses for BD, MDD, or SZ by comparing patients with TD (n = 197) with those without TD (n = 219). HRQoL in each group was also compared with HRQoL of the general population. Methods This study employed a cross-sectional web-based survey. HRQoL was assessed by four instruments: the SF-12 Health Survey, Version 2 (SF-12v2), the Quality of Life Enjoyment and Satisfaction Questionnaire, Short Form (Q-LES-Q-SF), the Social Withdrawal subscale of the Internalized Stigma of Mental Illness Scale (SW-ISMI); and two questions on movement disorders. Results Patients with TD had significantly worse HRQoL and social withdrawal than those without. The differences were more pronounced for physical HRQoL domains than for mental health domains. Patients with more-severe TD, assessed through either self-rating or clinician rating, experienced significantly worse HRQoL than did those with less-severe TD. The impact of TD was substantially greater in patients with SZ than in those with BD or MDD. Compared with the general population, patients with BD, MDD, or SZ experienced significantly worse HRQoL regardless of TD status, although this deficit in HRQoL was greater among those with TD. Conclusions The presence of TD is associated with worse HRQoL and social withdrawal. The most severe impact of TD is on physical aspects of patients’ HRQoL.
Symptoms of TD are characterized by involuntary and repetitive movements that most commonly affect the face, mouth, and tongue, but can also manifest in the extremities [6,7]. These can range from mild to severe, with involuntary movements being localized or widespread [2,5]. The severity of TD symptoms can be assessed using clinician-reported outcome assessments such as the Abnormal Involuntary Movement Scale (AIMS) [8]; however, AIMS alone does not sufficiently capture the full impact of TD on a patient's health-related quality of life (HRQoL).
It is difficult to rate the impact of TD on a patient's HRQoL by rating the severity of abnormal movements alone, because even subtle involuntary movements in the facial area can have substantial negative social and emotional impacts. For those with TD, social and emotional impacts have been described as the most debilitating aspects of living with the condition [9]. For instance, involuntary movements associated with the typical orobucco-lingual dyskinesia such as frowning, tongue twisting or thrusting, and lip smacking and puckering can cause difficulty for the patients in fully participating in their communities or in maintaining employment [2,10]. A recent study reported that many of the most common impacts of TD were social and emotional in nature, including unwanted social attention, feeling embarrassed, and social isolation [11].
The prevalence of TD has been estimated to increase over the coming decade [12]. This is due in part to the increased incidence of early onset schizophrenia, as well as the expanded use of antipsychotics in broader patient population [13][14][15]. Despite this increasing prevalence, the effect of TD on patients' HRQoL has not been thoroughly investigated. For individuals with schizophrenia, previous studies have shown that those who have TD have higher mortality rates and poorer quality of life than those without TD; however, these studies had a relatively small sample size and included only patients with schizophrenia [16,17]. Very little research is available on the impacts of TD in non-schizophrenia populations, such as those with BD or MDD. These research gaps indicate the need to more fully explore the relationship between TD and HRQoL, particularly among a sample of patients with respect to varying psychiatric diagnoses.
As such, the current study was undertaken to better understand the impact of TD on HRQoL and social withdrawal among individuals also diagnosed with BD, MDD, or SZ. To our knowledge, this is the first study to describe HRQoL in patients with TD across multiple psychiatric conditions.
Study design
This study utilized a cross-sectional web-based survey of adult psychiatric patients with clinician-confirmed diagnosis of TD or clinician-confirmed absence of TD. This study was exploratory and quasi-experimental in approach, as assignment to condition was determined by existing subject characteristics, and no intervention was applied.
Recruitment of patients was performed using a stratified sampling approach such that age, gender, and primary diagnosis (BD, MDD, or SZ) would be similar for TD and non-TD groups. Patients were recruited for the study from a variety of sources, including pre-existing participant databases, patient advocacy groups, and clinicians.
Patient selection
Participants were recruited by MedQuest Global Marketing Research, a research firm that identifies patients that meet pre-determined study inclusion criteria. MedQuest utilized e-mail lists provided by patient advocacy/support groups, and relationships with doctors/clinicians to recruit and enroll participants.
MedQuest asked that interested groups or clinics contact patients about the study. Potential participants contacted MedQuest, through which arrangements were made to complete an informed consent form (ICF), Authorization to Use and Disclose Protected Health Information for Research, patient screening form, and clinician screening form. The patient screening forms included items regarding psychiatric diagnoses, TD status, and demographic questions. The clinician screening form included items about patients' psychiatric diagnoses and severity, TD status and severity (if applicable), and medical exclusion criteria (described below). Patients were instructed to bring the form to their current treater, who completed the form especially for participation in this study. If the participant met all of the eligibility criteria (described below), they were e-mailed a link to the survey.
Patients were eligible for inclusion if they had a clinician-confirmed diagnosis of BD, MDD, or SZ, were at least 18 years of age, were willing and able to provide informed consent, and were able to complete the survey online and in English. Patients were excluded from participation in the survey if they had a clinician-confirmed history of traumatic brain injury, debilitating stroke, or the presence of neurodegenerative disease (i.e. Parkinson's disease, Alzheimer's disease, or amyotrophic lateral sclerosis), as these conditions may include symptoms that are similar to TD, or if they were currently pregnant or had been pregnant in the previous 6 months, as there was concern regarding the potential for changes in patient medication regimens.
Upon completion of the survey, patients received a $75 honorarium in exchange for their time and effort. The survey opened on May 11, 2017 and closed on December 12, 2017 after the recruitment goal of 450 subjects was reached. All participants resided in the United States. This study was approved by an IRB and all participants gave consent prior to participation.
Assessment of TD
TD status and severity were captured by two items on the clinician screening form. The first asked the clinician: "Does the patient currently have tardive dyskinesia (TD)?" and had response options of "yes" or "no." The second, answered only for patients who had been diagnosed with TD, inquired as to the status of the patient's TD, and read: "Please rate the severity of the patient's TD (check only one)" and had response options of "mild," "moderate," or "severe."
Survey measures for HRQoL and social withdrawal
The survey was constructed after a review of literature describing instruments used to measure HRQoL and social withdrawal in patients with BD, MDD, and SZ. An evaluation of the psychometric properties of each of the identified instruments informed the ultimate selection of instruments for the study. The set of instruments comprising the final survey included a generic instrument, the SF-12v2 ® Health Survey (SF-12v2) [18]; a psychiatric-specific measure of HRQoL, the Quality of Life Enjoyment and Satisfaction Questionnaire, Short Form (Q-LES-Q-SF) [19]; and a measure of social withdrawal due to self-perceived stigma, the Social Withdrawal subscale of the Internalized Stigma of Mental Illness Scale (SW-ISMI) [20].
HRQoL: SF-12v2 Health Survey (SF-12v2)
The SF-12v2, a 12-item, self-reported general health questionnaire, was used to measure functioning and well-being. The psychometric properties, including reliability, of the SF12v2 are well established across numerous health conditions and diseases, including psychiatric populations [18]. The SF-12v2 assesses eight health domains: physical function (PF), role limitations due to physical problems (RP), bodily pain (BP), general health (GH), vitality (VT), social function (SF), role limitations due to emotional problems (RE), and mental health (MH). Scores are summarized in two overall health components: the physical health component (PCS, mainly reflecting the PF, RP, BP, and GH domains), and the mental health component (MCS, mainly reflecting the MH, RE, SF, and VT domains). All scores, including MCS and PCS, have a mean of 50 and a standard deviation of 10 in the general population of US adults, with a higher score indicating better functioning and well-being [21].
The SF-12v2 uses two items to assess PF. However, considering the physical manifestations of TD, the PF domain was assessed using a longer scale, the 10 PF items from the SF-36v2 survey, the parent survey of the SF-12v2 [22].
HRQoL: Quality of Life Enjoyment and Satisfaction Questionnaire Short Form (Q-LES-Q-SF)
The Q-LES-Q-SF, a 16-item survey, has proven to be suitable for assessing HRQoL in psychiatric populations [19].
The Q-LES-Q-SF has been shown to have good psychometric properties, including good internal consistency and test-retest reliability [19]. The first 14 items of the Q-LES-Q-SF contribute a single overall HRQoL score that ranges from 0 to 1, with scores closer to 1 indicating better HRQoL. The last two of the 16 items, measuring satisfaction with medication and overall life satisfaction, respectively, are stand-alone items not included in the summary score.
Social Withdrawal subscale of the Internalized Stigma of Mental Illness scale (SW-ISMI)
The SW-ISMI, a six-item short form of the 29-item ISMI survey, was used to measure the exacerbation, due to TD, of the effect of mental illness stigma on social functioning [20]. The full survey comprises five subscales: alienation, stereotype endorsement, discrimination experience, social withdrawal, and stigma resistance; however, participants in the current study completed only the social withdrawal scale, which has shown good internal consistency and test-retest reliability [20]. The final score, which can have a maximum value of 4 for severe social withdrawal, is the sum of all item values divided by the total number of answered items.
Group differences by TD status and diagnosis
One-way analyses of covariance (ANCOVAs), which included covariates (age and gender) to account for variation due to demographic variables, were performed for each domain to compare SF-12v2, Q-LES-Q-SF, and SW-ISMI scores between patients with and without TD. This type of ANCOVA was conducted for the pooled group of patients with BD, MDD, and SZ, and for each diagnosis type alone.
Group differences by clinician-rated TD severity and diagnosis
When clinicians rated patients' TD severity on a 3-point scale (mild, moderate, or severe), very few patients were rated as having severe TD (4.1%). Thus, patients who were rated as having moderate or severe TD were collapsed to create one group, described as moderate/severe TD. Subsequently, one-way ANCOVAs, which included demographic covariates (age and gender), were performed for each domain to compare SF-12v2, Q-LES-Q-SF, and SW-ISMI scores between patients with no TD, mild TD, or moderate/severe TD. This type of ANCOVA was conducted for the pooled group of patients with BD, MDD, and SZ, and for each diagnosis type alone.
Burden of disease analyses
To evaluate the health status burden associated with TD, one-way ANCOVAs were performed to compare the baseline SF-12v2 scores from the study sample with normative (benchmark) scale scores from the US general population. To control for differences in key sample characteristics between the current and normative samples, the normative sample was adjusted to match the age and gender distribution of the current sample or subsample (i.e., for the total sample and separately for each condition) using separate least-squares multiple regression models for each of the SF-12v2 scales and component summaries. Weights for the mean scores of each benchmark sample were then estimated based on these matched demographic characteristics, with mean scores and standard errors then adjusted based on these weights.
Statistical analyses
Significant differences for all comparisons were assessed at an alpha level of 0.05 with no correction for multiplicity, as these were exploratory analyses. The data are presented as group mean ± standard error. The interpretation of differences between groups on the SF-12v2 survey was based on recommended values for the minimal important difference (MID). These differences have been derived based on analyses of important health consequences such as risk of mortality and hospitalization, as well as studies of score differences for groups known to differ on physical and mental health [22]. In addition, Cohen's d effect size (ES) [23] for standardized differences was calculated and is presented as an absolute value to interpret the magnitude of difference in HRQoL between the patients in this study and the general population. Criteria for defining the level of clinical meaningfulness were as follows: ES < 0.2 = negligible; 0.2 ≤ ES < 0.5 = small; 0.5 ≤ ES < 0.8 = moderate; and ES ≥ 0.8 = large.
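As a rough illustration of the effect-size calculation described above, the snippet below computes Cohen's d for two independent groups using the pooled standard deviation and maps its absolute value onto the stated interpretation bands. It is a minimal sketch with simulated data, not the study's analysis code, and it does not reproduce the covariate-adjusted (ANCOVA) comparisons.

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
    return (g1.mean() - g2.mean()) / pooled_sd

def interpret(d):
    """Interpretation bands quoted in the text, applied to |d|."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "moderate"
    return "large"

# Hypothetical SF-12v2 component scores for a non-TD and a TD group (values are made up).
non_td = np.random.default_rng(0).normal(45, 10, 219)
td = np.random.default_rng(1).normal(41, 10, 197)
d = cohens_d(non_td, td)
print(f"Cohen's d = {d:.2f} ({interpret(d)})")
```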
To address concerns that the distributions of dependent variables may have violated the assumptions of statistical tests, analyses were performed wherein the continuous dependent variables were recoded into five equal-sized categories (low, low middle, middle, high middle, high) and submitted to ordinal logistic regressions with TD status as the predictor. These tests are not influenced by outliers, as the distance between scores is removed when recoded.
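The sensitivity analysis described above recodes each continuous outcome into five equal-sized ordered categories before fitting an ordinal logistic regression with TD status as the predictor. The pandas sketch below shows one plausible way to perform that recoding and fit; the column names, simulated values, and the use of statsmodels' OrderedModel are illustrative assumptions rather than the authors' actual pipeline.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical data frame: one continuous outcome (e.g., PCS) and a binary TD indicator.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "pcs": rng.normal(45, 10, 416),
    "td": rng.integers(0, 2, 416),
})

# Recode the continuous outcome into five equal-sized ordered categories (quintiles).
df["pcs_cat"] = pd.qcut(df["pcs"], q=5,
                        labels=["low", "low middle", "middle", "high middle", "high"])

# Ordinal logistic regression with TD status as the sole predictor.
model = OrderedModel(df["pcs_cat"], df[["td"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```

Because the quintile recoding discards the distances between raw scores, extreme values cannot dominate the fit, which is the rationale stated in the text.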
Study population
Among 459 patients who completed the online survey, 416 were included in the study. A total of 43 participants were excluded for the following reasons: four did not meet the eligibility requirements, one requested to be withdrawn from the study, one had someone else fill out the survey, three had survey responses that could not be accurately linked to diagnostic information provided at screening, and 34 did not provide consistent answers based on checks of the logical consistency of eight pairs of responses to individual survey items (e.g., indicated that they were "limited a lot" in walking 100 yards, but were "not limited at all" in walking a mile, etc.). Of the study population, 219 did not have TD, while 197 had TD. Of those diagnosed with TD, clinician assessments indicated that 109 had mild TD (55.3%) and 88 had moderate/severe TD (44.7%). The average time to complete the survey was 8.5 min (IQR = 4.73-10.76).
Demographic and clinical characteristics of the survey population as well as the patient-reported outcome scale distributions are summarized in Table 1. The mean age was similar between the non-TD and TD groups, and both groups had a similar proportion of male and female patients. Each of the diagnoses (BD, MDD, SZ) made up approximately one-third of the non-TD and TD groups.
Effects of TD status on HRQoL
In the pooled group of patients with BD, MDD, or SZ, HRQoL (SF-12v2 and Q-LES-SF) and social withdrawal (SW-ISMI) were compared between patients with and without TD (Fig. 1). Patients without TD diagnosis had significantly better HRQoL and less social withdrawal than patients with TD (SW-ISMI score: 2. Similar results were obtained when examining BD, MDD, and SZ separately. Compared with patients without TD, patients with TD had worse HRQoL and more social withdrawal. MID thresholds were exceeded for PCS, MCS, and PF score differences in the SZ group but not in the BD and MDD groups. The MID was exceeded only in the PCS domain for patients with BD (score difference = 2.48) or MDD (score difference = 3.01). The score differences in all domains were larger for the SZ group than for the BD and MDD groups, as patients without TD achieved the best scores and patients with TD had the worst scores of all.
Association between clinician-rated TD severity and HRQoL
The impact of different levels of TD severity on patients' HRQoL and social withdrawal was examined in the pooled group of patients with BD, MDD, and SZ by comparing the outcomes of patients that had clinician-rated mild TD with those that had clinician-rated moderate/severe TD. Patients with mild TD scored better on all scales than did patients with moderate/severe TD (Fig. 2), and these differences were significant (P < 0.05) for the following scales; SW-ISMI:
Disease burden
To evaluate the health status burden associated with TD, baseline SF-12v2 (MCS, PCS, and PF) scores from the pooled group with BD, MDD, and SZ were compared with adjusted benchmark scores from the US general population (Fig. 3). In the non-TD group, patients had significantly lower scores in all three domains of the SF-12v2 than the general population norm. The largest difference was observed in the MCS domain (− 11.0; Cohen's d = 1. Notably, presence of TD further increased the score differences between the pooled group of patients (BD, MDD, and SZ) and the general population norm in all three domains of the SF-12v2. The largest difference was again observed in the MCS domain (− 13.1; Cohen's d = 1.22). These results show that the burden associated with mental health (MCS score) was high due to the underlying mental illness (BD, MDD, or SZ), while only a minor effect was observed on physical health (PF and PCS scores). Presence of TD had an additional impact on all three SF-12v2 domains, with more dramatic effects on the scores associated with physical health.
Sensitivity analysis using ordinal logistic regression supported the results obtained from the primary analyses by showing that TD status was a significant predictor of all dependent measures (PF: P < 0.001, PCS: P < 0.001, SW-ISMI: P = 0.012, Q-LES-Q-SF: P = 0.001) except MCS (P = 0.095).
Discussion
This study investigated the impact of TD on HRQoL and social withdrawal using a survey that contained items from the SF-12v2, Q-LES-Q-SF, and SW-ISMI questionnaires, and which was completed by patients with different underlying mental illnesses (BD, MDD, or SZ). To date and to the best of our knowledge, different aspects of HRQoL in patients with TD, such as mental and physical health, have not been addressed elsewhere. In the pooled group combining BD, MDD, and SZ, patients with TD had significantly worse HRQoL and increased social withdrawal than patients without TD and the general population norm. Notably, the burden of TD increased with escalating severity of the condition. These results are consistent with previous studies that investigated the effect of TD on quality of life in patients with SZ [17,24]. However, although these reports clearly demonstrate a negative impact of TD on patients with SZ, the impact of TD on other populations was not investigated.
In this study, results from the SF-12v2 questionnaire show that physical health-related differences between the TD and non-TD groups were much larger than differences related to mental health. Similarly, the physical health burden in patients with TD compared with the general population norm was higher than in patients without TD, while mental health burden was high in patients with BD, MDD, or SZ regardless of TD status. In addition, decreases in HRQoL with increasing TD severity were mostly due to decreased physical health. These observations indicate that presence of TD has a substantial impact on physical health burden in patients who already experience a significant mental health burden due to the underlying mental illness.
Considering the nature of the SF-12v2 physical functioning items (climbing stairs, walking, carrying groceries, etc.), which mostly map onto activities requiring gross motor movements, it is difficult to reconcile how small involuntary movements of the face and extremities can predict differences not only by the presence or absence of TD, but also by TD severity. While any posited mechanism is speculative, one possibility is that those suffering from TD lose some level of confidence in their physical abilities as a result of the involuntary movements they experience. This would not necessarily mean that changes in physical functioning are simply a change in perception. In fact, we see it as more likely that this lack of confidence would influence physical functioning, and we suggest that the SF-12v2 physical functioning items are reflecting real changes that result from this lack of confidence.
Interestingly, when analyzing each mental illness separately, the SZ group was more sensitive to TD than the BD and MDD groups. In the absence of TD, patients with SZ had a physical health status comparable with the general population norm and a mental health status that was moderately affected. Surprisingly, both of these scores were better in the SZ group without TD than those observed in the BD and MDD groups without TD. Although we cannot rule out the possibility of sampling error, this unexpected result could be a consequence of the progressive nature of SZ, which may result in the gradual acceptance of one's condition over time [25], versus the more episodic course of BD and MDD. However, when patients had TD in addition to SZ, both mental- and physical-health-related scores were significantly lower. Moreover, patients with SZ and TD had the lowest scores on all scales compared with the respective BD or MDD groups.
Thus, in the pooled group of patients with BD, MDD, and SZ, the observed differences on all scales between patients with and without TD were driven by those who had SZ. However, the burden of TD in patients with BD or MDD should not be discounted, since having TD was associated with overall worse HRQoL, in particular physical health, for both conditions. This study has implications for clinicians prescribing antipsychotic medication, as it clearly demonstrates that TD is not simply a nuisance side effect. Rather, TD impacts patients' physical well-being and exacerbates existing issues of stigma related to their underlying condition. Considering that treatment with antipsychotics is one of the only options available for some patients, clinicians might look for treatments for TD that can be administered in conjunction with antipsychotic medications. Future work might investigate the mechanisms by which physical functioning is impacted by TD, to better characterize how involuntary movements of the extremities and face influence the larger-scale physical functioning captured by the SF-12v2. The impact of TD on antipsychotic medication adherence is also an important area for mental health treaters to understand. Ultimately, methods of treatment for TD need to be evaluated in light of the patient experience, including HRQoL and stigma.
Study limitations
This study has several limitations. Observations presented here may be directly associated with TD but may also be influenced by other pre-existing differences, such as comorbidities, that are beyond the control of this study. This study therefore accounted for additional factors, such as age and gender, that could potentially confound the effect of TD. However, it is worth noting that severity of a mental illness may in fact be an important consideration in assessing TD status, as patients with severe mental illness are more likely to have greater exposure to the antipsychotic medications that cause TD. Statistical tests did not control for multiple comparisons. However, most tests of differences had P values of < 0.001 and would be significant even with adjustment for multiple comparisons. While sensitivity analyses using ordinal logistic regression showed TD status to be a significant predictor of PCS, PF, ISMI and Q-LES-Q, MCS was not quite statistically significant (P = 0.095) using this lower-powered test. Out of concern that we could not assume that clinicians' ratings of mental illness severity and TD severity were independent of each other, we did not include severity of mental illness as a covariate in analyses. Another caveat of the study design was that the clinicians' ratings of TD severity were not validated against a more established method, such as the AIMS.
Conclusions
Although patients with BD, MDD, or SZ experience a deficit in HRQoL and notable social withdrawal, the presence of TD further impacts their health status, with the strongest impact on physical health. The key concepts related to HRQoL in patients with TD identified here may be assessed in future clinical trials.
Genome-Wide Analysis of Long Non-coding RNAs Involved in Nodule Senescence in Medicago truncatula
Plant long non-coding RNAs (lncRNAs) are widely accepted to play crucial roles in diverse biological processes. In recent years, thousands of lncRNAs related to the establishment of symbiosis, root nodule organogenesis and nodule development have been identified in legumes. However, lncRNAs involved in nodule senescence have not been reported. In this study, senescence-related lncRNAs were investigated in Medicago truncatula nodules by high-throughput strand-specific RNA-seq. A total of 4576 lncRNAs and 126 differentially expressed lncRNAs (DElncRNAs) were identified. We found that more than 60% of lncRNAs were associated with transposable elements, especially the TIR/Mutator and Helitron DNA transposon families. In addition, 49 DElncRNAs were predicted to be the targets of microRNAs. Functional analysis showed that the largest sub-set of differentially expressed target genes of DElncRNAs was associated with the membrane component. Of these, nearly half of the genes were related to material transport, suggesting that an important function of DElncRNAs during nodule senescence is the regulation of substance transport across membranes. Our findings will be helpful for understanding the functions of lncRNAs in nodule senescence and provide candidate lncRNAs for further research.
Root nodules are special organs formed by legume-rhizobium symbiosis. Emerging evidence suggests that lncRNAs function as crucial regulators of symbiotic nitrogen fixation (SNF) in nodules. A well-known lncRNA associated with SNF is ENOD40 in M. truncatula (Campalans et al., 2004), which can act as a dual RNA (Bardou et al., 2011) in nodule organogenesis. Another lncRNA is TAS3 RNA in M. truncatula, and the miR390/TAS3 pathway plays negative roles in nodulation and nodule development (Hobecker et al., 2017). Recently, thousands of lncRNAs in M. truncatula have been identified as being involved in SNF and possibly regulating mRNA expression in a cis manner (Pecrix et al., 2018). SNF by nodules lasts for a period, peaks at some point in the nodule life-span and declines with the senescence of nodules. Mature indeterminate nodules (such as nodules on M. truncatula roots) are divided into four developmental regions, namely the apical meristematic, the infection, the nitrogen fixation and the senescence zones (Van de Velde et al., 2006). Nodule senescence is a developmental process which is initiated in the senescence zone and advances gradually toward the meristematic zone. Although a large number of lncRNAs involved in SNF have been identified, little is known about the lncRNAs related to nodule senescence. Interestingly, several recent reports have suggested that lncRNAs play key roles in leaf senescence (Chao et al., 2019; Huang et al., 2021), and nodule senescence has a relatively high similarity with leaf senescence at the molecular level (Van de Velde et al., 2006), indicating that lncRNAs are also likely to be important regulators during nodule senescence. However, previous work has focused on the identification and functions of genes involved in nodule senescence, and research on ageing-related lncRNAs in root nodules has been lacking. In this study, we conducted high-throughput strand-specific RNA-seq of nodules at 21 and 35 days post inoculation (dpi) with Sinorhizobium meliloti 1021 to investigate and characterize lncRNAs associated with nodule senescence. Our findings will provide new insights into the underlying functions of lncRNAs during nodule senescence.
Plant Materials
M. truncatula A17 seeds were surface-sterilized in 75% ethanol for 5 min and 2% sodium hypochlorite solution for 15 min, before washing 5-6 times with sterile water. The seeds were placed on 1.5% (w/v) agar plates at 4 °C for 1 day. After germination in a greenhouse (20 °C/25 °C and 16 h/8 h light/dark) for 1-2 days, the seedlings were planted in sterilized sand and watered with Fahraeus nitrogen-free nutrient solution (Fahraeus, 1957). S. meliloti 1021 was inoculated after the cotyledons had expanded. Nodules were collected from the taproots of 40 plants at each dpi.
Paraffin sections of nodules were prepared for microscopic observation. Nodules at different dpi were cut longitudinally and fixed with FAA for more than 24 h. After dehydration with a graded ethanol series and clearing with dimethylbenzene, the nodules were embedded in paraffin and sectioned. The slides were then stained with toluidine blue and observed with an Olympus light microscope.
Library Preparation for Long Non-coding RNA-Seq
Total RNA was obtained using the plant RNA extraction kit RN40 (Aidlab, Beijing, China). A Nanodrop 2000 (Thermo Fisher, Waltham, MA, United States) was employed to verify RNA concentration and purity. An Agilent Bioanalyzer 2100 (Agilent Technologies, Santa Clara, CA, United States) was used to verify RNA integrity. The Ribo-off rRNA Depletion Kit N409-2 (Vazyme, Nanjing, China) was used to remove rRNA. The VAHTS Total RNA-seq Library Prep Kit for Illumina (Vazyme, Nanjing, China) was used for library construction. The libraries were sequenced on an Illumina NovaSeq 6000 platform (PE150).
Identification and Analysis of Long Non-coding RNAs
Clean data were produced by removing adapter-containing reads and low-quality reads from the raw data. HISAT2 v2.0.4 (Kim et al., 2019) was used for sequence alignment. The transcriptome was assembled using StringTie v1.3.1 (Pertea et al., 2015) and Scripture based on the reads mapped to the reference M. truncatula genome MedtrA17_4.0. The assembled transcripts were compared using the Cuffcompare v2.1.1 program (Trapnell et al., 2010). LncRNAs were screened using the following criteria: (1) transcripts shorter than 200 nt were removed; (2) transcripts were evaluated for protein-coding potential with the CPC (Kong et al., 2007), CNCI (Sun et al., 2013), Pfam and CPAT platforms, and the intersection of the four methods was retained.
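As a schematic illustration of the screening logic just described (a length filter plus the intersection of four coding-potential predictors), the following Python sketch filters a small table of assembled transcripts. The column names, boolean encoding, and example values are assumptions for illustration; the actual pipeline operated directly on the CPC, CNCI, Pfam, and CPAT outputs.

```python
import pandas as pd

# Hypothetical table of assembled transcripts with per-tool coding-potential calls.
# Each *_noncoding column is True when that tool classifies the transcript as non-coding
# (for Pfam, "no significant protein-domain hit").
transcripts = pd.DataFrame({
    "transcript_id":  ["TU1", "TU2", "TU3", "TU4"],
    "length_nt":      [180,   650,   1200,  950],
    "cpc_noncoding":  [True,  True,  False, True],
    "cnci_noncoding": [True,  True,  True,  True],
    "pfam_nohit":     [True,  True,  True,  True],
    "cpat_noncoding": [True,  False, True,  True],
})

# (1) Remove transcripts shorter than 200 nt.
long_enough = transcripts["length_nt"] >= 200

# (2) Keep only transcripts called non-coding by all four tools (the intersection).
noncoding_all = (transcripts["cpc_noncoding"] & transcripts["cnci_noncoding"]
                 & transcripts["pfam_nohit"] & transcripts["cpat_noncoding"])

candidate_lncrnas = transcripts[long_enough & noncoding_all]
print(candidate_lncrnas["transcript_id"].tolist())   # -> ['TU4']
```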
Identification of Differentially Expressed Long Non-coding RNAs, Target Gene Prediction and Transposable Element Analysis
Differential expression analysis was performed using the DESeq R package v1.10.1 (Anders and Huber, 2010). LncRNAs or mRNAs with p < 0.05 and fold change ≥ 1.5 were considered to be differentially expressed. For target gene prediction, Perl scripts were used to search for adjacent genes within ± 100 kb of lncRNAs as cis-target genes, while trans-target gene prediction was based on sequence complementarity using the LncTar prediction program (Beattie, 2014). The target lncRNAs of microRNAs were predicted using TargetFinder (v1.0; Fahlgren and Carrington, 2010). The Extensive de-novo TE Annotator (EDTA; Ou et al., 2019) was used for transposable element (TE) annotation. A lncRNA overlapping a TE site but not lying completely inside a TE was classified as a TE-associated lncRNA (Wang D. et al., 2017).
Gene Annotation and Functional Analysis of Differentially Expressed Long Non-coding RNA Target Genes

Gene function was annotated against the Nr (NCBI non-redundant protein sequences), Pfam (protein family), KOG/COG (Clusters of Orthologous Groups of proteins), Swiss-Prot (a manually annotated and reviewed protein sequence database), KEGG (Kyoto Encyclopedia of Genes and Genomes) and GO (Gene Ontology) databases. GO enrichment analysis was implemented with the TopGO R package, and KOBAS (Mao et al., 2005) software was used for KEGG pathway analysis.
Quantitative Real-Time PCR
Total RNA was extracted with TRIzol reagent, and random primers were used for reverse transcription of lncRNAs and mRNAs. qPCR was performed on a qTOWER 3.0 real-time PCR system (Analytik Jena, Germany). Primers are listed in Supplementary Table 1. Relative expression levels were calculated by the 2^−ΔΔCT method.
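For readers unfamiliar with the relative-quantification calculation mentioned above, the following minimal sketch works through the 2^−ΔΔCT arithmetic; all Ct values and the resulting fold change are invented for illustration.

```python
# Toy 2^-DeltaDeltaCt calculation for relative expression.

def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control          # relative to control condition
    return 2 ** (-dd_ct)

# e.g., a lncRNA in 35 dpi vs 21 dpi nodules, normalized to a housekeeping gene
print(rel_expression(ct_target=24.1, ct_ref=18.0,
                     ct_target_ctrl=26.3, ct_ref_ctrl=18.2))  # 4.0-fold, toy data
```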
Statistical Analysis
Statistical analysis was performed with SPSS 19.0 software (IBM, Chicago, IL, United States). Data from two groups were compared using an unpaired two-tailed t-test. Differences in length and expression level between TE- and non-TE-lncRNAs were assessed with the Wilcoxon test.
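A minimal sketch of the two comparisons named above is given below, written with SciPy rather than SPSS purely for illustration; the two groups of values are placeholders.

```python
# Unpaired two-tailed t-test and a rank-based (Mann-Whitney/Wilcoxon rank-sum)
# comparison, as would be applied to e.g. TE- vs non-TE-lncRNA lengths.
from scipy import stats

group_a = [2.1, 2.5, 1.9, 2.8, 2.2]   # toy values
group_b = [3.0, 3.4, 2.9, 3.6, 3.1]

t_stat, t_p = stats.ttest_ind(group_a, group_b)              # unpaired, two-tailed
u_stat, u_p = stats.mannwhitneyu(group_a, group_b,
                                 alternative="two-sided")     # rank-sum test
print(f"t-test p = {t_p:.4f}, rank-sum p = {u_p:.4f}")
```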
Nodules at 35 Days Post Inoculation Displayed Senescence
Nodules at 21 and 35 dpi were collected to determine their developmental stage. At 21 dpi, nodules were small and pink, whereas at 35 dpi a small proximal section had turned green, indicating that aging had begun (Figure 1A). Paraffin sections stained with toluidine blue were used to observe the developmental zones. The cells in the nitrogen fixation zone of 21 dpi nodules remained healthy, with a large number of bacteroids, whereas at 35 dpi a small, distinct senescence zone was present at the proximal end of the fixation zone (Figure 1B). In this region, the number of infected cells was reduced and loss of cellular content was observed, indicating degradation of the bacteroids (Figure 1C). Moreover, some infected cells were abnormal, with a very large vacuole. Based on these observations, we concluded that 35 dpi nodules had entered the aging stage.
We compared the expression level, transcript and ORF length, exon number, and isoform number of lncRNAs with those of mRNAs. The results reflect the distinct characteristics of lncRNAs and mRNAs. The average expression level of mRNAs was 1.8 times that of lncRNAs (Figure 2D). Most lncRNAs had a transcript length of less than 1000 nt (71.1%; Figure 2E) and an ORF length ≤ 100 aa (Figure 2G). In contrast, the average length of mRNAs was 2467 nt (Figure 2F) and the ORFs of most mRNAs were longer than 100 aa (Figure 2H). The majority of lncRNAs contained fewer than three exons (Figure 2I), while about 76.5% of mRNAs had more than three exons (Figure 2J). Regarding isoform number, one or two isoforms was the most common case for both lncRNAs and mRNAs (Figure 2K).
Identification and Functional Analysis of Long Non-coding RNAs Related to Nodule Senescence
A total of 126 DElncRNAs, comprising 67 up-regulated and 59 down-regulated lncRNAs, were identified (N35 vs N21; Figures 3A,B). The chromosomal distribution of DElncRNAs displayed the same preference as that of total lncRNAs. Moreover, although chromosome 6 carried a small number of DElncRNAs, most of them were up-regulated (Figure 2C). Many lncRNAs function by regulating gene expression, so prediction of their target genes can provide insight into their biological roles. A total of 1911 putative cis-regulated and 28 trans-regulated target genes of DElncRNAs were predicted. GO term analysis of these target genes showed significant differences between mature and senescent nodules. Notably, among the top 20 cellular component terms, integral component of membrane was the most significantly enriched term for both cis (Figure 3C) and trans (Figure 3D) target genes. KEGG pathway analysis revealed that cis target genes were enriched in RNA polymerase, purine and pyrimidine metabolism, and flavonoid and amino acid biosynthesis pathways (Supplementary Figure 1). The trans target genes were enriched in MAPK signaling, plant hormone signal transduction, plant-pathogen interaction and isoflavonoid biosynthesis pathways (Supplementary Figure 1).
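The enrichment statements above rest on an over-representation test. The sketch below illustrates the kind of hypergeometric test that underlies such GO/KEGG enrichment; only the figure of 1911 cis target genes comes from this study, while the background size, term annotation count and hit count are invented, and the actual analysis used TopGO and KOBAS.

```python
# Illustrative over-representation test (one-sided hypergeometric), not the
# exact statistic computed by TopGO/KOBAS.
from scipy.stats import hypergeom

M = 20000   # annotated genes in the background (assumed)
n = 350     # background genes annotated to the term of interest (assumed)
N = 1911    # cis target genes tested (from this study)
k = 70      # target genes carrying the term (assumed)

# P(X >= k): probability of seeing at least k hits by chance
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p = {p_value:.3e}")
```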
Identification of Long Non-coding RNAs Targeting Membrane-Related Genes and Transcription Factors
Among the target genes of DElncRNAs, 48 were differentially expressed (DEmRNAs) between N35 and N21. TopGO analysis demonstrated that 13 of the 48 DEmRNAs were membrane associated (Figure 4A). Furthermore, six of these 13 DEmRNAs encoded membrane proteins related to transport: two Casparian strip membrane proteins, the SNARE protein SYP132, an EamA domain protein, a nitrate transporter NRT1(PTR) and a transmembrane protein (Supplementary Table 4).
Since transcription factors (TFs) play important roles during nodule senescence, we investigated the TF genes targeted by DElncRNAs. Altogether, 41 TF genes belonging to 13 families were identified, of which MYB (11, 26.8%) and MADS (8, 19.5%) constituted the two largest families. We found that 7 of the 41 TF genes were differentially expressed between N21 and N35, including three
Identification of Long Non-coding RNAs Targeting Well-Studied Genes Involved in Nodule Senescence
Since several ageing-related genes have been reported to play important roles in nodule senescence (de Zelicourt et al., 2012; Berrabah et al., 2014; Pierre et al., 2014; Dhanushkodi et al., 2018; Deng et al., 2019; Trujillo et al., 2019), we investigated the lncRNAs likely to target them (i.e., lncRNAs within ±100 kb of the associated genes). We first examined the expression of these genes in the RNA-seq data. The two cysteine proteinase genes MtCP6 and MtVPE, early molecular markers of nodule senescence, were significantly upregulated in N35. The NAC family TF gene MtNAC969, a negative regulator of nodule senescence, also showed upregulated expression, whereas the expression of the remaining genes did not differ significantly between N21 and N35. These results are consistent with previous reports. A total of 34 lncRNAs were predicted to target 13 senescence-associated genes (Table 1). Of these, 9 DElncRNAs, comprising 7 down-regulated and 2 up-regulated lncRNAs, were identified. As seen from Table 1, most genes could be targeted by multiple lncRNAs. For instance, MtNAC969 was probably targeted by six lncRNAs, two of which were differentially expressed. MtCP6 and MtVPE were each targeted by one DElncRNA. These results imply that lncRNAs play regulatory roles in nodule senescence by targeting key senescence-related genes.
Prediction of Differentially Expressed Long Non-coding RNAs Targeted by MicroRNAs
Plant lncRNAs can play regulatory roles by acting as target mimics of miRNAs, so it is necessary to predict which senescence-associated lncRNAs are targeted by miRNAs. In total, 49 DElncRNAs were predicted to be targeted by 93 miRNAs (Supplementary Table 5). Some miRNAs could target more than one DElncRNA. As a well-known regulator of multiple physiological processes, miR156 probably targets three DElncRNAs. In addition to miR156, other miRNAs such as miR172 and miR168 were also found to interact with DElncRNAs. Conversely, some DElncRNAs could bind multiple miRNAs; for example, MSTRG.16162.3 was predicted to bind eight miRNAs belonging to four miRNA families. High-throughput sequencing of miRNAs showed that 36 of the 93 miRNAs were differentially expressed (Supplementary Table 5), including 19 known miRNAs such as miR156, miR172, miR1509 and miR2629, and 17 novel miRNAs.
The percentage of TE-lncRNAs in polyA+ RNA was close to that in polyA− RNA (63.1% vs 61.6%; Figure 5A). Compared with non-TE-lncRNAs, the average length of TE-lncRNAs was significantly longer and their expression level was relatively lower (Figures 5B,D). Furthermore, for DElncRNAs, the difference in length between TE- and non-TE-lncRNAs was greater (Figure 5C). We assigned the TEs in lncRNAs to families according to their sequence homology with known TEs. In terms of quantity, the proportion of TE-lncRNAs associated with DNA transposons (2004, 70.3%) was much larger in M. truncatula than in many other plant species (Wang D. et al., 2017; Golicz et al., 2018b). The family contributing the most TE-lncRNAs was Helitron, followed by TIR/Mutator, LTR/unknown, LTR/Gypsy and LTR/Copia. Similarly, for the TEs in DElncRNAs, the top three families were also Helitron, Mutator and LTR/unknown, while only small numbers of LTR/Copia and LTR/Gypsy elements were identified (Figure 5E).
In addition, we investigated the contribution of different TE families to lncRNA length (Table 3). The lncRNAs related to LTR/Gypsy accounted for the greatest share of length, which differed from the result calculated by quantity. Interestingly, the family contributing the most to DElncRNA length was TIR/Mutator rather than LTR/Gypsy. Furthermore, for both total lncRNAs and DElncRNAs, the contribution of DNA transposons to lncRNAs was greater than their contribution to the genome. In particular, DElncRNAs derived from TIR/Mutator accounted for 27.03%, nearly three times the proportion of TIR/Mutator elements among all TEs in the genome. By contrast, the proportion of TE-lncRNAs derived from LTR/Gypsy or LTR/Copia was lower than their proportion in the genome.
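To make the distinction between contribution by count and by length concrete, the toy sketch below tallies both for a handful of invented TE-lncRNA records; the family names are those used in this study, but the counts and lengths are not.

```python
# Summarize TE family contribution to TE-lncRNAs by count and by total length.
from collections import defaultdict

# (lncRNA id, TE family, lncRNA length in nt) -- illustrative only
te_lncrnas = [
    ("MSTRG.a", "Helitron", 600),
    ("MSTRG.b", "Helitron", 450),
    ("MSTRG.c", "TIR/Mutator", 1800),
    ("MSTRG.d", "LTR/Gypsy", 900),
]

counts, lengths = defaultdict(int), defaultdict(int)
for _, family, length in te_lncrnas:
    counts[family] += 1
    lengths[family] += length

total_len = sum(lengths.values())
for family in counts:
    print(family,
          f"count share = {counts[family] / len(te_lncrnas):.0%}",
          f"length share = {lengths[family] / total_len:.0%}")
```

With these toy numbers, Helitron leads by count while TIR/Mutator leads by length, mirroring the pattern reported above.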
Quantitative Real-Time PCR Validation
To verify the RNA-seq data, eight DElncRNAs were selected at random for qRT-PCR detection (Figures 6A-H). The expression trends of these DElncRNAs were consistent with the RNA-seq results (Figure 6I), indicating the reliability of the expression analysis. Furthermore, we selected three interesting lncRNAs to check the co-expression tendency of lncRNAs and their putative target genes by qRT-PCR. In 15, 21, 28, 35, and 45 dpi nodules, the expression tendencies of lncRNA MSTRG.14267.1 and gene LOC25492610 (encoding a lysine-specific demethylase) were highly consistent (Figure 6J), whereas the expression profiles of lncRNA MSTRG28751.1 and gene LOC25502666 (encoding a bHLH transcription factor) showed opposite trends (Figure 6K). Additionally, the expression trends of two genes (the senescence-associated gene newGene_6237 and the transmembrane protein gene newGene_6245) were both consistent with that of MSTRG.31647.1 (Figure 6L).
Long Non-coding RNAs Are Involved in Regulating Nodule Senescence
Nodule senescence leads to decreased nitrogen fixation efficiency and affects crop yield. Thus, one effective measure to safeguard yield is to prolong the period of nitrogen fixation by delaying the onset of nodule senescence. Investigating the regulatory mechanisms underlying nodule senescence can provide potential targets for such interventions. However, little research has focused on the lncRNAs related to nodule senescence. In this study, we identified 126 putative nodule senescence-related lncRNAs by strand-specific RNA-seq. Our findings provide insight into the functions of lncRNAs in nodule senescence and offer candidate targets for the regulation of nodule senescence.
Transposable Element-Associated Long Non-coding RNAs Play Important Roles in Nodule Senescence

TEs are widely distributed in plant genomes, and previous work has demonstrated that a number of plant lncRNAs are derived from TEs. Numerous TE-lncRNAs have been identified in Arabidopsis, rice (Wang D. et al., 2017), maize (Lv et al., 2019), cotton (Zhao et al., 2018), tomato (Wang et al., 2016) and soybean (Golicz et al., 2018b). Similarly, our results revealed that more than 60% of lncRNAs contained TE sequences, and the proportion was higher among DElncRNAs. In many plant species, TE-lncRNAs mainly originate from retrotransposons, especially the LTR family (Wang D. et al., 2017). Surprisingly, however, we found that in M. truncatula nodules the Helitron and TIR/Mutator families contributed the most to lncRNAs in quantity and length, respectively, which is distinct from the results in maize (Lv et al., 2019), rice (Wang D. et al., 2017), cotton (Zhao et al., 2018) and soybean (Golicz et al., 2018b), but shares some similarities with the findings in Arabidopsis (Wang D. et al., 2017). This suggests that the contribution of different TE families varies with plant species and growth conditions. TEs can act as functional domains of lncRNAs (Johnson and Guigo, 2014), and current research shows that TE-lncRNAs often work as regulators both in plant responses to abiotic stress (Wang D. et al., 2017; Lv et al., 2019) and in plant development, including fruit ripening (Wang et al., 2016) and the control of seedling height (Zhao et al., 2018). However, whether they are involved in nodule senescence has remained unknown. An interesting finding of this work is that the contribution of TIR/Mutator to DElncRNA length was significantly greater than that of other families, while Helitron contributed the most in quantity. Mutator elements were first found in maize, and their homologues, called Mutator-like elements (MULEs), are distributed in many other plant species. MULEs are able to selectively capture host gene fragments in Arabidopsis (Yu et al., 2000), maize (Talbert and Chandler, 1988) and rice (Juretic et al., 2005). Genes associated with MULEs play important roles in plant growth and development. For instance, in Arabidopsis, MULE-derived genes act as transcriptional regulators of genes involved in the light response (Hudson et al., 2003), and mutation of MULE-related genes caused delays in plant growth and flowering as well as reproductive defects (Joly-Lopez et al., 2012). Helitron elements have been reported to change the function and expression level of genes (Liu et al., 2020). However, the contribution of MULEs and Helitrons to lncRNAs and plant aging is unknown. Our results suggest that TE-lncRNAs derived from MULE and Helitron elements are involved in nodule senescence.
Long Non-coding RNAs Regulate Nodule Senescence by Interacting With miRNAs
In plants, miRNAs are regarded as control centers of diverse biological processes, including plant aging (Werner et al., 2021). During plant flowering, the aging pathway is regulated by miR156 and its targets, the SQUAMOSA PROMOTER BINDING-LIKE (SPL) transcription factors (Gou et al., 2019). SPL genes can increase the expression of miR172, and miR156/SPLs/miR172 constitute the regulatory network of the aging pathway (Wei et al., 2017; Werner et al., 2021). Additionally, miR168 is involved in seed senescence in barley (Puchta et al., 2021). In legumes, miR156 and miR172 are essential regulators of nodulation in soybean (Yan et al., 2013; Yun et al., 2022), common bean (Nova-Franco et al., 2015) and alfalfa (Aung et al., 2017). However, the involvement of miRNAs in nodule senescence has not been reported. In our study, miR156, miR172 and miR168 displayed differential expression between N21 and N35, suggesting roles in nodule senescence. LncRNAs can bind miRNAs as competing endogenous targets. Here, four DElncRNAs were predicted to be targeted by miR156 or miR172, indicating that DElncRNAs could function in nodule aging by interacting with miRNAs.
Long Non-coding RNAs Regulate Nodule Senescence by Affecting the Material Transport Across Membrane
TopGO analysis of DEmRNAs targeted by DElncRNAs showed that 13 DEmRNAs were associated with membrane components, six of which encoded proteins related to material transport. Of these proteins, SYP132, a Qa-SNARE, has been reported to mediate fusion between vesicles and the target cell membrane. A SNARE in tobacco was responsible for the regulation of cell membrane ion channels (Leyman et al., 1999), and SYP132 in A. thaliana mediates the endocytosis of H+ ATPase (Xia et al., 2019). Previous work suggested that MtSYP132 localizes to the symbiosome membrane and the membrane around infection threads, indicating roles in nodulation and nodule development. The symbiosome membrane provides a medium for communication between the bacteroids and host cells, and transmembrane ion transport across the symbiosome membrane is crucial for the function and survival of bacteroids (Catalano et al., 2007). Because MtSYP132 was up-regulated during senescence, it may regulate the transport of specific aging-related molecules across the symbiosome membrane. Besides SYP132, two Casparian strip membrane proteins and an NRT1(PTR) protein were also up-regulated. Casparian strip membrane proteins mediate the deposition of the Casparian strip, which regulates the transport of water and inorganic salts; defects in its development lead to increased solute leakage (Calvo-Polanco et al., 2021). NRT1(PTR) proteins are known as nitrate transporters (Miller et al., 2007) but can also transport other substrates (Waterworth and Bray, 2006; Krouk et al., 2010), and an NRT1(PTR) protein in M. truncatula is essential for lateral root growth and nodule development (Yendrek et al., 2010). In summary, we speculate that lncRNAs play a role in nodule senescence by affecting material transport across membranes.
DATA AVAILABILITY STATEMENT
The data presented in the study are deposited in the SRA (https://www.ncbi.nlm.nih.gov/Traces/study/?acc=PRJNA810777), accession number PRJNA810777.
AUTHOR CONTRIBUTIONS
YL, LZ, and LY conceived the project and designed the protocol. XQ, JY, and LY performed the experiments. LY, TH, TW, ZL, and YL performed the data analysis. YL and LZ wrote the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by the National Natural Science Foundation of China (31760062 and 32060062) and Guangxi Natural Science Foundation (2018GXNSFAA138118).
ACKNOWLEDGMENTS
We are very grateful to Professor Youguo Li who provided Sinorhizobium meliloti 1021 and plant seeds.
SUPPLEMENTARY MATERIAL
Metabolic Versatility and Antibacterial Metabolite Biosynthesis Are Distinguishing Genomic Features of the Fire Blight Antagonist Pantoea vagans C9-1
Background: Pantoea vagans is a commercialized biological control agent used against the pome fruit bacterial disease fire blight, caused by Erwinia amylovora. Compared to other biocontrol agents, relatively little is currently known regarding Pantoea genetics. Better understanding of antagonist mechanisms of action and ecological fitness is critical to improving efficacy. Principal Findings: Genome analysis indicated that two major factors contribute to biocontrol activity: competition for limiting substrates and antibacterial metabolite production. Pathways for utilization of a broad diversity of sugars and for acquisition of iron were identified. Metabolism of sorbitol by P. vagans C9-1 may be a major metabolic feature in biocontrol of fire blight. Biosynthetic genes for the antibacterial peptide pantocin A were found on a chromosomal 28-kb genomic island, and for dapdiamide E on the plasmid pPag2. There was no evidence of potential virulence factors that could enable an animal or phytopathogenic lifestyle and no indication of any genetic-based biosafety risk in the antagonist. Conclusions: Identifying key determinants contributing to disease suppression allows the development of procedures to follow their expression in planta, and the genome sequence contributes to rational risk assessment regarding the use of the biocontrol strain in agricultural systems.
Introduction
Fire blight is the most important global threat to pome fruit production areas around the world and its causative agent, the enterobacterium Erwinia amylovora, can affect a wide variety of rosaceous plants [1]. During spring, the pathogen colonizes the stigmata of flowers and invades tissues through the nectaries, causing rapid necrosis and progressive wilt in infected branches. Epidemics can develop rapidly and result in death of individual plants or entire orchards within a single season, leading to severe economic losses [2]. E. amylovora has quarantine status outside North America, and it is a contentious trade issue with fruit movement into fire-blight-free countries such as Australia, Japan, and all countries of the Southern Hemisphere aside from New Zealand [3,4]. This invasive pathogen has spread across Europe and the Middle East, and is threatening to advance into the region of origin of apple germplasm resources in Central Asia [5], adding to the urgency to develop more effective control and containment strategies. With the recent publication of the genomes of E. amylovora CFBP 1430 [6] and its close relative Erwinia pyrifoliae DSM 12163 T [7], a solid knowledge base for the pathogen is present already.
Although fire blight is one of the most intensively studied bacterial plant diseases, the control of the disease is still not satisfactory [8]. The most effective control of fire blight was obtained with prophylactic applications of the antibiotic streptomycin on flowers. Unfortunately, resistant strains of E. amylovora have emerged in production areas where streptomycin is registered for use [9]. Biological control of the disease has been tested over the last decades as a valuable alternative [8]. Here, several modes of action have been described. First, competition for space and nutrients is one of the most common mechanisms [10]. Acidification of the habitat can also produce unfavorable conditions for pathogen multiplication [11]. Additionally, antibiotic production by the biological control agent can suppress growth of the pathogen [12,13].
P. vagans strain C9-1 [14,15,16] is an important biocontrol agent against E. amylovora [8,17] that is registered in the USA and Canada as BlightBan C9-1S (NuFarms America). Two Pantoea agglomerans strains (E325 and P10c) are also commercially marketed for fire blight control. An obstacle to wider approval of Pantoea for agricultural application as a biocontrol agent is the reporting of clinical isolates, typically with specious documentation and/or identification, within the same species as the biocontrol strains. As a result, European and other governmental regulators have categorized most Pantoea spp. as biosafety level 2 (BL-2) organisms (opportunistic pathogens). This qualification of Pantoea spp. as BL-2 organisms restricts beneficial uses much needed as alternatives to even more controversial products (e.g., antibiotics) or to fill gaps where no other protection options are available. Detailed analysis of the complete genome sequence of the P. vagans biocontrol strain C9-1 [18] offers an important foundation for demonstrating biosafety and moreover for elucidating traits that influence and can ultimately be harnessed to improve beneficial biocontrol performance.
Metabolic versatility in carbon metabolism
Nutrient competition by depriving pathogens of necessary resources on flowers is an important mechanism of action for biocontrol strains [10,19]. We evaluated the metabolic versatility of P. vagans C9-1 with Biolog GN2 and AN plates, and Biolog PM profiling (Supporting Information, Table S1). P. vagans C9-1 metabolized a wide range of carbohydrates, corresponding well with already published substrate ranges for this organism [14,20]. Many PTS systems are found in the genome annotation [18], most of them encoding sugar phosphorylases and most in close association with sugar-converting enzymes (e.g., kinases, glucosidases). Additionally, some families of MFS and ABC transporters are predicted to transport sugars. The complete pathways, including uptake proteins, for several of these sugars were identified, providing a genetic basis for most substrates found to be catabolized with Biolog PM profiling (Supporting Information, Table S1).
On pome fruit hosts that utilize sorbitol as a primary carbon transport and storage compound [21], metabolism of sorbitol may contribute to virulence of E. amylovora [22,23]. The genome of P. vagans C9-1 encodes two gene clusters for sorbitol utilization on plasmid pPag2 [18], a feature that is absent in other P. vagans strains [14]. At the protein level, the pPag2-encoded genes are 71-95% identical to each other, and 53-88% identical to the proteins encoded by the sorbitol gene clusters of E. amylovora [6]. Growth of P. vagans C9-1 with sorbitol as sole carbon source was confirmed with the Biolog plates (Supporting Information, Table S1) and in liquid cultures [15,20].
Biosynthesis of the antibiotic pantocin A
On the chromosome, a gene cluster (paaPABC) encoding the pantocin A biosynthesis and autoresistance genes [24,25] was identified with 99-100% identity to those of P. agglomerans biocontrol strains Eh318 and Eh252 [26]. An antibiotic, previously called herbicolin O [17], with identical chemical characteristics as pantocin A is produced by P. vagans C9-1 (C.A. Ishimaru, personal communication). The genes within the paaPABC operon have a lower G+C content (40.5%) than the whole genome (55.1%), suggesting a horizontal gene transfer event as origin. This operon resides in a 28 kb genomic island (GI) that was identified by its lower G+C content (37.3%). One border of the GI was integrated in the N-terminal region of the mutS gene (Fig. 1). On the other border of the GI, a 52 bp repeat was found with high sequence identity to the N-terminus of mutS. The identified integrase was located adjacent to, but in the opposite direction of mutS. Within the GI, other mobile element-related genes were found (i.e., integrases, recombinases, relaxase, parB and repB), several of them inactivated by frame-shifts. The GI is absent in three other P. vagans strains (including the type strain, LMG 24199 T ) but is present in a few other Pantoea strains [15].
Biosynthesis of the antibiotic dapdiamide E

P. vagans C9-1 was reported to produce a second antibiotic, originally called herbicolin I [17] but later renamed dapdiamide E [27]. The chemical structure was described recently for P. agglomerans 48b/90, with the indication that P. vagans C9-1 also produces this compound [28], but the biosynthetic genes of this antibiotic were not identified. Dapdiamide E-deficient mutants were obtained using plasposon mutagenesis [29]. Plasmid rescue and sequencing of the insertion site in mutant CIR624 indicated that the insertion took place in a hypothetical gene encoded on pPag2, in a cluster containing several predicted biosynthetic and hypothetical genes. Analysis of the insertions present in the proposed dapdiamide E biosynthetic cluster is underway [27].
Exopolysaccharide production
Biosynthetic genes for an O-antigen-type exopolysaccharide were found in the genome of P. vagans C9-1. The encoded proteins have between 78.5% and 93.4% protein sequence identity with the stewartan gene cluster of P. stewartii subsp. stewartii and share similar operon organization [30]. The biosynthetic clusters for the exopolysaccharide amylovoran of E. amylovora CFBP 1430 and E. pyrifoliae DSM 12163 T that is involved in pathogenicity, are less related and deviate in the specific glycosyltransferases [6,7,31].
Production of the plant hormone indole acetic acid
The plant growth regulator indole-3-acetic acid (IAA) is produced by some strains of P. agglomerans [32,33]. We confirmed that P. vagans C9-1 produces IAA in liquid cultures. Based on the standard curve published by Lindow et al. [33], the concentrations produced by P. vagans C9-1 vary roughly between 5 and 10 mg l⁻¹, in the same range as IAA produced by the P. agglomerans strains tested in their study. On the chromosome of P. vagans C9-1, a gene encoding the indole-3-pyruvate (IPyA) decarboxylase IpdC was identified. This protein is the key enzyme of the IPyA pathway in the biosynthesis of IAA [34] (Fig. 2). Another IAA biosynthetic pathway, the indole-3-acetamide pathway that involves tryptophan monooxygenase (Fig. 2), encoded on the plasmid pPATH from phytopathogenic P. agglomerans strains [35,36], is absent in the genome of P. vagans C9-1. Additionally, plasmid pPag2 in P. vagans C9-1 carries a gene cluster for biosynthesis of IAA from plant-derived aldoximes [34]. The genes for an aldoxime dehydratase, amidase and nitrile hydratase were found in a single gene cluster, similar to the arrangement in Pseudomonas syringae pv. syringae [37].
Siderophore production
Siderophore-mediated iron acquisition is essential to overcome conditions of iron limitation encountered on floral tissues [38]. The pathogen E. amylovora utilizes desferrioxamine for iron acquisition and this siderophore is implicated in pathogenicity on rosaceous plants [39]. P. vagans C9-1 may compete with the pathogen for the iron on flowers by the biosynthesis of high-affinity siderophores.
P. vagans C9-1 produces both enterobactin and hydroxamate siderophores [40]. On the chromosome of P. vagans C9-1, a complete gene cluster for the production and export of an enterobactin-like siderophore is present. This feature is shared with many members of the Enterobacteriaceae [40], but a similar gene cluster is absent in the genomes of the related P. ananatis LMG 20103 [41] or all Erwinia species [6,7].
Of the hydroxamate siderophores, desferrioxamine E is the major product of P. vagans C9-1, while smaller amounts of desferrioxamines D2 and B are produced [40,42]. On plasmid pPag3, a gene cluster (dfoJACS) was found [43] that is related to the dfoJACS cluster of P. ananatis LMG 20103 [41], with the difference that the latter strain also encodes the TonB-dependent receptor FoxA directly downstream of dfoS. Lower sequence identities exist to the dfoJAC clusters of Erwinia spp. [6,7] and, to a lesser extent, to the desABCD biosynthetic gene cluster for desferrioxamine E in Streptomyces coelicolor M145 [44]. In P. vagans C9-1, the ferrioxamine receptor gene foxA is located on the chromosome.
A striking difference is the number of TonB-dependent siderophore receptors present in P. vagans C9-1 compared with the low number of these receptors in Erwinia spp. The genome of P. vagans C9-1 encodes 10 TonB-dependent receptors [18], whereas the Erwinia spp. encode only four [6,7]. We postulate that P. vagans C9-1 may therefore be a more effective competitor for iron than Erwinia spp.
Type VI secretion systems
Two type VI secretion system (T6SS) gene clusters were identified in P. vagans C9-1. The T6SS cluster 1 has a gene organization similar to those in the closely related enterobacteria E. amylovora CFBP 1430 [6] and S. proteamaculans 568 (Fig. 3), but differs in the number of putative effector genes (i.e., hcp and vgrG). A third T6SS cluster, as identified in the genomes of the related species P. ananatis LMG 20103 [41] and E. amylovora CFBP 1430 [6], was not observed.
As several T6SSs have been identified in pathogenic species (e.g., Pseudomonas aeruginosa, Burkholderia spp.), it has been hypothesized that they may play a role in host interactions and virulence [45,46]. However, T6SSs are present in the genome sequences of many non-pathogenic bacteria, and studies on the role of T6SSs as host-targeting virulence factors have yielded inconsistent data [47,48]. A different role for T6SSs is being considered, namely involvement in inter-bacterial interactions [48]. As such, the T6SSs may impact biocontrol via direct contact between P. vagans C9-1 and E. amylovora. This role is currently under investigation (Kamber, Smits and Duffy, unpublished).
Resistance factors
Plasmid pPag2 contains a functional tellurite-resistance operon similar to that of E. coli O157:H7 isolates [49]. P. vagans C9-1 is resistant to tellurite at 50 mg ml⁻¹, and single-colony mutants appear at higher concentrations. Plasmid pPag3 encodes a β-lactamase Bla and its cognate regulator AmpR [43], conferring β-lactam resistance on P. vagans C9-1. The commercialized product BlightBan C9-1S contains a spontaneous rifampicin- and streptomycin-resistant variant of the sequenced wild-type strain. Streptomycin resistance in this strain was identified to be conferred by a single A→G point mutation in the rpsL gene, yielding the known mutation K43R that leads to high resistance against the antibiotic. This is analogous to the position and extent of resistance found in streptomycin-resistant strains of E. amylovora [9].
Pathogenicity of P. vagans isolates on Eucalyptus species
The type strain of P. vagans and other initial strains of the species were isolated from Eucalyptus leaves showing symptoms of bacterial blight and dieback [14], but the species description did not state whether the strains are the causal agent of the disease. We tested pathogenicity of this species on seedlings of three Eucalyptus species. Symptoms of bacterial blight were not observed on Eucalyptus inoculated with P. vagans strains C9-1, LMG 24199T or LMG 24196; inoculated plants looked similar to water-treated controls. Necrotic lesions were observed on leaves of each Eucalyptus species inoculated with high doses of P. vagans strain LMG 24195 within 2 weeks after inoculation, but symptoms were not observed when strain LMG 24195 was introduced into petioles or leaves at a lower dose (1 × 10⁶ CFU ml⁻¹). Some strains of P. vagans, including C9-1, did not cause any disease symptoms and one strain caused minor symptoms, so pathogenicity towards plants among all strains of P. vagans should not be assumed based on isolation of the type strain from diseased tissues.
Comparative genomics to the related plant pathogen P. ananatis LMG 20103
Recently, the genome of the closely related plant pathogen P. ananatis LMG 20103 was published [41,51]. The most obvious difference between genomes of P. vagans C9-1 and P. ananatis LMG 20103 is the absence of large plasmids in the latter strain, while the total genome size is comparable. The order on the chromosome is highly syntenic (Fig. 4), although many small, collinear blocks identified on the three plasmids of P. vagans C9-1 are scattered over the chromosome of P. ananatis LMG 20103. Notably, only one small collinear block was identified on plasmid pPag2, corresponding to the genes encoding sorbitol metabolism in P. ananatis LMG 20103 [15,52].
Biocontrol activity and genomics reveal main mechanisms
The genome sequence of P. vagans strain C9-1 [18] has allowed the identification of several factors that might be involved in biocontrol efficacy. As with most biocontrol agents, preemptive exclusion and nutrient competition with the pathogen for necessary resources are important mechanisms of action [10,19]. The substrates utilized by P. vagans C9-1 include the known nectar sugars (e.g., glucose, sucrose and fructose) [53] and sorbitol, the major transport sugar in Rosaceae host plants [54], indicating that P. vagans C9-1 likely competes with E. amylovora for these substrates at colonization and infection sites [20]. An additional trait of P. vagans C9-1 is its production of a variety of antimicrobial metabolites that contribute to biocontrol efficacy, including siderophores and the antibiotics dapdiamide E and pantocin A [17,24,25]. Understanding the genetics behind biocontrol antibiotic biosynthesis may enable development of application and/or formulation strategies that optimize expression and preclude inhibition/degradation by co-inoculated or environmental bacteria [55,56].
Plasmid pPag2: a plasmid contributing to biocontrol
The biocontrol features dapdiamide E biosynthesis and sorbitol metabolism are located on the P. vagans C9-1 plasmid pPag2. This plasmid is not present in the other P. vagans strains and was not detected in several P. agglomerans strains [43]. The plasmid itself contains several remnants of IS elements and has a mosaic structure. This plasmid could thus constitute a plasmid specific to the biocontrol strain. The other two plasmids of P. vagans C9-1 contain general metabolic features of P. vagans (e.g., maltose and sucrose utilization) [14] and those genes could be detected in other members of the species by PCR [43]. These plasmids can thus be regarded as indigenous to the species.
Environmental fitness
We have identified several factors involved in environmental fitness, such as pigments, siderophores, acyl-homoserine lactones, IAA biosynthesis from aldoximes, tellurite resistance and exopolysaccharides [18,43]. The pigment zeaxanthin diglucoside is produced by P. vagans C9-1 [57], giving it potential protection against the reactive oxygen species produced during epiphytic growth on sunlight-exposed plant surfaces [58]. The siderophores are involved in uptake of iron under the conditions of iron limitation occurring on floral surfaces [38,59], but the presence of numerous TonB receptors also allows competition with other organisms that have similar systems [60]. The biosynthesis of the high-affinity iron siderophore enterobactin and the ability to utilize an array of other siderophores not produced by the organism would give P. vagans C9-1 a strong competitive advantage over Erwinia spp., as the latter group is able only to synthesize the siderophore desferrioxamine and contains only a limited number of siderophore uptake systems [6,7]. P. vagans C9-1 produces acyl-homoserine lactones [43,61], a potential quorum-sensing regulation system that might be coupled to environmental fitness factors. The production of exopolysaccharides [30] can protect bacteria against environmental stresses such as desiccation.
Potential risk factors evaluated

P. vagans C9-1 and other P. vagans strains were evaluated for their pathogenicity against Eucalyptus plants. Typical symptoms of bacterial blight were not observed on several Eucalyptus species; we therefore conclude that an association of P. vagans as the causal agent of bacterial blight of Eucalyptus [14] is not confirmed. In addition, P. vagans C9-1 is not pathogenic to pome fruit flowers and does not cause a hypersensitive response when infiltrated into tobacco leaves (V.O. Stockwell, personal communication). Additional virulence factors such as the type III secretion systems (T3SS) and effectors of P. agglomerans pathovars gypsophilae or betae [36] were not identified on the chromosome. Essentially this confirms that P. vagans C9-1 is not a plant pathogen.
The close phylogenetic position of P. vagans to P. agglomerans creates problems when identifying this species using some commercial systems [14,15,16,62]. The irregular, and often inaccurate, identification of isolates from clinical samples as "P. agglomerans" [15,16] has resulted in both species being classified as BL-2 pathogens, even though clinical assumptions are never supported by attempts to fulfill Koch's postulates, and no evidence has ever been presented demonstrating toxicity, pathogenicity or allergenicity for either species [15]. We confirm that no known or putative genes involved in pathogenicity to animals, humans, plants or other organisms (e.g., T3SS, toxins) are found in the genome, and there is no evidence that P. vagans C9-1 interacts with or persists in mammals (http://www.epa.gov/opp00001/biopesticides/ingredients/tech_docs/brad_006470.pdf). Several environmental organisms such as E. amylovora or S. proteamaculans have gene clusters orthologous to the T6SSs of P. vagans C9-1, and their proposed role is in inter-bacterial interactions [48]. The presence of a non-transferable point mutation conferring streptomycin resistance in the commercialized strain C9-1S allows combined treatment with the antibiotic without suppressing the growth of populations of P. vagans C9-1 on the treated plants [63].
Strains, media and growth conditions
Pantoea vagans C9-1 was isolated from stem tissue of a Malus × domestica 'Jonathan' tree in Michigan, USA [17] and has been evaluated over the past 20 years for biological control of fire blight [8]. A spontaneous streptomycin- and rifampicin-resistant mutant was approved and registered by the US EPA in 2007 under the trade name BlightBan C9-1S (NuFarm Americas, Burr Ridge, IL, USA) in the USA and Canada. Additional P. vagans strains (LMG 24195, LMG 24196 and LMG 24199T) were obtained from T.A. Coutinho (FABI, University of Pretoria, South Africa). Bacteria were grown on LB medium [64] at 28°C. Catabolism assays were done in M9 minimal medium [64] supplemented with carbon sources (e.g., glucose, sorbitol, maltose or sucrose).
Metabolic profiling
The metabolic profile of P. vagans C9-1 was assayed using Biolog GN2 and AN plates. Pre-cultures were grown in M9 medium [64] with 5 mM glucose and allowed to reach late stationary phase to ensure complete substrate utilization. The cells were washed once and re-suspended in fresh M9 medium. Attenuance at 600 nm (A600) was set to 0.15, and 100 µl of inoculum was added per well. The plates were visually interpreted after incubation for 1, 2 and 5 days at 28°C. Additional data on the metabolism of nutrient sources by P. vagans C9-1 were generated in replicate plates incubated at 22°C by the Biolog Phenotype Microarray Services (Hayward, CA, USA). Further information and comments on nutrient sources are provided in the supporting information (Text S1).
Indole acetic acid biosynthesis assays
For production of indole acetic acid (IAA) by P. vagans strains, the method of Lindow et al. [33] was followed. Briefly, three replicate cultures per strain were grown in KB broth amended with 0.2 mg ml⁻¹ L-tryptophan for 48 h at 27°C. Cultures were harvested by centrifugation (5 min, 14,000 rpm) and 1 ml of culture supernatant was added to 2 ml of reagent (2% 0.5 M FeCl3 in 35% perchloric acid) [65]. The samples were incubated at room temperature for 30 min, and OD530 was measured. Non-inoculated broth plus reagent was used as the reference. As controls for IAA production, cultures of a known IAA producer (P. agglomerans strain 299R) and known non-producers (Pseudomonas fluorescens A506 and Erwinia amylovora Ea153) were included. This experiment was conducted twice with similar results.
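The colorimetric readings above are converted to concentrations via a standard curve; the sketch below shows a plausible way to do that interpolation, with calibration points and OD values that are entirely invented rather than taken from Lindow et al. [33].

```python
# Toy conversion of OD530 readings to IAA concentration via a linear
# standard curve (Salkowski-type assay); all numbers are placeholders.
import numpy as np

# Hypothetical calibration: OD530 measured for known IAA standards (mg/l)
std_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0])
std_od = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

slope, intercept = np.polyfit(std_conc, std_od, 1)

def od_to_iaa(od530):
    """Invert the linear calibration to recover concentration in mg/l."""
    return (od530 - intercept) / slope

for od in (0.25, 0.38):
    print(f"OD530 {od:.2f} -> ~{od_to_iaa(od):.1f} mg/l IAA")
```

With these made-up calibration points the two readings fall roughly in the 5-10 mg l⁻¹ range reported above.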
Pathogenicity tests on Eucalyptus
P. vagans strains C9-1, LMG 24199T, LMG 24195 and LMG 24196 were cultured for 2 days on LB agar at 27°C. Cells were removed from the surface of the agar and suspended in sterile distilled water; the cell concentration was adjusted with a spectrophotometer. Eucalyptus grandis, E. gunnii and E. nitens plants were propagated from seeds kindly provided by Windmill Outback Nursery, VA, USA and maintained in a greenhouse. Leaves and petioles of five replicate 3-month-old plants were inoculated with bacterial suspensions (1 × 10⁶ and 1 × 10⁸ CFU ml⁻¹) by the methods of Coutinho et al. [66] and covered with plastic bags to maintain humid conditions. Additional plants were inoculated by cutting leaves transversely with scissors dipped in a 1 × 10⁹ CFU ml⁻¹ suspension. Control plants for each method were treated with sterile water. Plants were examined for symptoms of bacterial blight (i.e., water-soaking, necrosis, wilt and/or leaf abscission) periodically over 3 weeks. The experiment was repeated three times with similar results.
Genome annotation, comparative genomics and metabolic reconstruction
Genes were predicted using a combined strategy [67] based on the CDS prediction programs Glimmer [68] and Critica [69]. Subsequently, the potential function of each predicted gene was automatically assigned using the GenDB annotation pipeline [70]. The resulting genome annotation was manually curated, and metabolic pathways were identified using the KEGG pathways tool [71] in GenDB. Transport proteins were classified according to the nomenclature in the Transporter Classification Database [50].
Routine sequence manipulations were done using the programs of the Lasergene package (DNASTAR, Madison, WI, USA). Whole-genome comparisons were done using the progressive alignment option of the Mauve comparison software (version 2.0) [72].
The genome sequence of P. vagans C9-1 was compared to those of P. ananatis LMG 20103 and E. amylovora CFBP 1430 to identify the set of common genes composing the core genome for this genus and the set of genes unique to each species, referred to as singletons. For this purpose, an "all-against-all" comparison of the genes was accomplished using the BLAST alignment tool [73]. The genes were aligned based on the protein sequence (BLASTP) with an initial e-value cut-off of 1E-5 using the BLOSUM62 scoring matrix. Genes were considered orthologous when a reciprocal best BLAST hit was found between two genes and when both BLAST hits were based on alignments exceeding 70% sequence identity spanning at least 70% of the query gene length [74]. The Pantoea genus core genome was calculated as the set of genes of a reference strain for which an orthologous gene could be found in each of the compared genomes. In contrast, genes of one strain were considered singleton genes when they had no BLAST hits to any of the other genomes that satisfied the given criteria [74].
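As a rough illustration of the reciprocal-best-hit criterion applied above, the Python sketch below screens hit tables for ≥70% identity over ≥70% of the query length and keeps mutual best hits; the gene IDs and alignment values are fabricated stand-ins for parsed BLASTP output.

```python
# Reciprocal-best-hit orthology sketch with identity and coverage thresholds.

def best_hits(hits, min_ident=70.0, min_cov=0.70):
    """hits: list of (query, subject, pct_identity, aln_len, query_len)."""
    best = {}
    for q, s, ident, aln_len, qlen in hits:
        if ident >= min_ident and aln_len / qlen >= min_cov:
            if q not in best or ident > best[q][1]:
                best[q] = (s, ident)
    return {q: s for q, (s, _) in best.items()}

def reciprocal_orthologs(a_vs_b, b_vs_a):
    ab, ba = best_hits(a_vs_b), best_hits(b_vs_a)
    return {(a, b) for a, b in ab.items() if ba.get(b) == a}

# Toy hit tables for two genomes
a_vs_b = [("geneA1", "geneB7", 85.0, 300, 320)]
b_vs_a = [("geneB7", "geneA1", 85.0, 300, 310)]
print(reciprocal_orthologs(a_vs_b, b_vs_a))  # {('geneA1', 'geneB7')}
```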
Supporting Information
Table S1 Carbon sources for P. vagans C9-1, as determined with Biolog plates GN2, AN or the Biolog PM system plates PM1 and PM2A. (DOC) Text S1 Further information and comments on nutritional sources and substrates. (DOC)
Mitochondrial Cysteine Synthase Complex Regulates O-Acetylserine Biosynthesis in Plants*
Background: Cysteine biosynthesis is the exclusive entry point for reduced sulfur in cellular metabolism. Results: The mitochondrial cysteine synthase complex (mCSC) regulates serine acetyltransferase activity in response to cysteine availability. Conclusion: The mCSC is a sensor of sulfur availability and regulates cysteine synthesis. Significance: The integration of cysteine in the regulatory model of the CSC establishes a new sensory function for the mCSC. Cysteine synthesis is catalyzed by serine acetyltransferase (SAT) and O-acetylserine (thiol) lyase (OAS-TL) in the cytosol, plastids, and mitochondria of plants. Biochemical analyses of recombinant plant SAT and OAS-TL indicate that the reversible association of the proteins in the cysteine synthase complex (CSC) controls cellular sulfur homeostasis. However, the relevance of CSC formation in each compartment for flux control of cysteine synthesis remains controversial. Here, we demonstrate the interaction between mitochondrial SAT3 and OAS-TL C in planta by FRET and establish the role of the mitochondrial CSC in the regulation of cysteine synthesis. NMR spectroscopy of isolated mitochondria from WT, serat2;2, and oastl-C plants showed the SAT-dependent export of OAS. The presence of cysteine resulted in reduced OAS export in mitochondria of oastl-C mutants but not in WT mitochondria. This is in agreement with the stronger in vitro feedback inhibition of free SAT by cysteine compared with CSC-bound SAT and explains the high OAS export rate of WT mitochondria in the presence of cysteine. The predominant role of mitochondrial OAS synthesis was validated in planta by feeding [3H]serine to the WT and loss-of-function mutants for OAS-TLs in the cytosol, plastids, and mitochondria. On the basis of these results, we propose a new model in which the mitochondrial CSC acts as a sensor that regulates the level of SAT activity in response to sulfur supply and cysteine demand.
Cysteine biosynthesis is catalyzed by a two-step process in plants. In the first step, serine acetyltransferase (SAT; EC 2.3.1.30) transfers an acetyl moiety from acetyl coenzyme A to serine and forms O-acetylserine (OAS). Subsequently, OAS (thiol) lyase (OAS-TL; EC 2.5.1.47) replaces the acetyl group of OAS with sulfide and releases cysteine (1). Reverse genetic approaches and biochemical studies of Arabidopsis OAS-TL isoforms demonstrated that cysteine biosynthesis in plants is limited by OAS supply (2-4). The transcript abundance, protein level, and extractable activity of SAT and OAS-TLs are not significantly altered by sulfur limitation or genetic manipulation of the sulfur assimilation pathway. However, exposure to toxic compounds or harsh stress treatments can induce significant transcription of particular SAT and OAS-TL isoforms in Arabidopsis (5,6). This led to a model based on kinetic studies of free SATs in which SAT activity is regulated mainly at the metabolic level by cysteine feedback inhibition of SATs (7). Subsequently, a regulatory model for SAT activity based on the reversible interaction of SAT and OAS-TL in the hetero-oligomeric cysteine synthase complex (CSC) was proposed (reviewed in Refs. 8 and 9). In this model, SAT present in the CSC is activated, whereas OAS-TL is catalytically inactive and acts as a regulatory subunit for the SAT (9). As a result of OAS-TL inactivation, OAS leaves the CSC and is converted to cysteine by a large excess of free OAS-TL dimers. Upon sulfur limitation, sulfide availability limits cysteine biosynthesis, which results in an accumulation of OAS. Sulfide stabilizes the CSC, but in its absence, the increase in OAS causes the CSC to dissociate (8). The impact of OAS and sulfide on CSC formation in turn defines this complex as a sensor of sulfur availability that adjusts SAT activity to the actual sulfur status of the cell.
The CSC is present in the cytosol, plastids, and mitochondria of plant cells, but the activities and amounts of SAT and OAS-TL differ significantly between these subcellular compartments (10-12). In Arabidopsis leaves, ~90% of OAS-TL activity is provided by OAS-TL A in the cytosol and OAS-TL B in chloroplasts. The remaining activity comes from mitochondrial OAS-TL C (10,12). In contrast, ~80% of total SAT activity originates from the mitochondrial isoform SAT3 (serat2;2) (11,13). The residual SAT activity is contributed by plastid-localized SAT1 (serat2;1) and three cytosolic SATs, of which SAT5 (serat1;1) is the most abundant. T-DNA insertion mutants for each of the SAT genes are viable, demonstrating that OAS synthesis in the cytosol, plastid, or mitochondria is not individually essential (11,13). Knockdown of mitochondrial SAT3 causes significant growth retardation, suggesting a major role for the mitochondria in supplying OAS for cysteine synthesis (2). A prominent role of the mitochondrial CSC (mCSC) in regulation of SAT3 activity and thereby cellular cysteine is also indicated by the growth retardation phenotype of oastl-C mutants, whereas oastl-A and oastl-B mutants are unaffected (12). However, in vitro CSC formation has only a minor impact on SAT3 affinities for serine and acetyl coenzyme A, although CSC-bound SAT3 is less sensitive to feedback inhibition by cysteine compared with free SAT3 (14). The question arises of how the constitutively expressed and most abundant SAT of Arabidopsis is regulated in vivo. Here, we used FRET of CSC subunits in mitochondria of plants, NMR spectroscopy of isolated mitochondria, and [3H]serine labeling to unequivocally demonstrate the significance of mitochondria for cellular cysteine synthesis and to merge the two existing concepts for the metabolic regulation of SAT3 into an advanced model of CSC function.
EXPERIMENTAL PROCEDURES
Construction of Vectors-PCR and cloning of DNA fragments were performed as described (15). SAT3 and OAS-TL C cDNAs were fused with restriction and attB sites for cloning by PCR amplification using the primers shown in supplemental Fig. 1. The SAT3 cDNA was cloned into pB7WGY2 for fusion with enhanced yellow fluorescent protein (eYFP) using Gateway technology. The eYFP-SAT3 cDNA was then re-amplified by PCR and introduced into the vector pBinAr-SHMT by conventional cloning of a BamHI/SalI restriction fragment. The resulting construct (pBinAr-SHMT/YFP-SAT3) codes for eYFP fused to the N terminus of SAT, which is targeted to mitochondria by the transit peptide of serine hydroxymethyltransferase. The full-length OAS-TL C cDNA sequence, including the endogenous mitochondrial transit peptide, was cloned in pB7CWG2 using Gateway technology. pB7CWG2-OAS-TL C codes for full-length OAS-TL C fused to the N terminus of enhanced cyan fluorescent protein (eCFP). Expression of OAS-TL C-eCFP and eYFP-SAT in Escherichia coli and purification of the recombinant proteins were performed as described (15). The respective cDNAs were PCR-amplified and cloned in the pETM20 vector as described in the legend to the supplemental material.

Transient Expression in Leaves-Binary vectors for expression of eYFP in fusion with SAT (eYFP-SAT) and OAS-TL C fused with eCFP (OAS-TL C-eCFP) were transformed into Agrobacterium tumefaciens strain C58C1 and selected with the appropriate antibiotic for presence of the binary and auxiliary vectors. Leaves of 4-week-old soil-grown Nicotiana tabacum plants were infiltrated as described (16) with a 1:1 mixture of A. tumefaciens strain C58C1 harboring the respective binary vectors (A600 nm = 0.5) suspended in LB medium.
FRET-Three days after infiltration, 1-cm² leaf discs were analyzed for FRET between eYFP-SAT and OAS-TL C-eCFP using a confocal laser scanning microscope (Axiovert 200M connected to an LSM 510 Meta confocal module, Carl Zeiss Microscopy GmbH, Jena, Germany) at 512 × 512 pixel resolution. The integrity of the plasma membrane was tested by propidium iodide (0.05 mM) staining. Localization of eYFP-SAT and OAS-TL C-eCFP in mitochondria of N. tabacum cells was assessed by co-staining with 0.01 mM MitoTracker Orange (Molecular Probes). FRET emission signals were detected at 485 ± 15 nm upon excitation at 405 nm. FRET efficiency was calculated after photo-acceptor bleaching of the OAS-TL C-eCFP protein with maximum laser intensity at 514 nm as described (17). Recombinant OAS-TL C-eCFP (0.5 µM) and eYFP-SAT (0.5 µM) were tested for positive FRET with the confocal laser scanning microscope using the same settings.
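For reference, acceptor-photobleaching FRET efficiency is commonly computed from the donor-channel intensity before and after bleaching. The sketch below shows that calculation under the assumption E = (D_post − D_pre)/D_post; the intensity values are invented and the exact formula used in this study is that of ref. 17.

```python
# Toy acceptor-photobleaching FRET efficiency calculation.

def fret_efficiency(donor_pre, donor_post):
    """Donor-channel intensity before/after acceptor bleaching."""
    if donor_post <= donor_pre:
        return 0.0  # no dequenching -> no detectable FRET
    return (donor_post - donor_pre) / donor_post

print(f"bleached ROI:     E = {fret_efficiency(120.0, 150.0):.2f}")  # 0.20
print(f"non-bleached ROI: E = {fret_efficiency(118.0, 119.0):.2f}")  # ~0.01
```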
Isolation of Mitochondria-Mitochondria were isolated at 4°C from 20-30 g (fresh weight) of 14-day-old Arabidopsis seedlings using a procedure based largely on published protocols (18,19). Seedlings were ground using a mortar and pestle in a total of 600 ml of grinding medium (0.25 M sucrose, 15 mM MOPS, 0.4% (w/v) bovine serum albumin, 0.6% (w/v) polyvinylpyrrolidone-40, 1.5 mM EDTA, 100 mM ascorbate, and 10 mM dithiothreitol, pH 7.4). The filtered cell extract was separated by differential centrifugation, and the mitochondria were purified on a 0-4.4% PVP/Percoll gradient. The isolated mitochondria were washed twice with 0.3 M sucrose and 10 mM TES, pH 7.5, and then resuspended in the same buffer. All buffers were supplemented with cysteine (0.5 or 1 mM) for experiments in which mitochondria were used to examine the effect of cysteine on serine metabolism.
NMR Analysis of [3-13C]Serine Metabolism-The metabolism of labeled serine was monitored continuously under conditions of state 3 respiration using procedures similar to those described before (20,21). Coupled mitochondria from WT, serat2;2, and oastl-C seedlings (typically 1-2 mg of mitochondrial protein in 1 ml of wash buffer) were diluted in 4 ml of buffer. The mitochondrial suspension was oxygenated and stirred continuously in a 10-mm diameter NMR tube using an airlift system (22), and proton-decoupled 13C NMR spectra were recorded at 150.9 MHz on a Varian Unity Inova 600 spectrometer using a broadband probe. Twenty spectra were recorded in 15-min blocks over a period of 5 h using a 90° pulse angle, a 1.016-s acquisition time, and a 6-s relaxation delay. Low-power frequency-modulated decoupling was applied during the relaxation delay to maintain the nuclear Overhauser effect, and this was switched to higher-power Waltz decoupling during the acquisition time to remove the proton couplings. Chemical shifts are cited relative to the mannitol signal at 63.90 ppm, and the signal intensities of 13C-labeled OAS, N-acetyl[3-13C]serine (NAS), and serine were measured relative to the intensity of this peak. The protein content, respiratory coupling ratio, and outer membrane integrity of the mitochondria were measured for every replicate, and the amounts of OAS and NAS were expressed as relative peak area/mg of protein.
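To make the final normalization step explicit, the toy sketch below scales a peak area to the mannitol reference and to mitochondrial protein, in the spirit of the "relative peak area/mg of protein" measure described above; all numbers are placeholders.

```python
# Toy relative quantification of NMR peaks against the mannitol reference,
# expressed per mg of mitochondrial protein.

def relative_amount(peak_area, mannitol_area, protein_mg):
    return (peak_area / mannitol_area) / protein_mg

oas = relative_amount(peak_area=4.2, mannitol_area=10.0, protein_mg=1.5)
nas = relative_amount(peak_area=1.1, mannitol_area=10.0, protein_mg=1.5)
print(f"OAS = {oas:.2f}, NAS = {nas:.2f} (relative peak area / mg protein)")
```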
Statistical Analysis-Means from different data sets were analyzed for statistical significance with the unpaired t test. Constant variance and normal distribution of the data were checked with SigmaStat 3.0 prior to statistical analysis. The Mann-Whitney rank sum test was used to analyze samples that did not follow a normal (Gaussian) distribution.
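A minimal sketch of this decision path, assuming Shapiro–Wilk and Levene tests as stand-ins for the normality and constant-variance checks performed in SigmaStat (the original software's internal procedures are not reproduced here), could look like this:

```python
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Unpaired comparison of two data sets: use Student's t test if both samples
    look normal with comparable variance, otherwise fall back to the
    Mann-Whitney rank sum test."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    equal_var = stats.levene(a, b).pvalue > alpha
    if normal and equal_var:
        test, res = "unpaired t test", stats.ttest_ind(a, b)
    else:
        test, res = "Mann-Whitney rank sum test", stats.mannwhitneyu(a, b, alternative="two-sided")
    return test, res.pvalue

wt = [1.02, 0.95, 1.10, 0.98, 1.05]       # hypothetical WT measurements
mutant = [0.61, 0.70, 0.55, 0.66, 0.58]   # hypothetical oastl-C measurements
print(compare_two_groups(wt, mutant))
```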
Formation of the mCSC in Vivo-
The interaction of purified SAT3 and OAS-TL C has been demonstrated to occur spontaneously in vitro (14,23) in the absence of OAS. Mitochondria are the main site of OAS production, so evidence was sought for the formation of the mCSC in vivo. To this end, SAT3 and OAS-TL C were fused with eYFP (eYFP-SAT) and eCFP (OAS-TL C-eCFP), respectively. eYFP-SAT and OAS-TL C-eCFP were transiently expressed in epidermal cells of N. tabacum, and the localization of both eYFP-SAT and OAS-TL C-eCFP to the mitochondria was verified by co-staining with MitoTracker Orange (Fig. 1, A-E). The interaction of eYFP-SAT and OAS-TL C-eCFP was assessed in vivo by quantification of FRET between the donor eYFP and acceptor eCFP using the photo acceptor bleaching technique. FRET efficiency in bleached areas of tobacco cells expressing eYFP-SAT and OAS-TL C-eCFP was significantly higher than the control efficiency in non-bleached areas. As a control for the impact of acceptor bleaching on the donor, the FRET efficiency in bleached areas of tobacco cells expressing only OAS-TL C-eCFP was determined and found to be negligible (Fig. 1G), confirming the validity of the FRET signal between eYFP-SAT and OAS-TL C-eCFP. Note that eYFP was fused to the N terminus of SAT because the C terminus of SAT is responsible for the interaction with OAS-TL. Fusion of YFP to the C terminus of SAT abolished the FRET signal between SAT-eYFP and OAS-TL C-eCFP (data not shown).
Characterization of the mCSC in Vitro-To characterize the interaction of recombinant OAS-TL C-eCFP and eYFP-SAT with respect to the effectors sulfide and OAS, eYFP-SAT and OAS-TL C-eCFP were expressed in E. coli, and the purified proteins were tested for FRET efficiency. eYFP-SAT and OAS-TL C-eCFP spontaneously formed the CSC in the absence of both effectors, which is in agreement with previous studies of untagged recombinant SAT and OAS-TL (15,23). Application of OAS resulted in a significantly lower FRET efficiency, demonstrating that OAS can dissociate the recombinant CSC formed by eYFP-SAT and OAS-TL C-eCFP. Preincubation of the CSC with sulfide prevented dissociation by OAS (Fig. 1H). These results strongly indicate that eYFP-SAT interacts with OAS-TL C-eCFP via its C-terminal tail, as shown for native SAT and OAS-TLs, and explain why fusion of YFP to the C terminus of SAT abolished the FRET signal.
Mitochondrial Synthesis and Export of OAS-The metabolism of [3-13 C]serine by isolated mitochondria was followed in situ by recording 13 C NMR spectra from a dilute suspension of mitochondria maintained in state 3 respiration. Control experiments have shown that the NMR signals originate entirely from the suspending medium in these experiments, reflecting the very small fraction of the sample volume occupied by the mitochondria (24). Incubating WT mitochondria with [3-13 C]serine led to the detection of signals that could be assigned to O-acetyl[3-13 C]serine and NAS ( Fig. 2A), as confirmed by comparison with spectra of authentic standards (data not shown). NAS originates from OAS by a known spontaneous intramolecular shift of the acetyl moiety from the hydroxyl to the amino group (25). Labeled OAS and NAS were undetectable in the spectra recorded from suspensions of serat2;2 mitochondria (Fig. 2B), and time courses showed faster metabolism of serine by WT mitochondria (Fig. 2C) and negligible production of OAS by the mutant (Fig. 2D). Glucose phosphorylation was observed in all experiments, confirming state 3 respiration of the mitochondria and demonstrating that the absence of OAS accumulation in the serat2;2 experiments was not due to a failure of mitochondrial respiration (data not shown).
SAT Is under the Feedback Control of Cysteine-The metabolism of [3-13C]serine was compared in suspensions of WT and oastl-C mitochondria in the presence and absence of cysteine to test the functional significance of the interaction between SAT and OAS-TL. There was no difference in the production of labeled acetylserine (OAS + NAS) in the absence of cysteine between WT and oastl-C mitochondria (Fig. 3A), but the addition of 0.5 mM cysteine to the medium greatly reduced the accumulation of OAS and NAS in the oastl-C mitochondria (Fig. 3, B and C, and supplemental Fig. 3), showing that the interaction between SAT and OAS-TL in the CSC prevented feedback inhibition of SAT by cysteine.
Cellular OAS Synthesis Is Regulated by CSC Formation in Mitochondria-The significant export of OAS from mitochondria of WT Arabidopsis plants, together with the importance of mitochondrial SAT in the control of cysteine synthesis (2), prompted us to test whether disruption of the CSC in Arabidopsis loss-of-function mutants for OAS-TLs in different subcellular compartments affects the in vivo activity of SAT. Leaves of soil-grown WT, oastl-A, oastl-B, oastl-C, and oastl-AC plants were incubated in the light for 5 and 25 min with radioactively labeled [3H]serine. The incorporation capacity of serine was significantly decreased to ~50% of the WT capacity in leaves of oastl-C mutants after 25 min (Fig. 4). This decrease was already apparent after 5 min of [3H]serine treatment in oastl-C plants, although the change was not significant. In contrast, the short-term incorporation capacity of serine was not affected by disruption of the plastidic CSC in oastl-B plants and was only marginally affected in oastl-A plants after 25 min. The double mutant oastl-AC showed the same reduction in the incorporation capacity of serine into OAS as the oastl-C mutant, but it was already significantly affected after 5 min (Fig. 4). These results are in agreement with the minor contribution of plastidic and cytosolic SATs to the total SAT activity in leaves (11).
DISCUSSION
Formation of the CSC in Mitochondria-We have demonstrated that SAT3 and OAS-TL C can interact in plant mitochondria to form the CSC. This occurs even though SAT3 produces the bulk of the cellular OAS, which in vitro has the ability to dissociate the CSC (Fig. 1H) (2,11,13). Formation of the mCSC might be promoted by the stabilizing effect of sulfide (Fig. 1H) (8), by efficient export of OAS from mitochondria (Figs. 2 and 3), or by a combination of both. The bulk of cellular sulfide is produced in plastids by the activity of sulfite reductase, which results in a sulfide gradient between compartments of the plant cell (13,26). Furthermore, sulfide efficiently inhibits cytochrome c oxidase-dependent respiration, and it can be scavenged in mitochondria in an OAS-TL C-independent manner (27). These observations suggest that the steady-state level of sulfide is low in mitochondria, making it unlikely that sulfide alone accounts for an efficient stabilization of the mCSC.
Reverse genetic approaches indicated significant export of OAS from mitochondria to support synthesis of cysteine in the cytosol and plastids (2,11,12). In both compartments, sulfide and OAS encounter a high excess of the free catalytically active OAS-TL dimer (12) that is necessary for full conversion of OAS to cysteine (28,29). In contrast to plastids, which have a >300-fold excess of OAS-TL over SAT activity, the excess of free OAS-TL in mitochondria is ~4-fold (10,12,29). The low sulfide level and the ratio of OAS-TL to SAT activity in mitochondria indicate that the major function of OAS-TL C is regulation of SAT in the CSC rather than conversion of OAS to cysteine. In agreement with this proposal, the NMR studies of isolated mitochondria provide direct evidence for the export of OAS from mitochondria. Neutral amino acids have been shown to permeate across the mitochondrial membranes by carrier- and/or channel-mediated processes (30,31). To date, however, the identities of the mitochondrial OAS exporter and the transporters for serine and most other amino acids remain unknown in plants (32).
Cysteine is essential in mitochondria to meet demands for efficient biosynthesis of mitochondrially encoded proteins and iron-sulfur clusters. A high demand for cysteine could be the cause of the high mitochondrial OAS synthesis rate if cysteine is predominantly formed in the mitochondria by OAS-TL C. In contrast to this idea, the abundance of mitochondrially encoded proteins and iron-sulfur cluster-containing proteins is found to be unaltered in the oastl-C mutant, indicating sufficient cysteine transport from the cytosol to the mitochondria to meet the cysteine demand in oastl-C mitochondria (27).
Predominant synthesis of OAS in mitochondria implies coregulation with plastidic sulfide production for efficient cysteine synthesis. Transport of OAS from the mitochondria to the cytosol allows mitochondrial OAS to control transcription of the nuclear encoded adenosine-phosphosulfate reductase, which is the enzyme with the greatest control over sulfide production in plastids (Refs. 33 and 34; reviewed in Ref. 35).
Regulation of SAT Activity by CSC Formation-Formation of the CSC in mitochondria provides the molecular basis for regulation of the major SAT activity in Arabidopsis in response to OAS and sulfide supply (14,23). Recently, in vitro studies of a soybean CSC and the mCSC of Arabidopsis indicated that modulation of SAT cysteine feedback inhibition by CSC formation is part of this regulatory circle (14,36). Remarkably, the cysteine feedback sensitivity of SAT5 and SAT of E. coli is not controlled by formation of the cytosolic CSC from Arabidopsis and the bacterial CSC, respectively (14). Here, we have provided evidence for the CSC-dependent regulation of the feedback inhibition of SAT3 by cysteine with in situ NMR studies of isolated mitochondria. Whether the cysteine-dependent regulation of the mCSC also applies to cytosolic and plastidic CSCs in Arabidopsis is questionable because cytosolic, plastidic, and mitochondrial SATs are subject to different cysteine feedback inhibition sensitivities (37). The 50% decrease in the incorporation capacity of serine into OAS in oastl-C and oastl-AC is in agreement with the expected predominant role of the mCSC in regulation of total SAT activity (2,11,13) and provides a molecular explanation for the observed growth phenotype of the oastl-C mutant (12). The decrease in the total serine incorporation capacity in the oastl-A mutant after 25 min points to a role for the cytosolic CSC in regulation of cellular SAT activity, although to a minor extent compared with the mCSC. Modulation of the cysteine feedback sensitivity of SAT3 by CSC formation provides an elegant explanation for the so far unexplained biochemical phenotype of the sulfite reductase knockdown mutant (sir1-1). The leaf OAS level only doubled in sir1-1, even though the sulfate incorporation rate was 18-fold lower than in the WT (23), demonstrating a significant downregulation of the in vivo SAT activity. Nevertheless, total SAT activity and transcript levels of all SATs were unaffected in sir1-1. Moreover, the cysteine steady-state level in sir1-1 was unchanged or even higher than in the WT (26). In light of the results presented here, it appears that dissociation of the mCSC by the doubled OAS level in the presence of cysteine would efficiently inhibit SAT3 and thereby adjust mitochondrial OAS synthesis to decreased sulfide production in sir1-1 plastids, without the need to reduce SAT3 expression or protein level.
The incorporation of cysteine into the regulatory model of the CSC establishes a new sensory function for the mCSC. OAS and sulfide are primary readouts for the sulfur supply of the cell. In contrast, cysteine is widely accepted to serve as a signal for the sulfur demand of the cell (1). In the updated model for regulation of SAT activity by CSC formation proposed here, sensing of sulfur demand via cysteine allows specification of the consequences of CSC dissociation. The increase in OAS that leads to dissociation of the CSC could occur for two reasons: (i) as a result of sulfide limitation (sulfur starvation response) and (ii) to meet a high demand for cysteine synthesis (e.g. upon oxidative stress). In the first scenario, SAT activity will be shut down as a result of the maintained cysteine levels (sir1-1), whereas in the second scenario, the cysteine level will drop due to increased cysteine consumption. The lower cysteine level will allow sufficient SAT activity to ensure an adequate flux from serine to cysteine, even when the SAT is not fully incorporated into the CSC (Fig. 5).
In summary, we have provided evidence for the predominant role of the mCSC in regulating SAT3 activity and consequently cysteine production in leaves of Arabidopsis. This regulation is based on the OAS- and sulfide-dependent dissociation state of the mCSC and on the availability of cysteine that regulates free SAT3 by feedback inhibition. These results are integrated into an updated model for the regulatory function of the CSC.

Figure 5 legend: A, under normal sulfur supply, sulfate is transported from the extracellular space via the cytosol into the plastids (green oval), where it is reduced to sulfide. Sulfide can leave the plastids to serve in other subcellular compartments as a substrate for cysteine synthesis by free OAS-TL dimers (orange spheres). The bulk of OAS is synthesized by CSC-associated SAT3 (dark blue ellipses). OAS leaves the CSC because complex-associated OAS-TL dimers (yellow spheres) are catalytically inactive. B, limitation of sulfate results in a decrease in sulfide and an increase in OAS concentrations, resulting in dissociation of the CSC. SAT activity decreases upon dissociation of the CSC because free SAT3 hexamers (light blue ellipses) are sensitive to inhibition by cysteine.
|
2018-04-03T01:57:35.259Z
|
2012-06-22T00:00:00.000
|
{
"year": 2012,
"sha1": "4d40a4e2844b3485b6e4cdd04e8ab3a9f1bef84c",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/287/33/27941.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "05ab83f565e0a25c5aee60d1d2a2c3d5a1d3918e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
85447121
|
pes2o/s2orc
|
v3-fos-license
|
Stress and perceived stigma among parents of children with epilepsy
Purpose The present study aimed at understanding the stress and perceived stigma among parents of children with epilepsy seeking treatment at a tertiary referral center for neurology in South India. Materials and methods Parents of sixty children suffering from epilepsy in the age group of 4–15 years were interviewed to explore parental stress and perceived stigma. They were recruited consecutively over a period of 6 months in 2015. The tools administered were the Childhood Illness-related Parenting Stress Inventory (Manford in J Neurol 264(8):1811–24, 2017) and the Parent Stigma Scale (Baca et al. in Value Health 13(6):778–786, 2010). Results The mean age of parents was 37.2 years; the majority of the parents who brought their child to the hospital were male (71.7%), educated up to the secondary/intermediate level (36%), and from lower socio-economic status. The mean age of the children with epilepsy was 8.4 years, with the majority being male (66.7%), affected with chronic seizures (58.3%), with the most common seizure type being generalized seizures (50%), and with a co-morbid diagnosis of cerebral palsy (26.7%). A significant number of parents reported difficulty in communicating with the medical team (58.3%) and with significant others (51.7%) about their child's seizures, and difficulty in making decisions related to their child's medical care (43.3%), which strained their financial resources and created difficulty in adequate role functioning. Findings indicated that most of the parents of children with chronic seizures perceived the reactions of others to be negative (53.3%) and would limit family social interaction, which resulted in emotional reactions in the form of anger, guilt, fear, anxiety, and depression. Conclusion Parents are important figures in the process by which children with epilepsy come to acknowledge themselves as being different from other children. Parents often feared divulging their child's epilepsy to their friends and relatives because they experienced a sense of shame, self-blame, and rejection, which also increased their stress.
Introduction
Epilepsy is a common neurological disorder of childhood which has complex ramifications. Defining epilepsy can be quite problematic, as it is characterized by seizures, while epilepsy-like events such as febrile seizures and drug-induced seizures must be distinguished from it [1]. Besides seizures, children with epilepsy often have other co-existing health conditions that can significantly affect the child's physical health as well as psychological and social well-being.
Parental stress can be defined as the psychological and physiological reactions of parents as they attempt to meet the challenges of caring for their sick child. Raising a child with epilepsy often involves a state of uncertainty, apprehension, and the need for continued surveillance. Parents need to learn to cope with special diets, medication, schooling challenges, repeated hospitalizations, behavioral problems, and much more [2]. The diagnosis of epilepsy in a child brought with it a series of consequences for the family, and most parents were affected by it: the "loss of a perfect child" and the realization that the child might always be different from other children because of the illness [3].
Perceived stigma may have two different components: the shame associated with having epilepsy, based on a sense of not being able to control the child's seizures, and the fear of encountering enacted stigma, which may cause a parent to make efforts to hide his or her child's health condition [4]. A negative attitude of the general public towards a person with epilepsy led to a belief that epilepsy was a disease affecting biological, cognitive, emotional, and social ability, resulting in a person with epilepsy being treated differently by society even though their seizures are well controlled [5]. Structural causes like poverty, unemployment, homelessness, and violence may act as risk factors which further aggravate parental stress and the stigma related to epilepsy [6,7].
Objective of the study
We aimed to understand the stress and perceived stigma among parents of children with epilepsy and to find out the association between parental stress and perceived stigma.
Material and methods
A cross-sectional descriptive study was conducted in the outpatient consultation of the neurology department of a tertiary referral center in South India. Parents of 60 children who met the inclusion and exclusion criteria were recruited through convenience sampling. Participants in the study were 18 years of age or older and had a child in the age group of 4-15 years affected with generalized or partial seizures. Parents of children with a co-morbid diagnosis of ADHD, autism, intellectual developmental disorder/mental retardation, or cerebral palsy were also included. Children who had been diagnosed with non-epileptic seizures, febrile seizures, neurodegenerative diseases of infancy and childhood, or any other medical or psychiatric illness were excluded.
Measures
Socio-demographic profiles of the child and parents were assessed using a self-designed proforma in the form of a semi-structured interview schedule. It consisted of background information about the children and parents and the clinical profile of the children. Stress was assessed using the Childhood Illness-related Parenting Stress Inventory [5]. It consists of four domains: communication, emotional functioning, medical care, and role functioning. The total score comprises the sum of the scores for each of the four domains. The Parent Stigma Scale [6] was used to assess stigma. It captures parental perceptions of how others form an opinion of and view the child because of epilepsy. It measures confidence in seizure management, worry, mood, and family life/leisure. Parents were asked to respond on a 5-point scale. A higher score reflects greater perceptions of stigma associated with their child having epilepsy, and vice versa.
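For illustration, questionnaire data of this kind can be scored by summing item responses within each domain and then summing the domain scores; the sketch below uses invented item counts and responses and does not reproduce the actual item structure of either instrument.

```python
# Hypothetical responses of one parent; item counts per domain are assumed for
# illustration and do not reproduce the published instruments.
stress_responses = {
    "communication":         [3, 4, 2, 3],
    "emotional_functioning": [4, 4, 3, 5],
    "medical_care":          [2, 3, 3, 4],
    "role_functioning":      [4, 5, 3, 4],
}
stigma_responses = [4, 3, 4, 5, 3]  # 5-point Likert items; higher = more perceived stigma

domain_scores = {domain: sum(items) for domain, items in stress_responses.items()}
total_stress = sum(domain_scores.values())
stigma_score = sum(stigma_responses)

print("Domain scores:", domain_scores)
print("Total parenting stress:", total_stress)
print("Perceived stigma score:", stigma_score)
```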
Ethical approval to conduct the study was taken from an institutional ethics committee. A written informed consent was taken from all parents prior to participating in the study.
Procedure
Parents who came along with a child with epilepsy for outpatient consultation in the neurology department of a tertiary referral center in South India from July 2015 to December 2015 were recruited. The children were diagnosed with epilepsy by two independent neurologists and had been coming regularly for follow-up since the first year. Parents who fit the inclusion and exclusion criteria were contacted, and the nature of the study, confidentiality, and their right to withdraw were explained to them. The parents were divided into two groups, i.e., parents of children with epilepsy with a co-morbid condition and parents of children with epilepsy without a co-morbid condition. The parents' written consent was taken before they participated in the study. For parents who were not literate, the researcher read out the questions and marked the answers.
The researcher then spent some time with the child. Initial rapport had to be built with the child by engaging the child in coloring tasks or by giving a puzzle book to solve so that parents can be interviewed. Appropriate psycho-social intervention was provided post-assessment.
Data analysis
Statistical analysis was carried out using R software. The data from the questionnaires were analyzed using descriptive statistics (frequency, percentage, mean, and standard deviation), and non-parametric tests such as the Mann-Whitney U test were applied.
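As an illustration of this workflow (the study itself used R; the scores below are synthetic), the group-wise descriptive statistics and the Mann-Whitney U comparison could be computed as follows:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Synthetic total stress scores for the two parent groups.
with_comorbidity = np.array([78, 85, 90, 73, 88, 95, 81, 79])
without_comorbidity = np.array([62, 70, 58, 66, 71, 64, 69, 60])

for name, scores in [("with co-morbidity", with_comorbidity),
                     ("without co-morbidity", without_comorbidity)]:
    print(f"{name}: mean = {scores.mean():.1f}, SD = {scores.std(ddof=1):.1f}")

u_stat, p_value = mannwhitneyu(with_comorbidity, without_comorbidity, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.3f}")
```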
Socio-demographic profile of children with epilepsy
The age range of children was 6 to 10 years with a mean age of 8.4 years. The majority of children (66.7%) affected with epilepsy were male with only 33.3% of females affected with epilepsy. A large number of children (33.3%) had not yet started going to school or dropped out after the onset of seizures (Table 1).
Socio-demographic profile of parents
Sixty parents were recruited for the study. The age range of the parents was 25 to 35 years with a mean age of 37.2 years. The majority of parents were male (71.7%), educated up to the secondary/intermediate level (36%), and doing semi-skilled jobs (43.3%). Most of the parents (66.7%) were from lower socio-economic status, with only 3.3% from higher socio-economic status. Over half of the children (55.0%) came from nuclear families. For most of the children (85.0%), the mother was the primary caregiver. There was no consanguinity among the majority of parents (78.3%), with only 21.7% reporting consanguinity, predominantly third-degree relatives (Table 2).
Clinical profile of children
Clinical details of the children were assessed by systematically reviewing case files and treatment details. The results indicated that 58.3% of the children had chronic seizures whereas 41.7% had new-onset seizures. Although the chronic sample included children with seizures that had begun as early as birth, the new-onset sample limited the lowest age of onset to 4 years. The findings also show that half of the children (50%) had generalized seizures, with 31.7% having partial seizures and 18.3% having a combination of both generalized and partial seizures. In terms of frequency of seizure episodes, a significant number of children (45%) had episodes of seizures less than ten times a day. Most of the children had seizures lasting less than 5 s (63.3%). For the majority of children (73.3%), seizures had not been controlled. In terms of co-morbid conditions, 26.7% of the children had cerebral palsy, followed by 16.7% with intellectual developmental delay, and only 3.3% with autism and attention deficit hyperactivity disorder. With regard to other problems associated with seizures, many children (31.7%) had memory problems, followed by 23.3% having difficulty in speech, temper tantrums, and anger outbursts (Table 3).
Parental stress
Parenting stress was assessed using the Childhood Illness-related Parenting Stress Inventory [5]. In the communication domain, a significant number of parents reported difficulty in communicating with the medical team (58.3%) and with significant others (51.7%) about their child's seizures. In the medical care domain, most of the parents found it difficult to bring the child to the clinic for treatment and had difficulty in attending to the child's hygiene needs (48.3%). Many parents felt sad and worried to see their child having trouble eating (45%). A large number of parents had difficulty in taking decisions related to their child's medical care (43.3%). The majority of the parents had difficulty being with the child during medical care and handling changes in medicines and treatment (41.7%).
In the emotional functioning domain, a number of parents felt isolated. The majority of parents had frequent mood changes, felt numb inside and helpless, and their mood worsened on learning upsetting news (46.7%). Most of the parents were worried about the impact of seizures, and their mood worsened on knowing the child was in pain or getting hurt due to seizure episodes (43.3%).
In the role functioning domain, about half of the parents (51.7%) reported significant changes in their relationship with their spouse, spending more time in unfamiliar settings like hospitals, clinics, and labs, and missing important events in their life. The majority of the parents found it difficult to attend to the needs of other family members (48.3%). Most of the parents found it difficult and were uncertain about disciplining their sick child, and had little time for their own needs (46.7%). A large number of parents were unable to go to work regularly (45%) (Table 4).
Perceived stigma among parents of children with epilepsy
Perceived stigma was assessed using the Parent Stigma Scale [6]. About half of the parents (53.3%) felt that their child was being labeled or stigmatized due to having frequent and active seizures. The majority of the parents reported that their child was given differential treatment because of having frequent episodes of seizures. Most of the parents worried about finding a prospective groom or bride for their sick child (41.7%). Many parents reported that people had preconceived notions about their child's seizures (36.7%) and that their child had to always prove himself/herself because of the seizures (35%) (Table 5).
Comparison of stress and perceived stigma among parents of children with epilepsy
The table indicates that parental stress occurs significantly more frequently (U = 278, p = .011), with greater difficulty in coping with that stress (U = 275, p = .010), among parents of children with epilepsy and a co-morbid condition compared with parents of children with epilepsy without a co-morbid condition. The results also indicated that parents of children with epilepsy and co-morbid conditions have higher perceived stigma (U = 243, p = .002) than parents of children with epilepsy without a co-morbid condition (Table 7).
Discussion
The current study was an attempt to understand the stress among parents of children with epilepsy. Some of the demographic factors associated with high parenting stress were young parental age, lower education status, and lower socio-economic status.
Multiple studies have shown that if parents are less educated and financially unstable, they spend most of their income, time, and effort on the child's treatment and care. This results in the exhaustion of existing economic and social resources, which negatively affects the parents' quality of life [7,8].
The current research showed that parents of children whose seizures are not well controlled reported more stress. One of the previous studies highlighted that the seizures when poorly controlled may be disabling and interfere with the child's ability to learn, grow, and develop normally [9]. Most of the children in the present study who had co-morbid conditions like cerebral palsy and intellectual developmental delay needed supervision and assistance in activities of daily living like feeding, bathing, taking medicines, communication, and mobility, thus increasing physical and emotional dependence on parents which resulted into a high level of parental stress. One of the studies has reported that apart from the physical dependency of the child on parents there were secondary factors such as myths and misconceptions about epilepsy, enacted stigma, and lack of knowledge of families about epilepsy directly related to parental stress and quality of care provided to the child [10,11].
The majority of children in this study had either not yet started going to school or had dropped out after the onset of seizures. These were children whose seizures had started at quite a young age, mostly in infancy, which affected their socio-emotional and cognitive development. Parents also feared that their child would have an episode of seizures at school and that teachers would be unable to handle it. There were also concerns that, if the school authorities and other children came to know about the child's seizures, they would treat the child differently, doubt the child's ability to perform well, or label the child as epileptic, which made the parents further isolate the child by restricting family and social activities. This finding did not appear elsewhere in the epilepsy literature, but similar findings have been categorized differently in different studies.
In the present study, the majority of parents reported that friends and relatives who knew that the child had epilepsy treated the child differently, in terms of feeling uncomfortable being left alone with the child or considering the child not as intelligent as children of his age group. Jacoby and Austin [12] highlighted that friends or relatives would feel nervous around a child with epilepsy and would become afraid to be left alone with the child, as they did not know how to perform first aid if the child had an episode of seizures. Studies have indicated that when seizures occur quite early in life and are frequent and resistant to treatment, the person is at higher risk of cognitive deficits, which could also depend upon other factors like the number, duration, and type of seizures and antiepileptic drug therapy [13,14].
Most of the time, mothers undertake the job of nursing the sick child and fathers played an assistive role. Etemadifar and colleague [15] reported that the majority of caregivers for patients with epilepsy are female housekeepers who care for many hours daily which significantly increase their levels of stress, anxiety, and depression. Mothers always had to be highly alert and vigilant because of the uncertainty of where and how their child will get seizures which made them over-protective and over-concerned about the child's health and well-being. They were also described to be permissive and uncertain about disciplining the child or excessive restrictive towards the child in non-health domains like participating in sports activities or not allowing the child to move around freely in the neighborhood. One of the study concluded that the fear of a child having an episode of seizures when parents are not around and sense of helplessness on seeing the child in pain made parents to be permissive or exert control and restrictions in their child's day to day life which often occurred for longer period of time than what can be considered reasonable or appropriate [16]. Parents felt helpless and sad on seeing the child's life being adversely affected by epilepsy; as the severity of seizures increased, parents became more desperate to find a cure for their child's illness. The more desperate they became, the worst they felt. Parents reported that worrying would give them mental peace and help them deal with feelings of guilt and failure as a parent. The excessive worrying often resulted in negative emotions like fear, anxiety, and sadness. Jensen and colleague [9] concluded that parents kept worrying about their child's health which could deteriorate at any time which affected parents' physical and mental health leading to sleep deprivation, easy fatigability and feeling of helplessness, despair, and anger.
Disease management issues were frequently reported by parents in the current study, in the form of bringing the child to the hospital for consultation; taking decisions related to medical tests and changes in medication; medicine supervision; managing side-effects of medication; and being with the child throughout treatment procedures at the hospital. Streisand [5] found that parents caring for children with a debilitating illness like epilepsy are at greater risk of stress, and that an increased level of stress could negatively affect the quality of care provided to the child.
In the current study, most of the parents had difficulty in terms of discussing with the doctor about child's seizure condition, feeling confused about the information provided to them because of technical terms and medical jargons used by the doctors. Parents felt quite hesitant to discuss about their child's illness with relatives and friends or affected child as they did not know what to say and how to say so they kept worrying about it.
Hobdell and colleagues [17] found that parents' inability to effectively manage the child's seizure condition, which was due to a lack of adequate information and skills, took an emotional toll on family members by increasing their worries and concerns, leading to negative emotions like anxiety, despair, helplessness, and sadness.
Most of the parents were missing important events in their life and has made their child as the center of their attention. As a result, they had little time to meet their own needs or spend time with family members. Camfield [4] discussed in his study that parents missed important events in their lives and socially isolated themselves because of the fear of divulging their child's epilepsy to their friends and relatives as they experienced a sense of shame, self-blame, and rejection. Higher stigma was associated with more worry, parent negative mood, and the adverse impact of epilepsy on parent life and leisure activity [18]. Epilepsy literature suggests that parents' quality of life deteriorates after the onset of a child's illness which have an effect on their adaptation, role functioning, and coping styles [8,19].
Most of the parents were concerned about the impact of epilepsy on their child's future if it continues in adulthood. Some of their concerns were whether parents would be able to find a bride or groom for the child and child's ability to conceive and perform conjugal responsibilities adequately. Previous studies have reported that most of the parents of children with chronic seizures had concerns regarding child's marriage especially concerns regarding the ability to conceive, fear of disclosure about epilepsy before marriage, and consequences of disclosure. They perceived reactions of others to be negative, and this belief was shaped by seeing the general public's negative attitude towards a person with epilepsy [20,21].
One of the factors which played an important role in helping the child to adjust to epilepsy was the parental reaction to the child's diagnosis, which set the stage for the child's own interpretation of its significance. If parents' reactions were negative, the child learnt to think about epilepsy as something to be ashamed of, which resulted in social isolation [22,23]. The child's extended family members, neighborhood friends, and teachers acted as what Hintermair [23] called "stigma coaches", which was positively associated with behavior problems and socio-emotional problems in their children [24]. Some of the constraints on the generalization of the findings are the small sample size, the cross-sectional nature of the study, time constraints, and sampling bias. The acceptance of the parents' emotional reactions of grief, anger, fear, or guilt is essential to facilitate parents' coming to terms with their child's illness. Parents also need the emotional support of the treating team when having to negotiate appropriate restrictions on their child's activities and when faced with difficult decisions, such as whether a change in medication or new treatment options (e.g., surgical treatment, which to them may seem terrifying) should be explored.
Conclusion
Parental reaction to the child's diagnosis set the stage for the child's own interpretation of its significance. As the child grew older, parental stress was likely to increase due to management difficulties, financial strains, and increased concern about the child's future. The addition of behavioral problems, a common occurrence in adolescents due to hormonal changes and side-effects of antiepileptic medication, further increased the stress and burden on parents, thus increasing the stigma related to epilepsy. The treating team can be sensitive to these needs and spend some time with the parents before making any treatment-related changes, and attention can be given to the parents' involvement in the child's management.
|
2019-03-23T13:44:28.421Z
|
2019-03-22T00:00:00.000
|
{
"year": 2019,
"sha1": "601f853e4a245ef7decbd2dbc126db21a2c39bac",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10072-019-03822-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "601f853e4a245ef7decbd2dbc126db21a2c39bac",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
137677505
|
pes2o/s2orc
|
v3-fos-license
|
Oxidative Hydrometallurgy of Sulphide Minerals
Sulphide minerals are one of the most important sources of value metals, such as gold, silver, copper, zinc, etc. Due to the strong sulphur binding to these minerals, metals are usually extracted by pyrometallurgical route or hydrometallurgy with chemical oxidation. Of these, hydrometallurgy apparently has a lower environmental impact, which has received increased attention in last decades. The main stages of the hydrometallurgical route comprise leaching, extraction and precipitation or electrowinning. For several decades, a number of processes have been developed to leach sulphide ores and concentrates and the conditions are well established. However, there is a renewed interest in hydrometallurgical processes for copper production due to environmental issues and the increasing need to exploit mixed and low grade ores and relatively small isolated deposits.
Introduction
Sulphide minerals are one of the most important sources of value metals, such as gold, silver, copper, zinc, etc. Due to the strong sulphur binding to these minerals, metals are usually extracted by pyrometallurgical route or hydrometallurgy with chemical oxidation. Of these, hydrometallurgy apparently has a lower environmental impact, which has received increased attention in recent decades. The main stages of the hydrometallurgical route comprise leaching, extraction and precipitation or electrowinning. For several decades, a number of processes have been developed to leach sulphide ores and concentrates and the conditions are well established. However, there is a renewed interest in hydrometallurgical processes for copper production due to environmental issues and the increasing need to exploit mixed and low grade ores and relatively small isolated deposits.
Processing of these ores and deposits is very slow and requires a significant amount of reagents. Therefore, to make the process profitable, the treatment of large quantities of ore is required. Aqueous oxidation can be conducted under elevated temperature and pressure, but also at ambient conditions, which makes it environmentally and economically attractive. For this reason, studies to optimize aqueous oxidation and to explore more efficient oxidants have been made. However, in the mining industry (especially in precious metals extraction), the use of advanced oxidation processes or ozone as an oxidant has not been discussed in detail, although lab-scale experiments indicate that ozone may be an alternative to overcome the economic and ecological disadvantages of existing aqueous extraction processes.
In this Chapter, we will treat the use of ozone and advanced oxidation processes, including microwave systems, as methods to improve or assist the leaching of different sulphide minerals. For example, it is well known that ozone is a powerful oxidant with a high oxidation potential (2.07 V) compared with hydrogen peroxide (1.77 V) and chlorine (1.4 V), making it advantageous to use in several applications. Importantly, ozone can create favorable conditions to oxidize sulphide minerals in aqueous media. In this context, oxidative leaching with ozone is relevant for copper-iron sulphides and gold- and silver-containing sulphides. Moreover, oxidative leaching of coal containing iron sulphide might also have a positive impact on coal cleaning prior to its use in energy related applications.
The hydrometallurgy of different sulphide minerals will be treated. We will discuss and analyze the laboratory results that we obtained with these types of minerals. Cyanidation of gold-silver pyritic minerals with ozone pre-treatment, chalcopyrite and sphalerite leaching with oxidation and microwaves as complementary methods, and the dissolution of pyrite present in coal by oxidant aqueous media will be treated here. In each case, aspects such as chemical reactions, thermodynamics (Pourbaix diagrams), kinetics, and the analysis of factors with statistical tools are discussed.
Statistical tools, such as factorial and Taguchi experimental designs and analysis of variance (ANOVA), will receive particular attention. These methods are now widely used to provide the optimal selection of parametric values, based on their intraparametric interactions, in order to accomplish a process and determine the optimum leaching conditions.
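As a simple illustration of how a factorial design supports the selection of leaching parameters, the sketch below builds a two-level full-factorial matrix for three hypothetical factors (ozone dose, pH, temperature) and estimates their main effects on metal recovery from invented responses; it is not taken from the experiments discussed in this chapter.

```python
import itertools
import numpy as np

factors = ["ozone_dose", "pH", "temperature"]
# Coded two-level full-factorial design: -1 = low level, +1 = high level.
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Hypothetical metal recovery (%) for each of the 8 runs, in design order.
recovery = np.array([52, 64, 55, 70, 58, 73, 61, 80])

# Main effect of a factor = mean response at +1 minus mean response at -1.
for i, name in enumerate(factors):
    effect = recovery[design[:, i] == 1].mean() - recovery[design[:, i] == -1].mean()
    print(f"Main effect of {name}: {effect:+.1f} % recovery")
```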
Fundamentals
Valuable metals are gaining worldwide relevance due to the development of a whole new range of potential applications in electronics, environmental catalysis, materials science, and biomedicine, among other fields with a significant impact on daily life activities.
Sulphide minerals, such as pyrite (FeS2) and chalcopyrite (CuFeS2), are one of the most important sources of value metals, such as gold, silver, copper, zinc, etc. Due to the strong sulfur binding in these minerals, the metals are usually extracted by metallurgical processes based on chemical oxidation.
In extractive metallurgy, processes can be divided into pyrometallurgy and hydrometallurgy. In particular, chemical oxidation can be classified generally as roasting and aqueous dissolution. Roasting under oxidizing conditions is a very extensive and well established commercial technology. However, roasting has been considered a high energy-consuming technology, with stringent environmental controls on the emission of gases. Hence, aqueous chemical oxidation methods have attracted increasing attention. Aqueous oxidation can be operated under elevated temperatures and pressures or at ambient conditions; low pressure and temperature are seen as environmentally and economically attractive (Deng, 1992). For this reason, studies to optimize aqueous oxidation and to explore more efficient oxidants have been made. However, in the mining industry (especially in precious metals extraction), the use of ozone as an oxidant has not been discussed in detail, although lab-scale experiments indicate that ozone may be an alternative to overcome the economic and ecological disadvantages of existing aqueous extraction processes.
Thermodynamics of oxidation process
Ozone has a very high oxidation potential (2.07 V) compared with hydrogen peroxide (1.77 V) and chlorine (1.4 V), making it advantageous to use in several applications (Rice, 1997). Importantly, ozone can create favorable conditions to oxidize sulphide minerals in aqueous media. According to the Pourbaix or Eh–pH diagrams shown in Figures 1 and 2, sulphide species such as pyrite or pyrrhotite (Fig. 1) and chalcopyrite (Fig. 2) can be oxidized to sulfate in the presence of an oxidant such as ozone, in a pH range from 2 to 14; the oxidized products can be solids or dissolved species. At very acid conditions (i.e., pH < 2), it is possible to dissolve metals as Fe and Cu ions. In this context, oxidative leaching with ozone is relevant for copper-iron sulphides and gold- and silver-containing sulphides. Moreover, oxidative leaching of coal containing iron sulphide might also have a positive impact on coal cleaning prior to its use in energy related applications. In this chapter, we show the beneficial effect of using ozone on processes of environmental and commercial importance, and outline the role of ozone in process optimization. The practical significance of the study cases is briefly discussed next.
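The position of the oxidizing boundary contributed by ozone in such Eh–pH diagrams can be sketched with the Nernst equation. For the couple O3 + 2H+ + 2e- = O2 + H2O (E° ≈ 2.07 V), the boundary falls by roughly 59 mV per pH unit at 25 °C; the short calculation below is illustrative only and neglects activity corrections and the O3/O2 partial-pressure term.

```python
# Nernst-equation sketch of the O3/O2 boundary in an Eh-pH (Pourbaix) diagram.
E_STANDARD = 2.07   # V, standard potential of the O3/O2 couple at 25 degC
SLOPE = 0.0592      # V per pH unit when H+ and e- are consumed in a 1:1 ratio

def ozone_boundary_potential(pH):
    """Approximate Eh of the O3 + 2H+ + 2e- = O2 + H2O boundary, assuming unit
    activities and equal O3/O2 partial pressures."""
    return E_STANDARD - SLOPE * pH

for pH in (0, 2, 6, 10, 14):
    print(f"pH {pH:>2}: E = {ozone_boundary_potential(pH):.2f} V vs SHE")
```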
Chemical reactions
In this context, the oxidation of sulfides should be emphasized, as exemplified by the oxidation of pyrite, one of the most abundant minerals on earth. In general, under oxidizing conditions and low pH, pyrite oxidation proceeds through two basic steps. In the first step, the dissolution of pyrite to ferrous ions in an acid medium proceeds through the formation of an iron-deficient or sulfur-rich layer rather than elemental sulfur.
In the second step, further oxidation of this layer occurs, forming sulfides of lower iron content, and eventually are converted to elemental sulfur.In severely oxidizing conditions, the elemental sulfur could be oxidized to oxy-sulfuric species.Anodic reactions, such as pyrite and sulfur oxidations, are sustained by cathodic processes, which could involve oxygen, hydrogen peroxide, or even ozone reduction.The importance of this analysis is based on the fact that, under certain conditions, such as pH, redox potential, temperature, etc., the product layer is protective, thus limiting pyrite oxidation.
Despite the existing discrepancies about the exact composition of the oxidation products, the most well-known general mechanism of pyrite oxidation is described in Eq 1.
Elemental sulfur is stable at low pH and redox potential and could be oxidized to sulfate by molecular oxygen and ferric ions at higher potentials (Eq 2).
FeS2 + 8H2O = Fe3+ + 2SO4^2- + 16H+ + 15e-   (2)

The pyrite dissolution has been characterized in several media, including in the presence of oxygen at high pressure and temperature. In the case of chalcopyrite, acid leaching in the presence of ferric iron proceeds with the formation of elemental sulfur according to the following reaction:

CuFeS2 + 2Fe2(SO4)3 = CuSO4 + 5FeSO4 + 2S°   (15)

According to Havlik et al. (7,9), the global reaction of chalcopyrite under the action of O3 can be represented by:

3CuFeS2 + 8O3 = 3CuSO4 + 3FeSO4   (16)
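A quick element balance confirms the stoichiometry of the global ozone reaction (16); the helper below is a generic sanity check written for illustration, not part of any published procedure.

```python
from collections import Counter

def count_elements(species, coefficient):
    """Element counts for a simple formula written as (element, count) pairs."""
    return Counter({el: n * coefficient for el, n in species})

# 3 CuFeS2 + 8 O3 -> 3 CuSO4 + 3 FeSO4
left = count_elements([("Cu", 1), ("Fe", 1), ("S", 2)], 3) + count_elements([("O", 3)], 8)
right = count_elements([("Cu", 1), ("S", 1), ("O", 4)], 3) + count_elements([("Fe", 1), ("S", 1), ("O", 4)], 3)

print("Left :", dict(left))
print("Right:", dict(right))
print("Balanced:", left == right)
```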
Kinetics
In hydrometallurgy, most leaching processes follow the kinetic models for heterogeneous solid/liquid reactions known as shrinking core models (SCM), as shown in Figure 3: the SCM controlled by the chemical reaction and the SCM controlled by diffusion through the solid product layer (Habashi, 1999; Levenspiel, 1999; Sohn and Wadsworth, 1986). A third model, the stochastic model for control by chemical reactions on the non-reacted particle surface (Ciminelli and Osseo-Assare, 1995), is also considered.
In these models, the fraction of iron reacted at any time t can be predicted from the following equations.

1. Shrinking core model controlled by the chemical reaction:

1 - (1 - x)^(1/3) = k_r t

where x is the fraction of iron reacted and k_r is the apparent rate constant, which can be calculated from the following relation:

k_r = k_s C_A / (ρ R_0)

where k_s is the rate constant of the reaction, ρ is the density of the FeS2 ore, R_0 is the radius of the un-reacted particle, and C_A is the reagent concentration in the solution. The above equations are applied to mono-sized particles; thus the average size of a narrow fraction of particles can be used in the kinetic model.

2. Shrinking core model controlled by the diffusion of the reagents or dissolved species through the layer of solid reaction products. In this case, the fraction of iron reacted at any time t can be predicted from the following equation:

1 - 3(1 - x)^(2/3) + 2(1 - x) = k_d t

where the apparent rate constant k_d can be calculated from the following relation:

k_d = 6 D C_A / (ρ R_0^2)

where D is the diffusion coefficient of the iron species.

3. Stochastic model. It takes into account the heterogeneity of solid minerals by introducing a stochastic distribution for the rate constant. The rate constant k_s from the shrinking core model is then transformed into a variable that changes with time or conversion, where k_s = k_max/2 (Ciminelli and Osseo-Assare, 1995).
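In practice, conversion–time data are often tested against these expressions by checking which form of g(x) gives the best straight line against t. The sketch below does this by linear regression; the data points are hypothetical and the apparent rate constants have no physical significance.

```python
import numpy as np

# Hypothetical leaching data: time (min) and fraction of iron reacted.
t = np.array([0, 15, 30, 60, 90, 120], dtype=float)
x = np.array([0.00, 0.12, 0.22, 0.38, 0.50, 0.60])

def g_reaction(x):
    """Shrinking core model, chemical-reaction control."""
    return 1.0 - (1.0 - x) ** (1.0 / 3.0)

def g_diffusion(x):
    """Shrinking core model, product-layer diffusion control."""
    return 1.0 - 3.0 * (1.0 - x) ** (2.0 / 3.0) + 2.0 * (1.0 - x)

for name, g in [("reaction control", g_reaction), ("diffusion control", g_diffusion)]:
    y = g(x)
    k, _ = np.polyfit(t, y, 1)              # slope = apparent rate constant
    r2 = np.corrcoef(t, y)[0, 1] ** 2       # goodness of the linear fit
    print(f"{name}: k_app = {k:.2e} min^-1, R^2 = {r2:.3f}")
```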
Costs
Ozone can be produced in many ways; there are more than 700 patented methods of ozone production. Commercially, however, the three most popular methods are used: a) the UV method of ozone production, b) plate-type corona discharge ozone production, and c) tube-type corona discharge ozone production (Baratharaj, 2011).
For UV ozone production, it is necessary to utilize a short wavelength of ~185 nm. In theory, the yield of O3 from 185 nm UV light is 130 g/kWh of light. As lamp efficiencies are very low, ~1%, the production per kWh from the power source is greatly reduced. In practice, with the present state of development, UV lamps can only produce about 20 g O3/kWh when using oxygen as the feed gas (Smith, 2011).
Ozone production by electrical discharges has been, and remains, the most commercially viable method. Essentially, a corona is characterised by a low-current electrical discharge across a gas-filled gap at a relatively high voltage gradient. The amount of ozone produced in a given corona ozonator design is related to the concentration of oxygen in the gas feeding the corona: basically, the more oxygen in, the more ozone out. In general, ozone concentrations of 1-3% using air, and 3-10% using oxygen, can be obtained. The amount of energy applied to the gas gap between the electrodes is critical to the concentration of ozone produced; it is the combination of the voltage and frequency that results in a given energy input. Typically, voltages of between 7 and 30 kV are used, with frequencies ranging from the mains supply of 50 or 60 Hz, through medium frequencies up to 1000 Hz, to high frequencies up to 4000 Hz. The net effect is that less energy is consumed to generate a given quantity of ozone as the oxygen concentration increases: oxygen feed ~5 to 8 kWh/kg; air feed ~15 to 18 kWh/kg (Smith, 2011).
Although there are no reports on the cost of using ozone in industrial mining applications, Botz et al. (2000) reported a pilot-scale study of the oxidation of cyanide in mining effluents. In this case, the operating cost of the ozone process was US$0.97 per kg of cyanide removed, compared with US$1.35 and US$1.67 for the SO2/air and chlorination methods, respectively.
Therefore, the use of ozone in the oxidation of sulphide minerals can be economically feasible, taking into account the amount of ozone used and the energy cost of its production.
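To give a feel for the orders of magnitude involved, the short calculation below converts the specific energy figures quoted above into an electricity cost per kilogram of ozone; the electricity price used is an assumed value for illustration only.

```python
# Specific energy for corona-discharge ozone generation (from the figures above).
specific_energy_kwh_per_kg = {"oxygen feed": (5, 8), "air feed": (15, 18)}
electricity_price_usd_per_kwh = 0.10  # assumed illustrative price

for feed, (low, high) in specific_energy_kwh_per_kg.items():
    cost_low = low * electricity_price_usd_per_kwh
    cost_high = high * electricity_price_usd_per_kwh
    print(f"{feed}: {cost_low:.2f}-{cost_high:.2f} USD of electricity per kg O3")
```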
Oxidation of sulphide ores containing gold and silver
Cyanidation is the most widely used aqueous leaching process to extract gold and silver. However, it has some disadvantages when precious metals are encapsulated in matrixes of iron sulphide minerals, such as arsenopyrite and pyrite (Shoemaker, 1990). In this case, the minerals receive an oxidation pretreatment (such as oxidative roasting, chemical oxidation under pressure, or biological oxidation) to facilitate gold and silver extraction by cyanide solution (Weir and Berezowsky, 1986; Chen and Reddy, 1990; Burbank et al., 1990). An alternative to these methods is the use of ozone, which increases the oxidation potential and the oxygen content of the solution during cyanidation (Haque, 1992; Roca et al., 2000; Salinas et al., 2004; Elorza et al., 2006; Carrillo et al., 2007). Ozone can create favourable oxidation conditions for sulfide minerals in aqueous media. According to the Eh–pH diagram shown in Figure 1 and Eqs. 10 to 13, sulfide species, such as pyrite or pyrrhotite, can be oxidized to sulfate under oxidant conditions within a pH range from 2 to 14, making it possible to obtain solids or solutions. In both cases, the product formed during the oxidizing reaction of pyrite with ozone permits favorable conditions for the contact of cyanide and oxygen with the precious metals contained in the ore, thus increasing the efficiency of the cyanidation process; on the other hand, the sulfur oxidized to sulfate will no longer react with cyanide to form the thiocyanate ion SCN-, one of the causes of the increased consumption of cyanide during cyanidation.
Although in the mining industry, especially in the case of extraction of precious metals, the use of ozone has not been much discussed, laboratory experiments indicate that ozone may be a valid alternative for resolving or surmounting the disadvantages of the already mentioned cyanidation process.In the case of refractory minerals, there have been reports of increases in the recovery of gold and silver which vary from 25% to more than 100% for both cases, as well as a significant reduction in the time of cyanidation (Salinas et al. 2004;Elorza et al., 2006).With non-refractory pyritic minerals, the results obtained have shown that pretreatment with ozone not only permits a greater extraction of gold during cyanidation, but also causes less cyanide consumption.
Table 1 shows the gold and silver composition of samples of pyrite containing gold and silver, with a size distribution of 75% below 75 μm. The detailed procedure for the experimental tests was previously reported (Carrillo et al., 2007, 2011). The experiments included a pretreatment (before cyanidation) with ozone applied directly to the mineral slurry at a pH of 6. Subsequently, the solid sample was treated for 48 h under conventional cyanidation conditions. Table 1 displays the amount of metal recovered from the cyanidation process. It is evident that ozone pretreatment increased the dissolution of gold in cyanidation, particularly for the sample with the highest gold composition. For samples A and B, the dissolution value increased significantly with ozone, and for sample C it is interesting to note that it is still possible to recover gold by the cyanidation method. Table 1 also shows the percentage of silver dissolved during the cyanidation process. When the ozone pretreatment was carried out, the amount of silver dissolved increased to 82, 83 and 75%, respectively. The increase in samples A and B was about 15%, but sample C showed a very significant increment in extracted silver. The mineralogy of this metal could explain the difference: the silver in this sample was probably in the form of argentite, a silver sulphide that is not extracted during the cyanidation process. Previous results suggested that the ozone introduced into the slurry reacts chemically with the sulfur of pyrite, increasing the oxidation potential of the slurry. In a previous work, we have shown that ozone treatment leads to partial oxidation of sulphide minerals and to sulfate ion formation, specifically through oxidation of sulfur (Carrillo et al., 2007). The improvement in gold and silver recovery from the ore with ozone pretreatment indicates that the intermediate reaction products promote conditions for cyanide diffusion to the precious metals in the subsequent cyanidation process. Table 1 also shows the consumption of cyanide during the cyanidation of the samples, with and without pre-treatment. It can be seen that, without pre-oxidation, the consumption of cyanide was high: 5.8, 3.12 and 3.6 kg/ton. For the same samples, the consumption of cyanide decreased considerably with the pre-oxidation, achieving a significant saving compared to the untreated samples. According to the results obtained, ozone treatment before cyanidation permits partial oxidation of the sulphide minerals, specifically the oxidation of sulphur. The conditions necessary for these reactions are an acidic pH of less than 6 and oxidant conditions, according to Figure 1. These conditions, with a pH of 6 in the slurry, can be maintained during the ozone pre-treatment test. The ozone introduced into the slurry reacts chemically with the sulfur of pyrite or decomposes to oxygen, increasing the oxidation potential of the slurry. The fact that the recovery of gold and silver improved with oxidative pretreatment of the ore indicates that the reaction products contribute to creating better conditions for the diffusion of cyanide to the precious metals. However, the low consumption of ozone in the tests suggests that the principal function of ozone is to increase the oxidation potential of the slurry and obtain better conditions for partial pyrite oxidation. In addition, treatment with ozone increases the content of soluble oxygen in the ore slurry, the oxygen being a product of the decomposition of ozone during its reaction with the ore. The presence of more oxygen in the slurry provides better conditions for the formation of complexes during the
cyanidation reaction, since it has been reported that cyanidation is an electrochemical reaction driven by the cathodic reduction of oxygen at the surface of the metal, which permits the anodic dissolution of the precious metals and thus complete cyanidation (Habashi, 1970; Parga et al., 2003). On the other hand, the oxidation of sulfur to sulfate ion prevents sulfur species from reacting with cyanide to form thiocyanate, the reaction which, as mentioned above, is the main cause of cyanide consumption with this type of sulfide ore. Therefore, the consumption of cyanide decreases, increasing the efficiency of the process and lowering its cost.
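As a simple illustration of the arithmetic behind these comparisons, the sketch below computes the relative increase in dissolution and the cyanide saving; every number in it is a hypothetical placeholder rather than a value from Table 1.

```python
# Illustrative sketch only: how the relative improvement in dissolution and the
# cyanide saving are computed. All numbers below are hypothetical placeholders,
# not the values reported in Table 1.

def percent_change(before, after):
    """Relative change from 'before' to 'after', in percent."""
    return 100.0 * (after - before) / before

ag_dissolved_without_o3 = 67.0   # hypothetical silver dissolution, %
ag_dissolved_with_o3 = 82.0      # hypothetical silver dissolution after ozone, %
nacn_without_o3 = 5.8            # hypothetical cyanide consumption, kg/ton
nacn_with_o3 = 2.3               # hypothetical cyanide consumption after ozone, kg/ton

print(f"Silver dissolution increase: "
      f"{percent_change(ag_dissolved_without_o3, ag_dissolved_with_o3):.1f} %")
print(f"Cyanide saving: {-percent_change(nacn_without_o3, nacn_with_o3):.1f} %")
```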
Oxidation of sulphide copper minerals
Sulphide copper minerals, such as chalcopyrite (CuFeS2), are the most abundant copper-bearing minerals and represent approximately 70% of the world's known copper reserves (Davenport et al., 2002). Chalcopyrite is also the most stable of the copper minerals due to its structural configuration (face-centered tetragonal lattice) and, consequently, the most refractory to aqueous extraction processing.
Industrially, copper ore leaching is almost always carried out in dilute sulfuric acid medium with ferric sulphate, which are low-cost reagents and can be regenerated as the ores are leached. Several studies have been conducted to optimize the process conditions and to explain the fundamentals of the chalcopyrite leaching process. It has been suggested that a layer of elemental sulfur forms on the external surface of the particles. The type of sulfur layer formed on the surface, according to Eq. 15, depends on the reagents used as well as on the process conditions (i.e., temperature and agitation); importantly, this layer inhibits the dissolution of the chalcopyrite, reducing the overall leaching rate and the process efficiency. Several approaches have been recommended to accelerate chalcopyrite dissolution, and there is increasing interest in optimizing the aqueous extraction process for copper production because of the negative environmental impact of the chemical reagents used (Shijie, 2005; Peacey et al., 2003). Although leaching of copper ores is carried out in dilute sulfuric acid medium with ferric sulfate as oxidant (Ukasik and Havlik, 2005; Antonijevic and Bogdanovic, 2004), both low-cost reagents, different approaches have been suggested to increase the chalcopyrite dissolution rate. The most common is to increase the process temperature, but this implies higher energy requirements. Another suggested alternative is the use of strong oxidants such as ozone (Havlik et al., 1999), hydrogen peroxide (Antonijevic et al., 2004) and manganese nodules (Havlik et al., 2005).
On the other hand, the use of ferric ion (Fe+3) to dissolve copper has an economic constraint: Fe+3 has to be regenerated. This can be accomplished by oxidizing ferrous ions (Fe+2) with air or oxygen, although this step is usually very slow in acid medium. Pressure oxidation is an alternative for this oxidation step, but it is only applied to concentrated ores. Nevertheless, from a thermodynamic point of view, the most effective way to improve the process efficiency is to eliminate the formation of any sulfur layer on the chalcopyrite surface and, at the same time, regenerate Fe+3, which is possible under strongly oxidizing conditions.
In the leaching of a mixed ore, the high redox potential required in an acidic medium to avoid the sulfur layer formation can be met by using ozone (O 3 ) as oxidizing agent.
Havlik et al. (1999) studied the leaching kinetics of a chalcopyrite concentrate in 0.5 M sulfuric acid (H2SO4) solutions, using O3 as oxidizing agent, in the range of 4 to 75 °C. Under the conditions studied, the reaction showed parabolic kinetics. No evidence of the formation of an elemental sulfur layer, or any other product layer, was found. The authors also indicated that the overall reaction rate was controlled by diffusion of O3 at the solid-liquid interface; in addition, they reported that the solubility of ozone decreases as the temperature rises above 40 °C, thus limiting the beneficial effect of a temperature increase. Carrillo et al. (2010) mentioned that, according to equation 1, as the reaction takes place, a significant amount of ferrous ion (Fe+2) is formed. The continuous addition of O3 into the solution favors the oxidation of Fe+2 to Fe+3 (reaction 24).
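A commonly cited stoichiometry for the oxidation of ferrous ion by ozone in acid medium, given here as an illustrative assumption (the exact form of reaction 24 in the original text may differ), is:

O3 + 2Fe+2 + 2H+ → 2Fe+3 + O2 + H2O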
When reaction 24 occurs under conditions that enhance its rate (i.e., high O3 concentration and low pH), the Fe+3 concentration increases, which in turn should favor reaction 1 and, consequently, copper dissolution.
Therefore, a possible mechanism for chalcopyrite dissolution in the presence of Fe+3 and O3 is that Fe+3 reacts quickly with the mineral surface to produce copper ions, Fe+2 ions and a layer of sulfur compounds (sulfate) on the surface. Fe+3 and Fe+2 ions must then diffuse through this layer for the dissolution to continue. In addition, O3 must diffuse from the gas bulk into the solution and to the interface of the chalcopyrite particles, where it reacts with chalcopyrite (equation 2) and with Fe+2 ions (equation 6). This last step takes place in the solution and might be faster, leading to the formation of more Fe+3 in the solution.
Figure 4 shows copper extraction profiles based on a Taguchi L9 experimental design. The figure shows the main effects, determined with the signal-to-noise (S/N) ratio, which was based on the "greater-the-better" criterion and used to characterize the response (amount of copper extracted). The S/N ratio was defined as S/N = -10·log10(MSD), where the mean-square deviation for a greater-the-better response is MSD = (1/n)·Σ(1/yi²), n is the number of tests, and yi is the Cu extraction (%) obtained in the i-th test. Accordingly, the figure shows that, under the studied conditions, (Fe3+) is the most important factor in the recovery of copper by chemical dissolution of chalcopyrite. The results also indicated that, within the analyzed range, (H2SO4) had no effect on the amount of copper extracted. Increasing the levels of (Fe3+) and of O3 concentration from the first to the second level, and from the second to the third level, resulted in an increase in the amount of copper extracted. Similar results were found when the particle size was decreased. However, when the level of (H2SO4) was increased, no significant effect on the response was observed.
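As a sketch of how the larger-the-better S/N ratio defined above can be evaluated in practice (the extraction values are invented for illustration and are not the chapter's data):

```python
import numpy as np

def sn_larger_the_better(y):
    """Taguchi S/N ratio (dB) for a 'greater-the-better' response:
    MSD = (1/n) * sum(1 / y_i^2) and S/N = -10 * log10(MSD)."""
    y = np.asarray(y, dtype=float)
    msd = np.mean(1.0 / y ** 2)
    return -10.0 * np.log10(msd)

# Hypothetical Cu extraction (%) from replicate runs of one L9 trial condition.
cu_extracted = [42.0, 45.5, 44.1]
print(f"S/N = {sn_larger_the_better(cu_extracted):.2f} dB")

# The main effect of a factor is then the average S/N of the trials run at each
# of its levels; in an L9 array each factor appears at 3 levels, 3 trials per level.
```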
The results shown in the figure suggest that Fe+3 reacts at the interface, increasing copper dissolution. Obviously, reducing the particle size increases the degree of liberation of the chalcopyrite and the reaction surface area, exposing a larger fraction of the copper mineral and promoting better contact between the mineral and the chemical agents (Fe3+, O3 and H2SO4) for faster dissolution. In addition, the strongly oxidizing conditions are enhanced when O3 is used, and this beneficial effect is found for all (Fe+3) levels tested. As described above, a possible mechanism is that Fe+3 reacts quickly with the mineral surface to produce copper ions, Fe+2 ions and a layer of sulfur compounds (sulfate), through which Fe+3 and Fe+2 must diffuse while O3 replenishes Fe+3 in solution by oxidizing Fe+2. The increased concentration of Fe+3 might promote the copper dissolution process, but the diffusion of Fe+3 through the sulfur-compound layer could be slower, gradually limiting the overall rate of copper extraction.
Based on these results, the effect of adding Fe+3 and O3 is favourable for small particle sizes. For larger particles, only (Fe+3) seemed to affect copper dissolution, since an increase in O3 had no beneficial effect. It has to be considered that, for larger particle sizes, less chalcopyrite is exposed to the reagents and, therefore, the presence of a single oxidant is sufficient to promote copper dissolution. Because of the relatively low liberation of chalcopyrite, the oxidation of Fe+2 with O3 is not a limiting step for copper dissolution. Finally, the results obtained here suggest that only minimal quantities of acid are required in solution for the dissolution of copper, just enough to prevent hydrolysis and precipitation of Fe+3 as hydroxide.
Iron sulphide oxidation in coal
Another application of ozone is the oxidation of iron sulphide in coal, one of the most important fossil fuels used for energy production. Because of its nature, coal requires a cleaning stage based on physical methods before use in order to meet air-pollution regulations (Apenzaller, 2006); however, organic sulfur and syngenetic pyrite (FeS2) are removed with low efficiency relative to the required level. Syngenetic pyrite is one of the two forms of pyritic sulphur; it occurs as a very fine, highly disseminated mineral in coal, which makes it difficult to separate by conventional cleaning processes (Baruah and Khare, 2007; Pysh`yevl et al., 2007; Li and Cho, 2005; Ayha et al., 2005; Baruah et al., 2006).
Many studies have explored pyrite dissolution in oxidizing aqueous media. For this purpose, various oxidizing agents such as oxygen, hydrogen peroxide, ferric sulfate, ferric chloride, potassium permanganate, and perchloric and nitric acids have been used to oxidize pyrite (Elliot, 1978; Bonn and Heijnen, 2001; Borah, 2006; Kawatra and Eisele, 2001; Antonijevic et al., 2003; Karaca et al., 2003; Mukherjee and Srisvastava, 2004). Previous work by the authors indicates that hydrogen peroxide, ozone, and combined ozone-hydrogen peroxide in acid medium can assist pyrite removal (Davalos et al., 2009; Carrillo et al., 2009).
The oxidation of pyrite in acid medium has been extensively studied and documented because of its importance in sulfur processing. This anodic process is recognized as a complex one involving chemical and electrochemical equilibria. As Chander et al. have summarized, pyrite oxidation has been classified into two mechanisms: (a) the preferential release of iron ions from pyrite and (b) the preferential release of oxysulfur species. In the first mechanism, the outer reacted layer of pyrite has been identified as elemental sulfur (S 0 ), "polysulfide," or a "metal-deficient" layer, which corresponds to the theory proposed by Buckley et al. In the second mechanism, the sulfur in pyrite oxidizes to sulfates or thiosulfates, leaving a reacted layer composed of iron hydroxides.
The use of ozone for pyrite removal from coal, and its effect under different conditions, was investigated by Dávalos et al. (2009) and Carrillo et al. (2010). These works were based on an experimental design with the following parameters and levels: type and concentration of reagent (NaOH, HCl, HNO3 and H2SO4; 0.3, 0.8 and 1.3 M; and distilled water) and presence and concentration of O3 (0, 0.16 and 0.33 L⋅g/hr). The main factors affecting FeS2 dissolution were determined by analysis of variance (ANOVA). Table 2 shows the main-effect ANOVA of the results as a function of the amount of Fe extracted at 90 minutes of treatment. According to the table, the ANOVA shows that, under the studied conditions, the type of acid and the concentration of O3 employed are the most important factors in the chemical dissolution of pyrite, followed by the reagent concentration. The test results confirm that the maximum pyrite dissolution is reached when sulphuric acid is used. Increasing the acid concentration (averaged over the different acids) does not clearly affect the response. An increase in the O3 level results in an increase in the mean response. On this basis, the results give a qualitative picture of pyrite dissolution behaviour for different combinations of chemicals (aqueous medium, concentration, oxidants). For FeS2 dissolution, the earlier assumption that the chemical reaction rate at the interface is comparable to the diffusion rate can be made more precise. The presence of strongly oxidizing conditions prevents the formation of the aforementioned product layer of elemental sulphur because of the high anodic potential achieved or, at least, the electrochemical conditions influence the layer texture, favouring the formation of a porous product layer.
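A minimal sketch of the main-effect tabulation that feeds such an ANOVA ranking; the factor names follow the text, but the extraction values are invented for illustration.

```python
import pandas as pd

# Hypothetical leaching runs: Fe extracted (%) at 90 min for different
# combinations of aqueous medium, reagent concentration and O3 flow.
runs = pd.DataFrame({
    "medium":    ["H2SO4", "H2SO4", "HNO3", "HNO3", "HCl", "HCl", "NaOH", "NaOH"],
    "conc_M":    [0.3, 1.3, 0.3, 1.3, 0.3, 1.3, 0.3, 1.3],
    "O3_L_g_hr": [0.0, 0.33, 0.0, 0.33, 0.0, 0.33, 0.0, 0.33],
    "Fe_pct":    [35, 62, 28, 49, 25, 44, 10, 21],
})

# Main effect of each factor = mean response at each of its levels.
for factor in ("medium", "conc_M", "O3_L_g_hr"):
    print(f"\nMain effect of {factor}:")
    print(runs.groupby(factor)["Fe_pct"].mean().round(1))
```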
The dissolution of pyrite proceeds within a coal matrix, so diffusion through the pores of the coal could limit the flux of reagents and products to or from the reactive pyrite layer. In addition to diffusion, the reagents (alkali and acid) could modify the chemical properties of the coal; the resulting consumption of oxidizing reagents would shift the process toward chemical control. Diffusion together with reagent consumption could explain why the results fit both the diffusion-controlled and the chemically controlled stages. In order to compare the results of the O3 treatment (leaching) with washability and flotation tests, different coal samples were treated, as shown in Table 4, which gives the sulfur content of each product obtained in each test. The results indicate that the sulfur is largely liberated, so that the first dense-medium separation (washability test, density 1.3) already removes 27% of the sulfur. This indicates that a fraction of the sulfur is released as material of higher density than the coal. Significantly, the decrease depends on the initial sulfur content of the sample, but the trend continues across the different tests.
In the subsequent washability separation steps, the sulfur removal rate of about 27% is maintained, except in the last step, where it is 18%. Clearly, the washability test is cumulative, so the sulfur content is relative to the proportion of the sample remaining at each stage. In the flotation used to recover fine coal, a low sulfur content can be obtained, although the removal with respect to the original sample averages 11%. Although the particle size is smaller (100% -600 mesh Tyler), and the sulfur is therefore possibly better liberated, it is also possible that a proportion of the liberated pyrite interacts with the collector or that pyrite fines are swept along by the froth. On the other hand, it is interesting to note that the sulfur content obtained by the O3 treatment of sample M1-CAO (a sample obtained from mixed coal samples, named coal all-in-one) is similar to that of the first density-1.3 separation (25% removal), and that such leaching applied to a washed coal sample can further reduce its sulfur content, reaching up to 8% additional sulfur removal. To confirm these results on industrial samples, CAO samples (labeled M2) were obtained from different steps (P) of an industrial washing plant. These samples were subjected to oxidative leaching tests, which were repeated 4 times. The objective of this procedure was to determine the reduction in sulfur and also the degree of repeatability and reproducibility (R) of the tests.
Figure 5 shows the initial sulfur content of each sample (hollow circles), as well as the average, minimum and maximum sulfur analyses obtained for each sample treatment. The error bar indicates the range of results from the 4 trials (repetitions) performed on each sample. The results indicate that sulfur removal is greater when the initial sulfur content is higher, although the variability obtained is also higher. For example, with coal from step P1, an average sulfur removal of 18% was obtained. When the initial sulfur content is lower, the variability in the treatment result is also reduced, but the removal percentage is lower, between 8 and 9%. This may be due to the particle size distribution present in the washing steps, which consist of processes using dense-medium vessels (Daniels box), hydrocyclones and spirals. The particle size is related to the degree of liberation of the sulfur (pyrite); thus, with a narrower size distribution, the variability is lower.
The results indicate that ozone is the strongest oxidant for removing pyrite and that its dissolution is enhanced in sulphuric acid medium. Analysis of the results with O3 as oxidizing agent, using ANOVA and repeated laboratory tests, supports the conclusion that diffusion of the oxidizing agents is the rate-controlling step of the overall process: diffusion of O3 from the gas bulk into the solution to react with pyrite, and diffusion of Fe3+ through the layer formed at the boundary surface of the pyrite and coal particles. The results also clearly showed that ozone allowed lower concentrations of H2SO4 to be used, with the consequent economic savings.
Conclusion
In the cases of copper, gold and silver, sulfur is the element that inhibits the dissolution of the metals present in the minerals. In the case of coal, on the other hand, the removal of sulfur is greatly needed because of the SO2 emissions produced during coal combustion. In this chapter, the oxidative hydrometallurgy of different sulphide minerals using ozone has been analyzed. The results show that treatment with ozone increases sulfur oxidation in all the cases considered.
For pyrite containing gold and silver, the results suggest that treatment with ozone before the cyanidation process improves the recovery of gold and silver and reduces the consumption of cyanide. The results obtained from the sample with low metal content indicate that it is still possible to recover more than half of the gold and silver content by cyanidation.
The results indicate a significant reduction in the consumption of cyanide, which permits the recovery of these metals at lower cost. However, determining the cost of ozone consumption is essential in order to evaluate its use.
In the case of chalcopyrite, the approach suggested in this work, using a Taguchi L9 design matrix and statistical analysis to study the acid leaching of chalcopyrite with ozone and ferric ions, was helpful for optimizing the experimental conditions and rationalizing the results. This approach may be used to decrease development costs and, ultimately, the operational costs of copper extraction from low-grade chalcopyrite.
For sub-bituminous coal, the dissolution of pyrite in different media using ozone was studied. The results, analyzed by ANOVA, indicate that ozone is the strongest oxidant for removing pyrite and that its dissolution is enhanced in sulphuric acid medium. The results also support the conclusion that diffusion of the oxidizing agents is the rate-controlling step of the overall process: diffusion of O3 from the gas bulk into solution to react with pyrite, and diffusion of Fe3+ through the layer formed at the boundary surface of the pyrite and coal particles.
Thus, ozone can be a promising auxiliary agent in current processes for obtaining metals and cleaning coal, with the following advantages: 1) decreasing the operational costs of low-grade chalcopyrite leaching; 2) increasing gold and silver recovery in cyanidation while decreasing cyanide consumption; and 3) decreasing the sulfur content of coal in coal-cleaning plants, so that it can be used as a clean fuel for energy generation and for iron- and steelmaking.
Fig. 4. One-factor graphic of the L9 experimental design for the leaching of chalcopyrite.
Table 1. Chemical assay of the pyrite samples containing gold and silver used in the dissolution of precious metals by cyanidation (48 h) with and without ozone pretreatment.
Table 2. ANOVA for the experimental design used in the removal of pyritic sulfur from coal samples.
Table 3. Correlation coefficients (r2) obtained from the kinetic models for FeS2 dissolution. Table 3 shows the kinetic data for pyrite dissolution from coal with the different reagents. The shrinking core model (SCM) with product-layer diffusion control describes the experimental data better than the other models. This kinetic behaviour indicates that the reaction can be controlled by diffusion through a layer or film formed by the surrounding coal surface.
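The comparison summarized in Table 3 can be illustrated with the two classical shrinking-core-model rate expressions; the conversion data below are invented, and the r2 values are only meant to show the fitting procedure, not to reproduce the table.

```python
import numpy as np

# Classical shrinking-core-model (SCM) rate expressions (Levenspiel forms):
def g_reaction(X):   # surface chemical reaction control
    return 1.0 - (1.0 - X) ** (1.0 / 3.0)

def g_diffusion(X):  # diffusion through the product layer
    return 1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X)

# Hypothetical pyrite conversion X at times t (min) -- illustration only.
t = np.array([10, 20, 30, 45, 60, 90], dtype=float)
X = np.array([0.08, 0.15, 0.21, 0.29, 0.36, 0.47])

def r_squared(t, g):
    """r^2 of a zero-intercept linear fit g(X) = k*t."""
    k = np.sum(t * g) / np.sum(t * t)        # least-squares slope
    resid = g - k * t
    ss_tot = np.sum((g - g.mean()) ** 2)
    return 1.0 - np.sum(resid ** 2) / ss_tot

for name, g in [("reaction control", g_reaction(X)),
                ("product-layer diffusion", g_diffusion(X))]:
    print(f"{name}: r^2 = {r_squared(t, g):.4f}")
```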
Table 4. Total sulfur (%) for the different sulfur removal tests.
Clinical Implications of Septal Deviation in Lateralized Olfaction
Objectives. Results of butanol threshold tests (BTTs) have shown that birhinal olfaction tends to converge toward monorhinal olfaction of the dominant nostril. However, birhinal olfaction may also be worse than dominant-side monorhinal olfaction. The goal of our study was to investigate the effect of a deviated nasal septum on birhinal olfaction in patients with lateralized olfaction and to examine the effect of septoplasty in these patients.
Methods. A retrospective study with planned data collection was conducted in 518 patients who underwent BTTs. Lateralized olfaction was defined as monorhinal BTT scores that differed by >2 between sides. Underestimated birhinal olfaction was defined as a birhinal BTT score >2 lower than the dominant nostril monorhinal BTT score. Patients with lateralized olfaction were divided into 2 groups: group 1, underestimated birhinal olfaction; and group 2, without underestimated birhinal olfaction.
Results. Among 518 patients, 112 with lateralized olfaction were enrolled in this study. Group 1 included 23 patients (20.5%) and group 2 included 89 patients (79.5%). The severity of septal deviation (ratio of the distance of the narrower side to the wider side) did not differ between the 2 groups. Septal deviation toward the dominant nostril was more common in group 1 than in group 2 (73.9% vs. 37.6%; P=0.002). Five patients with septal deviation toward the dominant nostril and underestimated birhinal olfaction underwent septoplasty. Lateralized olfaction improved in all 5 patients postoperatively (P=0.041).
Conclusion. Septal deviation toward the dominant nostril in patients with lateralized olfaction is associated with underestimated birhinal olfaction. Septoplasty may improve olfaction by increasing airflow on the dominant olfactory side.
INTRODUCTION
Approximately 15% of the normal population have lateralized olfaction, and this percentage rises to 26%-32% in individuals with chronic rhinosinusitis or a sinonasal tumor [1]. Patients with neuropsychiatric diseases, such as schizophrenia or Parkinson disease, also have significant differences in olfaction between the two sides of their nose [2,3].
Numerous studies have examined the relationship between monorhinal olfaction and birhinal olfaction, but the findings have been contradictory. Differences between monorhinal olfaction and birhinal olfaction have been found to depend on the type of odor. Monorhinal and birhinal olfaction differ for pleasant odors but are similar for unpleasant odors [4]. However, many researchers report that birhinal olfaction offers no significant improvement in olfaction compared to monorhinal olfaction. Birhinal olfaction is mainly influenced by olfaction from the dominant nostril [5][6][7]. The relative lack of importance of birhinal olfaction differentiates olfaction from acoustic localization or binocular vision, in which the capacity to detect sensory input bilaterally is clearly advantageous over unilateral hearing or vision [7]. Furthermore, authors have commented that olfaction is worse in birhinal olfaction than monorhinal olfaction in a considerable number of patients.
In this study, we investigated the effect of a deviated nasal septum on birhinal olfaction in patients with lateralized olfaction and examined the effect of septoplasty in these patients.
Subjects
In this retrospective study, we screened the medical records of 518 patients who underwent butanol threshold tests (BTTs) from November 2012 to October 2013. When a patient smells significantly better with one nostril than with the other, he or she is said to have lateralized olfaction. In this paper, we defined lateralized olfaction as a difference of more than 2 points between the monorhinal BTT scores of the two nostrils. The side with better olfaction was defined as the dominant nostril and the other side as the nondominant nostril.
Patients were included if they had lateralized olfaction. Patients were excluded if they had a history of head trauma, corticosteroid use before the olfactory test, or a recent upper respiratory tract infection. The final study cohort consisted of 112 patients with lateralized olfaction (86 males, 26 females). Their mean age (±standard deviation) was 35.6±14.3 years.
If the birhinal BTT score was lower than the dominant monorhinal BTT score by 2, he or she was said to have underestimated birhinal olfaction. The patients were classified into 2 groups based on their BTT results. Group 1 included patients who had underestimated birhinal olfaction and the remaining patients were assigned to group 2. This study was approved by the Institutional Review Board of Seoul Metropolitan Government-Seoul National University Boramae Medical Center (16-2013-53).
Olfactory function tests
Odor threshold testing was performed using the BTT. A series of 10 concentrations of N-butanol (Sigma-Aldrich, St Louis, MO, USA) was generated by serially diluting 4% N-butanol with mineral oil (Sigma-Aldrich). The test was performed in each nostril separately, to evaluate monorhinal olfaction, and in both nostrils simultaneously, to evaluate birhinal olfaction. The study subjects were presented with two polyethylene bottles, one with mineral oil and the other with butanol, and were asked to choose the bottle with butanol. The test was repeated until the butanol bottle was correctly identified by the examinee in five consecutive trials. The examination was started at concentration level 10. The lowest concentration at which the butanol bottle was correctly identified 5 times consecutively was designated as the threshold level.
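The threshold rule described above can be encoded as in the sketch below; the ordering of the dilution steps and the trial bookkeeping are simplifying assumptions for illustration, not the exact clinical staircase procedure.

```python
def btt_threshold(trials_by_level, required_correct=5):
    """Return the weakest concentration level identified correctly
    'required_correct' times in a row, or None if never reached.

    trials_by_level: list of (level, [bool, ...]) pairs ordered from the most
    dilute to the most concentrated step -- an assumed ordering, since the
    clinical staircase rules are not fully specified here.
    """
    for level, responses in trials_by_level:
        run = 0
        for correct in responses:
            run = run + 1 if correct else 0
            if run >= required_correct:
                return level
    return None

# Hypothetical monorhinal test: misses at the two weakest steps, then
# five consecutive correct identifications at level 8.
example = [(10, [False]), (9, [True, False]), (8, [True] * 5)]
print(btt_threshold(example))  # -> 8
```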
Eleven patients in group 1 underwent septoplasty. Of these, BTTs were repeated 6 months after surgery in the 5 individuals who did not have a disease potentially affecting olfaction, such as acute/chronic rhinosinusitis, allergic rhinitis, or a tumor.
Paranasal sinus computed tomography
The paranasal sinuses, the degree of septal deviation, and the status of the olfactory cleft were evaluated using paranasal sinus computed tomography. The degree of septal deviation was quantified by measuring the distance from the lateral wall of each nasal cavity to the nasal septum at the most severely deviated point of the septum and calculating the ratio of the distance on the narrower side to that on the wider side.
Statistical analyses
Statistical analyses were performed with IBM SPSS ver. 18.0 (IBM Co., Armonk, NY, USA). The Mann-Whitney test and Wilcoxon signed rank test were used to determine statistical differences (P<0.05). Data are expressed as the mean±standard deviation.
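The study used IBM SPSS; purely as an illustration of the two tests named above, a minimal scipy sketch applied to hypothetical BTT scores would look like this.

```python
from scipy import stats

# Hypothetical birhinal BTT scores for the two groups (not study data).
group1 = [4, 5, 3, 6, 4, 5, 4]          # underestimated birhinal olfaction
group2 = [7, 8, 6, 7, 9, 8, 7, 6]       # without underestimation

# Between-group comparison (independent samples): Mann-Whitney U test.
u_stat, p_u = stats.mannwhitneyu(group1, group2, alternative="two-sided")

# Pre- vs. post-septoplasty comparison (paired samples): Wilcoxon signed-rank test.
pre  = [3, 4, 2, 5, 3]
post = [6, 6, 5, 7, 5]
w_stat, p_w = stats.wilcoxon(pre, post)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.3f}")
print(f"Wilcoxon W = {w_stat:.1f}, p = {p_w:.3f}")
```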
DISCUSSION
Previous studies showed that a deviated septum is associated with functional lateralization of nasal resistance [8], the nasal cycle [9], and mucociliary clearance [10]. In a recent study, septal deviation resulted in decreased olfactory function on the narrower side, including odor identification, odor discrimination, and odor thresholds [11]. In the odor threshold tests, the difference in mean scores was about 1.7 points. Therefore, in this study, we defined lateralized olfaction as a difference of more than 2 points between the BTT scores of the two nostrils.
In this context, septal surgery may improve olfactory function by enhancing transport of odor molecules to the olfactory cleft [12]. However, the sense of smell is not always improved after surgery. Indeed, olfactory function is often compromised after surgery because of direct trauma to, or vascular compromise of, the olfactory epithelium [13]. All nasal surgery may disrupt olfaction, even when it is performed at a distance from the olfactory epithelium [14]. Nevertheless, only a few studies have quantitatively investigated the association between septal surgery and olfaction. As a result, it is exceedingly difficult to predict whether the sense of smell will improve after septal surgery.
In this study, lateralized olfaction was observed in 21.6% of 518 patients who visited the clinic with nasal symptoms and who underwent BTTs. Considering the diversity of chief complaints and comorbidities, this percentage is generally comparable to that noted in prior studies: 26% in patients with chronic rhinosinusitis and 32% in those with a sinonasal tumor [5]. As in a previous study, the mean value of birhinal BTT converged to the dominant olfaction side BTT, although it was slightly lower than the dominant olfaction side BTT [7]. This can be explained by the inclusion of patients in whom birhinal olfaction was not superior to monorhinal olfaction. Similarly, our group 1 patients, whose birhinal BTT score was lower than the BTT score of the dominant olfaction side, comprised a substantial percentage of all patients in our study.
We hypothesized that septal deviation of the dominant or nondominant olfaction side may influence birhinal olfaction. For patients with deviation of the nondominant olfaction side, the dominant olfaction side BTT score did not differ from the birhinal BTT score (Fig. 4A). However, for those with deviation of the dominant olfaction side, the birhinal BTT score was lower than the monorhinal BTT score of the dominant olfaction side in some cases (Fig. 4B).
Septoplasty has become the standard procedure for the treatment of septal deviation. In this technique, the olfactory epithelium may be influenced indirectly through changes in airflow [13,15]. A computational fluid dynamics experiment demonstrated that a small decrease in nasal valve area (1.45%) resulted in a large decrease in airflow to the olfactory cleft (18.7%) [16]. By contrast, another study of 16 patients awaiting septal surgery has demonstrated no correlation between the volume around the inferior turbinate and olfactory function [17].
In our group 1 patients who underwent surgery, those with septal deviation of the dominant olfaction side exhibited significantly reduced lateralized olfaction after septoplasty. Previously, improved olfaction after septoplasty has been reported by subjective evaluation using a questionnaire [18]. However, subjective evaluation of olfaction may not be reliable. Objective olfactory function testing has been strongly correlated with nasal airway patency in healthy, untrained subjects. It is controversial whether subjective improvement of olfaction is a valid index of objective olfactory function [19][20][21]. In this study, we found that septoplasty improved birhinal olfaction by reducing lateralized olfaction using objective olfactory function testing before and after surgery. It may be hypothesized that these findings result from changes in intranasal airflow following septoplasty. However, additional studies should be performed with larger sample sizes and control group analyses to further address this issue.
Despite some limitations, this study is the first report of the effects of septal deviation on birhinal olfaction. Our results provide valuable information to allow appropriate clinical counseling if olfactory function declines in patients with severe septal deviation.
Septal deviation of the dominant olfaction side is associated with underestimated birhinal olfaction. Septoplasty in these cases may be helpful for correcting the lateralized olfaction. Preoperative monorhinal and birhinal olfactory tests may be useful for appropriate counseling of patients scheduled for septoplasty.
Cytoplasmic CPSF6 Regulates HIV-1 Capsid Trafficking and Infection in a Cyclophilin A-Dependent Manner
HIV is the causative agent of AIDS, which has no cure. The protein shell that encases the viral genome, the capsid, is critical for HIV replication in cells at multiple steps.
The mature HIV-1 capsid is a cone-shaped lattice of CA hexamers and pentamers, which encapsulates two copies of the RNA genome (1,2). An important yet understudied aspect of HIV-1 infection, referred to as uncoating, is the process of capsid dissociation that occurs during reverse transcription and before viral DNA integration into host chromatin (3,4). HIV-1 capsid uncoating is dependent on microtubule trafficking (5)(6)(7) and may occur in a multistep process (8). HIV-1 infection requires active transport of the viral reverse transcription complex/preintegration complex (RTC/PIC) through the cellular nuclear pore complex (NPC) (9), and the majority of capsid is likely uncoated at the NPC and/or within the nucleus (10)(11)(12)(13)(14). Single mutations in CA can greatly impact the stability of the HIV-1 capsid, altering its uncoating and affecting virus infectivity (15,16).
HIV-1 capsid has been shown to bind several host proteins during infection (17). Two examples are cyclophilin A (CypA) (18,19), which is relatively abundant in cells (20), and cleavage and polyadenylation specificity factor 6 (CPSF6) (21). Disruption of HIV-1 capsid binding to CypA can occur via amino acid substitution, such as G89V and P90A, in the loop between helices 4 and 5 in CA (22,23) or by treatment with small-molecule inhibitors, such as cyclosporine A (CsA) (18). The inability to bind to CypA in target cells can affect the infectivity of the invading HIV-1 particle (24). CypA dependence is cell type specific and has been shown to affect multiple steps in the virus life cycle, including capsid uncoating, reverse transcription, nuclear import, integration, and evasion of TRIM5α restriction (25)(26)(27)(28)(29)(30).
CPSF6 is an arginine/serine (RS) domain-containing protein expressed predominantly in the cell nucleus and is involved in splicing and polyadenylation of host RNAs (31). CPSF6 binds to HIV-1 capsid at an interface between two CA monomers defined by helices 3, 4, and 5 (21,32). The beta-karyopherin transportin 3 (TNPO3) is the key mediator of RS domain protein nucleocytoplasmic transport in cells, and TNPO3 directly binds the C-terminal RS domain in CPSF6 to affect its nuclear import (33,34). Accordingly, truncation of the C terminus of CPSF6 leads to increased cytoplasmic expression and inhibition of HIV-1 nuclear entry and infection (21,33) in a TNPO3-dependent manner (35,36). Disruption of CPSF6 binding to HIV-1 capsid is mediated by alteration of CPSF6 at amino acid F321 in the central proline-rich domain (37) or by mutations in CA, including residues 57, 70, 74, 77, and 105 (21,32,38,39). While reduction of CPSF6 expression or loss of capsid binding to CPSF6 in cells does not affect overall HIV-1 infection in most cell types, viral DNA integration is mistargeted outside gene-dense, transcriptionally active host chromatin to heterochromatic lamina-associated domains (28,40,41).
In this study, we investigated whether HIV-1 capsid interacts with CPSF6 in the host cell cytoplasm and whether this interaction affects capsid trafficking and subsequent virus infectivity in different cell types. Using live-cell microscopy, we visualized wildtype (WT) HIV-1 complexes colocalized with cytoplasmic CPSF6 that trafficked together on microtubules. By negative-stain transmission electron microscopy (TEM), we show that purified CPSF6 protein forms oligomers that bind and disrupt CA tubular assemblies. Inhibiting HIV-1 capsid interaction with CypA led to increased association of viral particles or in vitro CA assemblies with CPSF6 and changes in WT HIV-1 complex trafficking that corresponded to reduced infectivity. Depletion of CPSF6 affected capsid trafficking, albeit differentially depending on the cell type.
RESULTS
CPSF6 is expressed in the perinuclear region and traffics on microtubules with WT HIV-1 complexes. As CA protein may dissociate from HIV-1 nucleic acid complexes prior to entry into the nucleus where CPSF6 predominantly is expressed, we examined whether CPSF6 was expressed in the cell cytoplasm. Antibody staining of endogenous CPSF6 (NBP1-85676; Novus) or expression of green fluorescent protein (GFP)-tagged CPSF6 (CPSF6-GFP) in HeLa cells showed mostly nuclear expression as well as punctate cytoplasmic expression mainly near the nuclear membrane, which may indicate higher-order complex formation (Fig. 1A). Highly inclined and laminated optical sheet (HILO) live-cell microscopy enabled precise tracking of rapidly moving fluorescent complexes at high temporal resolution with relatively low photobleaching. Perinuclear CPSF6-GFP puncta were shown to be dynamic in cells with linear movement and were colocalized with microtubules (Fig. 1B, Movie S1). Inhibition of microtubule polymerization with nocodazole inhibited CPSF6-GFP movement in cells, suggesting that CPSF6 traffics on microtubules itself or by binding another host protein (Fig. 1C).
As HIV-1 RTCs also traffic on microtubules (42) (Movie S2), we examined the association of fluorescently labeled HIV-1 particles with cytoplasmic CPSF6-GFP in cells. Vesicular stomatitis virus glycoprotein (VSV-G)-pseudotyped WT or N74D HIV-1 encoding firefly luciferase and labeled with integrase (IN) tagged with the fluorophore mRuby3 or tagRFP was used at equal amounts (10 ng p24) to infect cells expressing CPSF6-GFP. Labeled IN colocalized with viral RNA in particles or with viral RNA and CA protein early after infection, the latter of which demarcates RTCs/PICs (43). WT complexes were colocalized with perinuclear CPSF6-GFP in HeLa cells, while N74D complexes were not (Fig. 2A). WT HIV-1 complexes were also colocalized with endogenous cytoplasmic CPSF6 in SupT1 CD4+ T cells (Fig. 2B). Multiple WT or N74D virus particles that colocalized with CPSF6-GFP were assessed by live-cell imaging over time. The fluorescence intensity of WT HIV-1 complexes remained associated with CPSF6-GFP, whereas the fluorescence intensity of N74D viral particles did not (Fig. 2C). This is consistent with a recent study showing WT HIV-1 particles associated with CPSF6 initially outside the nucleus (14). WT HIV-1 particles associated with CPSF6-GFP trafficked rapidly and linearly, suggestive of microtubule movement (Fig. 2D, Movie S3). Consistent with this interpretation, trafficking of WT HIV-1 particles associated with CPSF6-GFP was inhibited with nocodazole treatment (Fig. 2E). Finally, CPSF6 tagged with iRFP670 trafficked toward the nucleus with its TNPO3 binding partner, which was tagged with GFP (Fig. 2F, Movie S4). These results suggest that HIV-1 complexes traffic with CPSF6 on microtubules.
FIG 1. CPSF6 puncta are detected in the perinuclear region and traffic on microtubules. (A) Endogenous CPSF6 stained with antibody or expression of CPSF6-GFP is shown in HeLa cells (dotted lines, cell outlines). CPSF6 is expressed as two different isoforms composed of 551 or 588 amino acid residues; exogenously expressed proteins throughout this study were based on the 588 isoform. Quantification of endogenous cytoplasmic CPSF6 puncta by antibody staining (n = 103) or CPSF6-GFP expression (n = 96) was performed and compared to control staining (n = 53) in HeLa cells. (B) Movement of a CPSF6-GFP higher-order complex (green) is shown in a HeLa cell stained with SiR-tubulin (red) by HILO live-cell imaging. The white arrow indicates the location of the complex at the first time point; the yellow arrow indicates the location at subsequent time points, and the white line indicates the trajectory. (See also Movie S1.) (C) The percentage of cytoplasmic CPSF6-GFP complexes that trafficked >0.5 µm/s in HeLa cells treated with or without nocodazole. Error bars indicate standard deviations (STDEV) of n ≥ 185 complexes.
Changes in the CPSF6 RS domain alter WT HIV-1 complex trafficking. Truncation of CPSF6 at residue 358 (CPSF6-358), which removes the RS domain, or alteration of four positively charged amino acids (K547, R549, R559, and R561) in the RS domain to glutamic acid (CPSF6-4Glu) (Fig. 3A) leads to decreased nuclear localization of CPSF6 and restriction of WT HIV-1 infection at the step of nuclear entry (21,33). Expression of iRFP670-tagged CPSF6-4Glu or CPSF6-358 led to increased cytoplasmic localization in cells, with the truncated protein showing a greater effect than the 4Glu mutant, which revealed greater variability in cytoplasmic expression levels than the other constructs (Fig. 3B). Similarly, expression of CPSF6-358 resulted in greater restriction of WT HIV-1 infection than expression of CPSF6-4Glu (Fig. 3C). N74D HIV-1, which does not bind to CPSF6, was not restricted by either mutant.
To determine if relocalization of CPSF6 to the cytoplasm affects HIV-1 complex trafficking, live-cell imaging of WT and N74D HIV-1 complexes was performed in cells expressing fluorescently tagged CPSF6, CPSF6-4Glu, or CPSF6-358. To avoid imaging particles that have not yet fused out of endosomes into the cytoplasm, imaging was performed using mRuby3-IN-labeled WT HIV-1 particles that were also labeled with the glycosylphosphatidylinositol targeting motif of decay-accelerating factor (44) tagged with fluorogen-activating protein (FAP-GPI) to label the virus membrane. Loss of FAP-GPI signal from the mRuby3 signal signified that the HIV-1 membrane had fused with the endosome and the contents of the virus were released into the cytoplasm. During synchronized infection, we observed that nearly all mRuby3 signal separated from the viral membrane by 50 min (data not shown). Thus, for virus tracking experiments, acquisition of images began at 60 min postinfection.
Fluorescent IN complexes were analyzed for average speed, track length (distance), and track straightness (calculated by the distance between the end points over the trajectory length; also known as displacement). In HeLa cells expressing CPSF6-GFP, WT and N74D viral complexes had similar speeds and track lengths but differed in track straightness (Fig. 3D, Fig. S1), suggesting that both capsids utilize microtubules but differences in binding host proteins affect microtubule movement. WT HIV-1 particle movement in cells expressing CPSF6-GFP did not differ from normal HeLa cells (data not shown). In contrast, WT HIV-1 particles increased in all three measurements when CPSF6-4Glu-iRFP670 or CPSF6-358-iRFP670 was expressed in cells (Fig. 3D, Fig. S1). Little or no change in particle speed, track length, or track straightness was observed for N74D complexes in the presence of CPSF6-4Glu or CPSF6-358. Similar to the effect of increased CPSF6 cytoplasmic localization on WT HIV-1 infectivity, average WT virus particle speed and track length inversely correlated with the intensity of nuclear CPSF6 expression (Fig. 3E). These data suggest that HIV-1 complex trafficking is altered in the cytoplasm by enhanced CPSF6 cytoplasmic localization. Furthermore, increased HIV-1 mobility is associated with an infectivity defect.
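The three per-particle measures used here (average speed, track length, and track straightness) can be computed from time-stamped positions as in the sketch below; the coordinates are invented, and this is not the tracking software used in the study.

```python
import numpy as np

def track_metrics(times_s, xy_um):
    """Average speed (um/s), track length (um) and straightness (0-1)
    for a single particle trajectory.

    times_s: 1-D array of time stamps (s); xy_um: (N, 2) positions (um).
    Straightness = end-to-end displacement / total path length.
    """
    xy = np.asarray(xy_um, dtype=float)
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # step lengths
    track_length = steps.sum()
    duration = times_s[-1] - times_s[0]
    displacement = np.linalg.norm(xy[-1] - xy[0])
    return {
        "avg_speed": track_length / duration,
        "track_length": track_length,
        "straightness": displacement / track_length if track_length else 0.0,
    }

# Hypothetical trajectory sampled every 0.5 s
t = np.arange(0, 5.5, 0.5)
xy = np.column_stack([np.linspace(0, 4, len(t)),
                      0.2 * np.sin(np.linspace(0, 3, len(t)))])
print(track_metrics(t, xy))
```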
CPSF6 oligomerizes and disrupts assembled WT CA. Previously, we showed that purified recombinant CPSF6-358 protein formed oligomers and disrupted WT HIV-1 CA (43). To determine whether full-length CPSF6 had similar properties, it was purified and characterized with WT and N74D CA tubular assemblies. To obtain soluble CPSF6, an N-terminal maltose binding protein (MBP) fusion construct was expressed and purified, resulting in two peaks in size exclusion chromatography (Fig. S2A, labeled P1 and P2) that corresponded to the tagged full-length CPSF6, as confirmed by Western blot analysis (Fig. S2B). This suggests that the purified fusion protein may adopt different oligomeric states similar to what was observed for CPSF6-358, which also displayed two peaks in a size exclusion chromatography profile with dimer and large oligomers (43). Removal of the MBP-tag with HRV-3C protease resulted in precipitation of CPSF6 from both P1 and P2 (Fig. S2C). Therefore, MBP-tagged soluble MBP-His 6 -CPSF6-588 (denoted here as "MBP-CPSF6") was used for further binding experiments.
Incubation of in vitro preassembled WT HIV-1 CA tubes with MBP-CPSF6 (both P1 and P2) resulted in cosedimentation of MBP-CPSF6/CA complexes in the pelleted fractions (Fig. 4A). The ratio of CPSF6 to WT CA in the pelleted fraction was 0.097 ± 0.005. Negligible binding of MBP-CPSF6 to N74D HIV-1 CA tubes was observed under the same assay conditions, giving a CPSF6/CA ratio of 0.006 ± 0.008. TEM of the negative-stained samples showed a drastic structural disruption of capsid tubes when incubated with MBP-CPSF6 (P1 or P2), while N74D CA tubes remained intact (Fig. 4C). MBP-CPSF6, by itself, formed protein oligomers for both P1 and P2 fractions in CA assembly buffer (Fig. 4B). Binding of MBP-CPSF6 to WT CA tubes resulted in dissolution of tubes and an appearance of distinct curved capsid remnants associated with MBP-CPSF6 densities. Intriguingly, the amount of pelletable capsid did not change upon capsid disruption (Fig. 4A), suggesting that the predominant effect of MBP-CPSF6 is HIV-1 capsid fragmentation without dissociation into soluble proteins. Dose-dependent binding of MBP-CPSF6 to CA tubes was observed for both MBP-CPSF6 P1 and P2 by TEM (Fig. S3). This represents the first direct evidence of full-length CPSF6 binding to and disruption of WT CA tube assemblies.
HIV-1 induces cytoplasmic higher-order CPSF6 complex formation in a CypA-dependent manner. In cells, CPSF6-358 forms puncta around WT HIV-1 mRuby3-IN complexes early after virus entry, leading to premature capsid permeabilization (43). These did not form in the presence of a small-molecule inhibitor, PF74, which blocks CPSF6 binding to capsid (45), or if N74D HIV-1 was used. To determine if full-length higher-order CPSF6 complexes form in the cytoplasm, immunostaining of CPSF6 was performed before and after HIV-1 infection. Higher-order CPSF6 complexes were visualized in the perinuclear region after WT HIV-1 infection but not after N74D HIV-1 infection (Fig. 5A and B) and were associated with IN-containing complexes (Fig. 2) and CA (p24) staining (Fig. S4A). CPSF6 puncta were also observed in the nuclei of cells at later time points after WT HIV-1 infection (data not shown). Consistent with our in vitro results (Fig. 4), these data suggest that CPSF6 binds to WT capsid in the cytoplasm of infected cells.
Our previous work demonstrated that higher-order CPSF6-358 complexes were larger and formed more rapidly when CypA binding to CA was inhibited by CsA (43). Therefore, we examined whether the same would be true for full-length CPSF6. Indeed, when cells were treated with CsA and infected with WT HIV-1, greater numbers of CPSF6 higher-order complexes were observed (Fig. 5A and B). The volume of the complexes that formed in the presence of WT virus increased with increasing concentrations of CsA (Fig. 5C), suggesting that inhibiting more CypA binding allowed more CPSF6 to bind to HIV-1 capsid. In contrast, CPSF6 complex formation was indistinguishable from background after infection with N74D HIV-1 (Fig. 5A and B). To confirm that the loss of CypA binding to capsid was responsible for the increase in CPSF6 higher-order complexes, cells were infected with HIV-1 CA mutant G89V, which is defective for CypA binding (23). Because G89V HIV-1 is restricted by CPSF6-358 and thus can still bind CPSF6 (35), we expected that this virus would induce CPSF6-GFP puncta that would not increase in number in the presence of CsA, which is what was observed (Fig. 5B). As removal of CypA from HIV-1 capsid enhanced CPSF6 higher-order complex formation, we hypothesized that CPSF6 complex formation may be prevented if CypA binding to capsid was enhanced. Thus, virus was produced in the presence of CypA-DsRed, an oligomeric form of fluorescently labeled CypA with higher avidity to HIV-1 capsid than unlabeled CypA (46). Cells were infected with WT HIV-1 labeled with CypA-DsRed or with mRuby3-IN in the presence or absence of CsA and stained for CA (p24; Fig. S4B). Similar levels of p24 staining were observed under all conditions. Virus containing mRuby3-IN led to formation of many CPSF6-358-GFP puncta associated with p24 that increased with CsA treatment (Fig. S4A). However, cells infected with CypA-DsRed-labeled HIV-1 had significantly fewer GFP puncta, and CsA treatment did not increase their formation, suggesting that enhanced CypA binding to capsid prevents HIV-1 interaction with CPSF6-358 and, likely, also CPSF6.
To directly test the ability of CypA to shield CPSF6 binding to HIV-1 capsid, MBP-CPSF6 protein (P1 and P2) binding to nanotubes composed of recombinant WT CA-SP1-nucleocapsid (CA-NC) protein was quantified in the presence or absence of CypA-DsRed and in the presence or absence of CsA (Fig. S4C). Binding of CypA-DsRed to CA-NC tubes was not affected by subsequent MBP-CPSF6 binding but was inhibited by CsA treatment (Fig. S4D). MBP-CPSF6 binding to HIV-1 CA-NC tubes decreased when CypA-DsRed was already bound, effects that were rescued partially (P1) or completely (P2) by the presence of CsA (Fig. 5D). Collectively, these data demonstrate that CypA binding to capsid prevents CPSF6 from binding.
Loss of CypA binding leads to altered cytoplasmic trafficking of WT HIV-1 complexes in a CPSF6-dependent manner. Although CypA is packaged into virions, HIV-1 capsid interactions with target cell CypA modulate infectivity (24). In contrast to CPSF6 expression, endogenous CypA localized to the cytoplasm of HeLa cells with somewhat of a filamentous appearance (Fig. 5E, Fig. S5A). Interestingly, not only was CypA excluded from the nucleus, but its expression was absent from portions of the perinuclear region (Fig. S5A) that corresponded to the microtubule-organizing center (Fig. S5B). Similar expression of CPSF6 and CypA was observed in SupT1 CD4+ T cells (Fig. S5C).
To determine whether the loss of CypA binding to HIV-1 capsid could affect virus trafficking, live-cell microscopy was performed on WT, N74D, and G89V HIV-1 in the presence and absence of CsA in HeLa cells (Fig. 5F) or HeLa cells expressing CPSF6-GFP (data not shown). In the absence of drug, the average speed and track length of viral complexes were similar for WT HIV-1 and N74D HIV-1, while G89V HIV-1 complexes trafficked significantly faster and had similar track lengths. As observed for WT HIV-1 trafficking in cells expressing mutant CPSF6 (Fig. 5), the higher rate of speed of G89V viral complexes was associated with lower infectivity (Fig. S5D). However, CsA treatment led to significantly increased speed and track length of WT HIV-1 particles. CsA did not affect N74D or G89V viral particles. These results suggest that the loss of CypA binding to WT capsid influences trafficking of HIV-1 complexes in the cytoplasm in a CPSF6-dependent manner, as N74D viral particles were not affected by CsA treatment. G89V complexes that bind to CPSF6 but not CypA had altered trafficking and lower infectivity irrespective of CsA treatment. The expression and trafficking data together suggest that CypA prevents virus cores from binding prematurely to CPSF6 during trafficking to the nucleus.
Depletion of CPSF6 rescues the HIV-1 complex trafficking defect caused by loss of CypA binding. To validate that the effect of CsA treatment on WT viral complex trafficking is mediated by CPSF6, CPSF6 was depleted from cells using short hairpin RNA (shRNA) knockdown. HeLa cells were transduced with lentiviruses expressing a shRNA targeting CPSF6 or a scrambled control shRNA. Knockdown (KD) of CPSF6 was verified by immunofluorescence staining (Fig. 6A). In the absence of CsA, CPSF6 KD had no effect on WT virus particle tracking (Fig. 6B). Infectivity of WT HIV-1 in untreated cells decreased after CPSF6 depletion (Fig. 6C), which may be attributable to previously described reduced cell proliferation of CPSF6 knockout cells (40). As shown above ( Fig. 5F), CsA led to a significant increase in WT HIV-1 complex speed and track length in HeLa cells (Fig. 6B).
The CA mutants showed different cytoplasmic trafficking patterns compared to WT HIV-1. N74D complex trafficking was unaffected by CPSF6 KD and/or CsA treatment (Fig. 6B). Interestingly, depletion of CPSF6 led to a significant decrease in G89V HIV-1 complex trafficking with or without CsA treatment (Fig. 6B), which corresponded to a rescue of the infectivity defect of this mutant (Fig. 6C). Our data indicate that CypA alters HIV-1 trafficking in a CPSF6-dependent manner, further suggesting that CypA binding protects HIV-1 capsid from binding too prematurely to or too much of CPSF6.
As the HeLa cell model may not fully recapitulate HIV-1 trafficking or infection in primary CD4+ T cells, infectivity of WT HIV-1 and CA mutants was measured in phytohemagglutinin (PHA)-stimulated primary human peripheral blood mononuclear cells (PBMC) from 3 donors. Cells were infected in the presence or absence of CPSF6 depletion and CsA treatment. Depletion of CPSF6 was verified by Western blot quantification (Fig. 6D, Fig. S6A). Similar to the HeLa cell data, infection of N74D and G89V HIV-1 was inhibited in primary PBMC (Fig. 6E, Fig. S6B), as previously reported (21,47). Depletion of CPSF6 partially rescued G89V HIV-1 infectivity (Fig. 6E, Fig. S6B). Although the luciferase values were comparatively low for this mutant, the data reproducibly trended with what was observed in HeLa cells (Fig. 6C).
Depletion of CPSF6 in macrophages results in decreased HIV-1 infectivity independent of CypA binding. As we and others previously showed that N74D HIV-1 has an early infectivity defect in monocyte-derived macrophages (MDM) (28,48), we investigated whether intracellular trafficking of HIV-1 is dependent on CPSF6 and CypA. MDM were stained for endogenous CypA and CPSF6 expression. As in HeLa cells, CPSF6 expression was predominantly nuclear in MDM (Fig. 7A). However, infection with WT HIV-1 failed to induce higher-order CPSF6 complex formation in the cytoplasm (data not shown), consistent with previous results (49) and indicative of less cytoplasmic CPSF6 expression in MDM than in HeLa cells. In contrast, CypA expression differed greatly between cell types. MDM had pronounced nuclear CypA expression in addition to patches of plasma membrane and cytoplasmic expression (Fig. 7B).
CPSF6 was depleted with shRNA in MDM, which was confirmed by immunostaining ( Fig. 7A and C). Trafficking of mRuby3-IN complexes was evaluated in MDM from 3 donors with and without CPSF6 depletion. CPSF6 depletion led to a significant increase in speed and track length of WT HIV-1 complexes (Fig. 7D). Loss of CPSF6 led to a modest decrease in WT HIV-1 single-cycle infectivity, with only donor 2 being significant (Fig. 7E). However, CPSF6 depletion led to a significant decrease in spreading infection with WT HIV-1 but not with N74D HIV-1 in MDM (Fig. 7F). In normal MDM, both N74D and G89V HIV-1 had significantly decreased infectivity, which corresponded to faster mRuby3-IN trafficking (Fig. 7E). CPSF6 depletion had no effect on the trafficking of mutant CA complexes or subsequent infection. These results suggest that loss of CPSF6 in macrophages affects WT HIV-1 CA trafficking, leading to decreased infectivity.
CsA treatment of MDM significantly increased WT HIV-1 complex trafficking speed (Fig. 7G), with a corresponding decrease in infection in all donors (Fig. 7E). CPSF6 KD with CsA treatment (i.e., loss of CPSF6 and CypA binding) led to a significant increase in infectivity of WT virus in all donors compared to CsA treatment alone. N74D HIV-1 infection was significantly reduced during CsA treatment, with or without CPSF6 KD. Although the loss of CypA binding significantly reduces virus infectivity, these results suggest that alteration of HIV-1 trafficking and infectivity by depletion of CPSF6 is independent of CypA binding in macrophages.
DISCUSSION
HIV-1 capsid has been shown to interact with many host cell factors (17). Included in this list is CPSF6, which is involved in mRNA cleavage and polyadenylation in the nucleus (21). In this study, we detected cytoplasmic, punctate expression of endogenous or fluorescently tagged CPSF6 in the perinuclear region of cells. Full-length MBP-CPSF6 protein formed oligomers in vitro that bind to HIV-1 CA assemblies, similar to what we previously reported for CPSF6-358 (43), which lacks the RS domain but retains the central proline-rich domain that mediates CA binding. In cells, CPSF6 puncta likely represent higher-order complexes that bind to HIV-1 capsid in the cytoplasm after viral entry. This is consistent with a recent study in which examples of CPSF6 associated with HIV-1 complexes in the cytoplasm were shown prior to nuclear import (14). Removal or truncation of the CPSF6 RS domain leads to mislocalization of CPSF6 to the periphery of the cell due to loss of TNPO3 binding and reduced HIV-1 nuclear import and infectivity (21,33,35). Here, we demonstrate with live-cell imaging that HIV-1 complexes trafficked with CPSF6 in a capsid-dependent manner. In addition, virus complexes increased in speed in the presence of CPSF6 RS mutants or by introducing the CA N74D mutation that abolishes CPSF6 binding. Interestingly, increased microtubule trafficking of HIV-1 was associated with reduced infectivity. Previously, we showed significant colocalization of CPSF6-358 to mRuby3-IN complexes in the cytoplasm, likely due to a high concentration at the cell periphery that was not seen with full-length CPSF6 (43). It is possible that overall faster and longer trafficking is due to CPSF6-mediated modulation of capsid integrity, which may alter the accessibility of capsid to other host proteins, such as certain microtubule motor proteins or motor adaptors that can alter cargo trafficking speed, track length, and bidirectional transport (50).
Previously, it was shown that HIV-1 capsid traffics on microtubules on its way to the nucleus (42) and that CA binds key NPC component proteins (17). HIV-1 capsid uncoating is delayed by destabilization of microtubules or knockdown of the microtubule motor proteins kinesin and dynein (5,6), suggesting that microtubule trafficking and NPC binding are linked to capsid uncoating. Here, we demonstrate that HIV-1 capsid trafficked with CPSF6 and TNPO3 on microtubules and that CPSF6 facilitated CA tubular disassembly. Our previous work demonstrated that CPSF6-358 associates with HIV-1 complexes in a CA-dependent manner and leads to more rapid uncoating kinetics and reduced virus infectivity (43). Therefore, CPSF6 may play a role in both HIV-1 capsid uncoating, which may initiate in the cytoplasm, and nuclear import (49,51).
CypA binding to HIV-1 capsid was described nearly 3 decades ago (18), yet until recently, its role in HIV-1 infection was ill defined. Here, we show that the loss of CypA binding to HIV-1 capsid in infected cells due to CsA treatment coincided with increased capsid binding to CPSF6, which is consistent with our previous results with fluorescently labeled CPSF6-358 (43). Conversely, production of HIV-1 in the presence of CypA-DsRed, which has increased binding to capsid compared to untagged CypA (46), reduced CPSF6 binding to capsid. This was also seen in vitro in a competitive binding assay with CA-NC assemblies in the presence of CypA-DsRed and MBP-CPSF6 proteins. Trafficking of HIV-1 complexes in cells increased in the presence of CsA or with the G89V CA mutation, which correlated with decreased infectivity. These results suggest that CypA binding to capsid prevents CPSF6 binding. As CypA expression in HeLa cells was restricted to the cell periphery, from which CPSF6 was excluded, we hypothesize that CypA interacts with HIV-1 capsid in the cell periphery first to prevent CPSF6 binding to HIV-1 capsid before the perinuclear region, ensuring that uncoating does not occur prematurely (Fig. 8).
Depletion of CPSF6 in cells did not affect HIV-1 infection in HeLa cells or PBMC, as previously demonstrated (21), nor did it alter HIV-1 trafficking to the nucleus in HeLa cells. As N74D HIV-1 was associated with microtubules and trafficked in a linear fashion like WT HIV-1, we suggest that HIV-1 microtubule trafficking and nuclear import can be independent of CPSF6. Surprisingly, the loss of CPSF6 expression affected WT and G89V HIV-1 trafficking during CsA treatment but did not affect N74D HIV-1 trafficking. Alterations in HeLa trafficking under these conditions corresponded with HeLa infectivity data. These results again highlight the interplay of CPSF6 and CypA in capsid interaction. As it has not been possible to knock out CPSF6 or completely deplete CPSF6 from these cells, these data further suggest that the loss of CypA binding allows more CPSF6 binding to occur outside the nucleus, which affects trafficking and infectivity.
Although high-speed imaging of HIV-1 particles in CD4+ T cells was not possible, infection results after CPSF6 depletion and/or CsA treatment in primary PBMC largely mimicked those seen in HeLa cells. In addition, expression of CPSF6 and CypA is similar in CD4+ T cells and HeLa cells. However, CsA treatment significantly reduced the infectivity of WT HIV-1 as well as the CA mutants. Recent studies revealed that the loss of CypA binding, by either depletion, CsA treatment, or CA mutations, led to significant restriction of HIV-1 infection by human TRIM5α in CD4+ T cells (30,52). Similarly, we show that trafficking of G89V HIV-1 complexes or WT HIV-1 in the presence of CsA was aberrant in MDM, which corresponded with reduced infectivity. As in CD4+ T cells, the loss of CypA binding to HIV-1 capsid in macrophages led to human TRIM5α restriction prior to completion of reverse transcription (29). Furthermore, N74D HIV-1 is restricted in macrophages and CD4+ T cells by TRIM34 in a TRIM5α-dependent and CPSF6-independent manner (52). Thus, CypA binding to HIV-1 capsid may be protective against multiple capsid-binding cellular factors. Further structural studies will be needed to understand the interplay of CypA, CPSF6, and TRIM5α/TRIM34 binding to WT and mutant HIV-1 capsids.
While some CA may be lost from the capsid in the cytoplasm, CA is required for docking at NPCs and for nuclear entry (11,12,53). Recent work from several groups has shown that capsid uncoating and reverse transcription are completed inside the nucleus in multiple cell types (14,54,55). However, some differences have been observed in different cell types. For example, imaging of HIV-1 capsids showed they were more stable in the cytoplasm of MDM than HeLa-derived cells (56). Similarly, most CA staining was lost from HIV-1 complexes in the nucleus of HeLa cells, while more viral complexes were CA positive in MDM (51,56,57) and colocalized with CPSF6 puncta (49,55). Here, we show that CypA localized to both the cytoplasm and nucleus in MDM, whereas it was only expressed in the periphery of the cell in HeLa cells. Thus, HIV-1 capsid may be shielded from CPSF6 and other host factors by CypA for longer periods of time in MDM (Fig. 8). Also, little to no cytoplasmic CPSF6 puncta are detected in MDM. As we showed that CPSF6 binding to capsid was prevented by CypA binding and promoted capsid dissociation, CypA expression in the perinuclear region and nucleus of MDM could explain why more CA remains associated with the viral genome in these cells compared to HeLa cells. Our results contribute to the growing literature on the ability of HIV-1 capsid to bind multiple host cell factors in a highly orchestrated manner to promote viral infectivity, which differs depending on the specific cellular environment.
The CPSF6 gene and the MBP tag were amplified by PCR and subcloned into the pcDNA3.1(+) mammalian expression vector (Thermo Fisher Scientific) using the NEBuilder HiFi assembly kit (New England Biolabs) after linearization with the restriction enzymes EcoRV and XbaI. The resulting insert, designated MBP-His6-CPSF6-588, has a leading Kozak sequence, an N-terminal MBP tag, followed by a hexahistidine tag (His6).
HIV-1 CA and CA-NC were previously described (60,61). In brief, CA and CA-NC regions were amplified from the cDNA of Pr55Gag, which was obtained from the NIH AIDS Research and Reference Reagent Program, Division of AIDS, NIAID, NIH, and subcloned into pET21 (EMD Chemicals, Inc.) using the NdeI and XhoI sites. Proteins were expressed and purified as previously described for Gag (ΔMA15-100 Δp6) (2, 62). CypA-DsRed-Exp2 (gift from Greg Melikyan) was cloned into the pET28 vector, resulting in an N-terminal His6-tagged protein.
Human PBMCs were isolated from leukapheresis obtained from the Central Blood Bank (Pittsburgh, PA) using Ficoll-Paque Plus (GE Healthcare) density gradient centrifugation following the manufacturer's instructions. PBMCs were cultured in RPMI 1640 medium supplemented with 10% FBS, PSG, and 20 U/ml recombinant interleukin-2 (IL-2; Thermo Fisher Scientific) at 37°C and 5% CO2. To expand T lymphocytes, PBMCs were stimulated with 50 U/ml IL-2 and 5 μg/ml phytohemagglutinin (PHA; Sigma-Aldrich) for 72 h prior to infection or transduction. CD14+ monocytes were isolated from PBMCs using human anti-CD14 magnetic beads with LS columns (Miltenyi Biotec). CD14+ monocytes were differentiated into MDM in RPMI 1640 medium supplemented with 10% FBS, PSG, and 50 ng/ml recombinant granulocyte-macrophage colony-stimulating factor (R&D Systems) for 7 days at 37°C and 5% CO2 prior to experimentation.
HIV-1 infection assays. HeLa cells and differentiated macrophages were seeded in 24-well plates overnight and then transduced with shRNA-encoding viruses. Next, 48 h posttransduction, cells were infected with equal p24 amounts of luciferase reporter viruses for 48 h. Cells were lysed and assessed for luciferase production (Promega) with a 1450 MicroBeta TriLux microplate luminescence counter (PerkinElmer). CPSF6 KD efficiency was assessed by CPSF6 antibody staining (NBP1-85676). PHA-stimulated PBMCs were transduced with viruses encoding shRNA and selected with 2 μg/ml puromycin for 72 h. KD efficiency was measured by Western blot analysis. PBMCs were restimulated with PHA and challenged with luciferase reporter viruses for 72 h prior to luciferase measurement. For assays including treatment with CsA, cells were treated with CsA (10 μM) at the time of plating and remained in drug-containing medium throughout the assay.
Replication of HIV-1 in macrophages was performed in duplicate by infecting transduced macrophages with WT or N74D HIV-1 NL4-BAL at a multiplicity of infection of 0.1. Supernatant was collected and new medium was added every 2 days. Viral replication was quantified by p24 production in the supernatant by ELISA (XpressBio) at day 8.
Fluorescence microscopy. For fixed HeLa cell or macrophage imaging, cells were plated in MatTek dishes overnight. Synchronized infections with VSV-G pseudotyped HIV-1 were performed by incubation at 4°C for 10 min, followed by aspiration of medium, addition of cold fluorescently labeled HIV-1 (5 ng p24), and further incubation at 4°C for 15 min to allow virus attachment. Cells were then incubated at 37°C for 20 min, followed by washing with warm medium and incubation in fresh medium. At 1 h postinfection, cells were washed with phosphate-buffered saline (PBS), pH 7.4, and fixed with 2% paraformaldehyde (PFA). For SupT1 cell imaging, cells were infected in 6-well plates at room temperature with 10 ng p24 equivalent of virus per million cells for 30 min and then at 37°C for 3.5 h. Cells were centrifuged and washed twice with PBS and fixed with 2% PFA for 15 min. MatTek dishes were precoated with Cell-Tak (Corning) following the manufacturer's instructions at least 2 h before seeding SupT1 cells. After permeabilization with 0.1% Triton X-100 for 15 min, fixed samples were blocked with serum matching the secondary antibody for 45 min. Primary antibodies were added to the fixed cells in protein binding buffer (PBB) (2% bovine serum albumin in PBS) for 1 h and washed with PBB. Secondary antibodies were added to the cells in PBB for 1 h. After washing with PBB and PBS, the cells were stained with Hoechst (1:2,000) and mounted with a coverslip using gelvatol.
A Nikon A1 confocal microscope was used to acquire three-dimensional (3D) stack images of fixed samples with a 100× 1.49 NA oil-immersion objective. LU-NV laser launch (Nikon) was used to emit lasers at 405 nm, 488 nm, 561 nm, and 640 nm. Fields of view were randomly chosen by quick scanning in the Hoechst channel. The ND Acquisition option in NIS-Elements software (Nikon) was applied to collect 3D multichannel imaging (1,024 by 1,024 pixels) with 2× line averaging. Images of 488-nm and 561-nm channels were acquired by gallium arsenide phosphide (GaAsP) detectors (Nikon). 3D stacks were acquired with 0.15- to 0.5-μm step intervals to cover the entire cell volume (6 to 10 μm) with a motorized piezo Z stage (Nikon).
For live-cell HILO imaging, a Nikon Ti TIRF microscope with a 100× 1.49 NA oil-immersion objective and a Photometrics Prime 95B sCMOS camera was used. In multicolor live-cell imaging experiments, a Finger Lakes Instrumentation high-speed filter wheel was used. Synchronized infections in HeLa cells or macrophages were performed as described above. After shifting to 37°C for 20 min, cells were washed with prewarmed fresh medium (FluoroBrite medium [Thermo Fisher Scientific] for HeLa cells or RPMI 1640 medium for macrophages). At 1 h postinfection, the MatTek dish was loaded on the stage insert and maintained at 37°C (Tokai Hit stage chamber). Images were acquired at a rate of at least 1 frame per second (FPS) to track viruses for 10 min. For visualizing microtubules, 1 μM SiR-tubulin (Cytoskeleton) was added to the medium 30 min prior to imaging. For visualizing viral membranes, the MG-B-Tau FAP dye (64) was added to the virus at 500 nM for 10 min prior to addition to cells. RAM capture in Nikon Elements was used to achieve faster multicolor live-cell imaging (~2 FPS).
Imaging quantification and data analysis. All imaging quantification was performed with General Analysis 3 in Nikon Elements (5.20.00 or above). Briefly, a cell nuclei binary mask was created using the Hoechst signal to calculate the number of cells in each field of view. CPSF6 localization and quantification were determined by creating binary masks of CPSF6 within the cells. Cytoplasmic CPSF6 was determined by subtracting the CPSF6 binaries that colocalized with the Hoechst (nucleus) signal from the total CPSF6 binaries. Mean intensity and volume were recorded for each binary. Virus localization was determined with the spot detection function to create binary masks for spots positive for mRuby3/tagRFP, FAP-GPI, CypA-DsRed, or p24 signals. HIV-1 trafficking data were determined by using the track function with the spot binaries in random and constant motion mode. Any tracks with fewer than 20 frames were excluded from the data analysis.
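The track filtering and speed computation described above were carried out inside Nikon Elements; the following is a minimal sketch of the equivalent logic, assuming tracks are exported as per-frame (x, y) coordinates in micrometers. The function names, the 1-s frame interval, and the example data are illustrative assumptions, not the actual analysis pipeline.

```python
import numpy as np

FRAME_INTERVAL_S = 1.0   # assumed acquisition rate of about 1 frame per second
MIN_FRAMES = 20          # mirror the rule excluding tracks with fewer than 20 frames

def track_metrics(track_xy):
    """Return (track_length_um, mean_speed_um_per_s) for one particle track,
    or None if the track is shorter than MIN_FRAMES."""
    track_xy = np.asarray(track_xy, dtype=float)
    if track_xy.shape[0] < MIN_FRAMES:
        return None
    steps = np.diff(track_xy, axis=0)            # frame-to-frame displacements
    step_lengths = np.linalg.norm(steps, axis=1)
    track_length = step_lengths.sum()            # total path length
    mean_speed = step_lengths.mean() / FRAME_INTERVAL_S
    return track_length, mean_speed

# Example with a synthetic 25-frame track drifting along x
example_track = np.column_stack([np.linspace(0.0, 5.0, 25), np.zeros(25)])
print(track_metrics(example_track))
```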
Protein expression and purification. The full-length CPSF6 protein was expressed in a suspension-adapted HEK293 cell line (Expi 293F; Thermo Fisher Scientific) by transfection of expression plasmid using ExpiFectamine 293 (Thermo Fisher Scientific) according to the manufacturer's instructions. Following transfection, the cells were grown at 37°C by shaking at 125 rpm in 8% CO2 and 80% humidity for 2 days. The cells were harvested after 48 h of transfection by centrifugation at 100 × g for 10 min. The cell pellet was washed with cold PBS and flash-frozen and stored at −80°C.
The thawed cell pellet was resuspended in buffer A (50 mM HEPES-KOH [pH 8], 500 mM NaCl, 5% glycerol, and 2 mM dithiothreitol [DTT]) supplemented with detergents (1% Tween 20 and 0.3% NP-40) and DNase I (50 μg/ml; Sigma-Aldrich) in the presence of a cocktail of protease inhibitors (Roche). After 2 h of rotation at 4°C, the lysate was homogenized by 15 strokes in an ice-cold, tight-fitting Dounce homogenizer. The homogenate was then centrifuged at 21,000 × g at 4°C for 30 min. After centrifugation, the supernatant was collected and mixed with 1 ml of amylose agarose resin (New England Biolabs) preequilibrated with buffer A per 50 ml of cell homogenate. The mixture was incubated with rotation at 4°C for 2 h and then transferred to a column. The resin was washed with 50× resin volumes of buffer A. To elute the recombinant protein, the resin was incubated with buffer A containing 100 mM maltose for 15 min at 4°C, and the flowthrough was collected as eluate. The eluate was applied to a Hi-Load Superdex 200 16/60 column (GE Healthcare) in buffer A, and fractions containing the target protein were collected and concentrated to 6 to 8 mg/ml using Amicon concentrators (Millipore, Billerica, MA, USA), flash-frozen with liquid nitrogen, and stored at −80°C.
SDS-PAGE and Western blot analysis. For in vitro CPSF6 experiments, an equal volume of cell lysate and each fraction from the column were mixed with 4× NuPAGE lithium dodecyl sulfate (LDS) sample buffer (Thermo Fisher Scientific) supplemented with 10 mM DTT and loaded onto a 10% Bis-Tris NuPAGE gel (Thermo Fisher Scientific), alongside a protein molecular weight marker (BLUEstain protein ladder; Gold Biotechnology). Gels were run at 100 V for 15 min and then 150 V for 40 min in NuPAGE morpholineethanesulfonic acid (MES) SDS running buffer, and the proteins were subsequently transferred onto PVDF or nitrocellulose membranes using iBlot transfer stacks (Thermo Fisher Scientific). The membranes were blocked at ambient temperature for 1 h in bovine serum albumin (BSA) blocking buffer, followed by overnight incubation at 4°C with rabbit anti-maltose binding protein antibody (ab9084; Abcam) or rabbit anti-CPSF6 antibody (EPR12898; Abcam) and then a further hour with monoclonal anti-rabbit immunoglobulins-alkaline phosphatase antibody at ambient temperature. Between each antibody incubation, the membranes were washed three times with Tris-buffered saline (TBS) buffer containing 0.1% Tween 20, and finally, the membranes were developed with BCIP (5-bromo-4-chloro-3-indolyl phosphate)/nitroblue tetrazolium (NBT) color development substrate (Promega) to enable visualization of protein bands. Each experiment was performed at least three times.
For measurement of CPSF6 expression in cells, an equal number of transduced and puromycin-selected HeLa cells or PBMCs were lysed with RIPA buffer (Bio-Rad), mixed with sample buffer (Bio-Rad), and heated to 100°C for 5 min. Denatured cell lysate was run on precast 4 to 15% Criterion Tris-HCl gels (Bio-Rad) at 150 V for 1.5 h. Proteins were transferred to nitrocellulose membranes using a semidry transfer apparatus (Thermo Fisher Scientific) at 160 mA for 1 h. The membranes were blocked with 5% milk in PBS containing 0.1% Tween 20 at room temperature for 20 min. The primary antibodies anti-CPSF6 (NBP1-85676) and anti-α-tubulin (T5168; Sigma-Aldrich) were used with secondary anti-mouse IgG or anti-rabbit IgG conjugated with horseradish peroxidase antibodies (A9917 and AP132P; Sigma-Aldrich). SuperSignal West Pico chemiluminescent substrate (Thermo Fisher Scientific) was used to visualize protein bands with Amersham hyperfilm (GE).
Capsid binding assay. Tubular assemblies of WT HIV-1 CA protein were prepared at 80 μM (2 mg/ml) in 1 M NaCl and 50 mM Tris-HCl (pH 8.0) buffer at 37°C for 1 h. N74D CA was dialyzed against 1 M NaCl and 50 mM Tris-HCl (pH 8.0) buffer at 4°C overnight at the concentration of 20 mg/ml. Before binding, the assembled mixture was diluted to 80 μM (2 mg/ml). For the binding assays, the binding buffer was the same as the stock buffer for MBP-CPSF6 proteins described above. Different concentrations of MBP-CPSF6 were added to preassembled CA tubes at a CA concentration of 64 μM. The reaction mixtures were incubated on a rocking platform at room temperature for 1 h with gentle mixing at 10-min intervals. Then, 5-μl samples were withdrawn from the reaction mixtures and immediately used for electron microscopy (EM) analysis. The remaining samples were pelleted at 21,000 × g for 30 min, and supernatants (s) and pellets (p, resuspended in the same volume) were mixed with 4× LDS loading buffer for gel analysis. Supernatant and pellet samples, without boiling, were loaded on 10% SDS-PAGE and stained with Coomassie blue. Each experiment was performed at least three times.
To determine the binding ratio of MBP-CPSF6:CA, SDS-PAGE gels were scanned using an Epson 4990 scanner. The integrated intensities of CA and MBP-CPSF6 protein bands were measured using the ImageJ 1.40 program (NIH). The molar ratios were calculated according to the formula (MBP-CPSF6 intensity/MBP-CPSF6 molecular weight)/(CA intensity/CA molecular weight) and calibrated using the input ratios as standards.
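As a simple illustration of the molar-ratio calculation described above, the sketch below converts integrated band intensities into an MBP-CPSF6:CA molar ratio. The molecular weights and intensity values are placeholders, and the calibration against known input ratios is reduced to a single scale factor for brevity.

```python
def molar_ratio(cpsf6_intensity, ca_intensity,
                cpsf6_mw=110.0, ca_mw=25.6, calibration=1.0):
    """MBP-CPSF6:CA molar ratio from integrated band intensities.

    Implements (CPSF6 intensity / CPSF6 MW) / (CA intensity / CA MW),
    scaled by a calibration factor derived from known input ratios.
    Molecular weights (kDa) are approximate placeholders.
    """
    return calibration * (cpsf6_intensity / cpsf6_mw) / (ca_intensity / ca_mw)

# Example: hypothetical densitometry values from one binding reaction
print(round(molar_ratio(cpsf6_intensity=5200.0, ca_intensity=18000.0), 3))
```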
For binding of CypA and MBP-CPSF6 with HIV-1 capsid, 5 μM CypA-DsRed was added to 10 μM preassembled WT CA-NC tubes, and at the same time 15 μM competitive inhibitor CsA was added as a negative control. The reaction mixtures were incubated on a rocking platform at room temperature for 1 h with gentle mixing at 10-min intervals. Then, 5 μM MBP-CPSF6 P1 or P2 was added to the reaction and incubated on a rocking platform at room temperature for 1 h with gentle mixing at 10-min intervals. At the end of the incubation, the samples were pelleted as described above. Each experiment was performed at least three times. Nikon Elements 5.0 was used to quantify the binding ratio of CypA and MBP-CPSF6 with preassembled CA-NC tubes.
TEM analysis. The morphologies of different variants of CA assemblies and CA-MBP-CPSF6 complexes were characterized by TEM. Samples were stained with fresh 2% uranyl formate, deposited onto 400-mesh carbon-coated copper grids, and dried for 30 min. TEM images were acquired on a Tecnai T12 transmission electron microscope at 120 kV.
Statistics. Each virus infection experiment and associated imaging analysis was performed in at least two separate replicates. Compiled data were obtained from at least two independent experiments or three donors. Statistical significance was determined by two-sided unpaired Student's t test using Prism (GraphPad). P values of <0.05 were considered statistically significant. ns, P > 0.05; *, P = 0.01 to 0.05; **, P = 0.01 to 0.001; ***, P = 0.001 to 0.0001; ****, P < 0.0001.
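For reference, the two-sided unpaired Student's t test described above corresponds to the call sketched below; the input values are invented and do not reproduce any experimental data.

```python
from scipy import stats

# Hypothetical infectivity readings (arbitrary luciferase units) for two conditions
control   = [1.00, 0.92, 1.08, 0.97]
treatment = [0.61, 0.55, 0.70, 0.66]

# Two-sided unpaired Student's t test (equal variances, as in the classical test)
t_stat, p_value = stats.ttest_ind(control, treatment, equal_var=True)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # significance threshold: P < 0.05
```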
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. MOVIE S1, MOV file, 0.5 MB.
|
2020-06-11T09:04:12.140Z
|
2020-06-05T00:00:00.000
|
{
"year": 2021,
"sha1": "b41c4ed939b358140d7a633e9d28766dee74e8e7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/mbio.03142-20",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a9834e54e1e710e1c8f7991d9101e9fc4824c774",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
]
}
|
237545956
|
pes2o/s2orc
|
v3-fos-license
|
COVID-19 as an opportunity to reveal the impact of large hospital expansion on the healthcare delivery system: evidence from Shanghai, China
Background The expansion of large hospitals on the medical service market’s supply side has always been an intensely debated topic. In this study, we conducted statistical analysis on the natural shock of COVID-19 to investigate whether the large hospitals will draw health demand from the small hospitals when a supply capacity surplus is present, a phenomenon otherwise known as the “siphon effect”. Methods We collected the monthly hospital income and service data, including outpatient income, inpatient income, number of visits, and discharges, from all public hospitals, from January 2018 to July 2020 in Shanghai. A difference-in-differences (DIDs) method was applied to analyze the existence of the large hospitals’ siphon effect by identifying the differences in the healthcare service market share change between large and small hospital groups at the height of pandemic (February and March, 2020) and the postpandemic period (April and May, 2020). Case mix index (CMI) was used to verify whether the reduction in healthcare amount and market share of small hospitals was due to unnecessary care. Results In total, 156 public hospitals, including 46 large hospitals and 110 small hospitals, with an average number of beds of 1,079.21 and 345.25, respectively, were involved in this study. At the height of the pandemic, the healthcare service volume and revenue in public hospitals in Shanghai experienced a sharp decline, especially for large hospitals and inpatient services. Compared to small hospitals at the height of the COVID-19 pandemic, large hospitals’ market share decreased significantly in outpatient and inpatient services for overall and nonlocal patients (P<0.05). During the postpandemic period, large hospitals’ market share increased significantly in outpatient and inpatient services for overall and local patients (P<0.05). This increase was more substantial in inpatient services. Conclusions Under conditions of the COVID-19 pandemic of higher care-seeking costs in the large hospitals, some of the healthcare services typically provided by large hospitals were then supplied by small hospitals. Furthermore, the siphon effect of large hospitals could be clearly observed when a supply capacity surplus was present and external constraint on patients’ care-seeking behavior was absent.
Introduction
Hospitals have been the target of health reforms aimed at improving efficiency, access, and value and at controlling costs (1). In China, public hospitals are a critical part of the healthcare system, accounting for 85% of total visits in 2019. In 1949, the People's Republic of China established a three-tier healthcare system, and, generally, the level of a hospital has been strictly associated with its size throughout the country.
Since healthcare reform was initiated in the 1980s, public hospitals in China have been required to balance their operating expenses and drug sales. Hence, China's market-oriented medical reform shifted hospitals from medical service providers heavily reliant on financial subsidies to profit chasers (2). This induced hospitals to establish incentives for capital-intensive investments while ignoring human capital and has driven medical staff and patients to higher-level hospitals, which has been termed the "siphon effect" of higher-level hospitals on lower-level hospitals (3). In 2017, the First Affiliated Hospital of Zhengzhou University, in central China, was the world's largest hospital, with about 10,000 beds. At least 19 super hospitals (those with more than 4,000 beds) have appeared in China in recent years.
In 2009, to resolve the issue of healthcare being "too difficult and too expensive to access", which was, to some extent, associated with the profit chasing of large public hospitals, China launched a new comprehensive healthcare reform to provide all citizens with equal access to primary healthcare (PHC) services with reasonable quality and financial risk protection (4). The new reform responded to public discontent with government underfunding by injecting massive funding into the health sector. Many policies were implemented to deal with profit-driven hospitals, such as a zero-markup policy on drugs and medical consumables and payment by Diagnosis-Related Groups (DRG) (5). Of greater importance, however, was the reconstruction of a PHC-based integrated delivery system (6).
Unfortunately, despite the increase in funding, the share of outpatient visits at PHC centers has decreased relative to tertiary hospitals, and the share of hospitalizations at tertiary hospitals has increased (7). In 2019, the tertiary hospitals accounted for 40.46% of total beds, 72.16% of total medical revenue, 53.56% of total visits, and 49.49% of total discharges (8). Previous studies reported the extensive coexistence of congestion in higher-level hospitals and idle resources in lower-level hospitals, even in the same areas in China (3,6,7). This is indicative of the wasted resources and low efficiency in China's healthcare system.
The role of large hospitals in the health delivery system is a controversial topic across the globe. Some economics-based studies suggest that larger hospitals benefit from a substantial scale effect, and large hospitals have also been associated with lower prices and higher efficiency (9-12). The critical role of small hospitals has also been noted. Smaller hospitals can also provide high-quality, safe care to their local populations (13), and many studies have asserted that hospital expansion in China is misguided and might harm the PHC-based integrated delivery system (14), increase health expenditure, exacerbate overtreatment, and erode the trust between patients and physicians (15).
Contrary to the orientation of the PHC-based integrated delivery system, patients seem to prefer larger hospitals with better reputations (16,17). In China, this preference is also apparent and easier to observe because of the less strict referral system and lax medical insurance restrictions (18,19). As a consequence of this preference for large hospitals, patients previously treated in small hospitals could easily be drawn to the large hospitals should the large hospitals accept them, which is the siphon effect at work. This phenomenon is partly a result of the supply capacity and scale of the hospital. Thus, it has been hard to identify this siphon effect because hospitals and their supply capacity have expanded only gradually, although the expansion has been prominent.
However, since December 2019, COVID-19 has spread extremely quickly around the world. The pandemic's tragedy has paradoxically produced an opportunity for researchers to disclose phenomena that are difficult to identify or prove under normal conditions (20). Most Chinese hospitals that were neither located in high-risk districts nor appointed as specialized COVID-19 treatment hospitals suffered a huge decrease in medical service volume from February 2020 to March 2020, the height of the COVID-19 pandemic (21-23). Thus, the pandemic suddenly created a large surplus in healthcare supply capacity in many hospitals. Under this situation, the larger hospitals' siphon effect becomes more observable, and the poaching of health demand from small hospitals to fill the supply capacity surplus of larger hospitals could potentially be seen on a much larger scale.
In this study, based on all public hospitals' health service data in Shanghai, we constructed difference-in-difference (DID) models for the novel conditions of COVID-19 to investigate whether the larger hospitals poached health demand from the small hospitals.
The characteristics of the healthcare delivery system in Shanghai
Shanghai is one of the four direct-administration municipalities of China. With a permanent resident population of 24.28 million, Shanghai is a center for finance, research, technology, manufacturing, transportation, and healthcare in China. By the end of 2019, there were 387 hospitals and 77.7 thousand doctors in Shanghai, providing 171.75 million visits and 4.60 million discharges (available online: http://wsjkw.sh.gov.cn/tjsj2/20200724/6ac31287f7074c869f563fefe79c75d3.html), mostly in large hospitals. Additionally, as a healthcare center in China, Shanghai plays a significant role in providing healthcare services to nonlocal patients (24). In 2018, Shanghai provided 6.67% of its visits and 29.24% of its discharges to nonlocal patients, accounting for 14.65% of the total expenditure, with 76.65% of those visits and 83.05% of those discharges being provided in tertiary hospitals, which are typical large hospitals in China (25).
The temporary supply surplus of hospital service capacity during the COVID-19 period and study design
Given the population mobility restrictions among the areas afflicted by the COVID-19 pandemic (26), the healthcare volume of tertiary hospitals decreased more than that of other, lower-level hospitals, especially for nonlocal patients. In our previous study, we found that, from February 2020 to May 2020, the numbers of visits and discharges of all public hospitals in Shanghai fell dramatically compared to the same months in 2019 (Figure 1). Consequently, the pandemic led to a sudden and massive healthcare supply capacity surplus for all public hospitals in Shanghai (27).
To explore the siphon effect of large hospitals, we divided all hospitals into two groups according to each hospital's grade information in January 2020: the large group (tertiary) and the small group (nontertiary). In Shanghai, the tertiary hospitals were given more resources and more favorable policies compared to the small hospitals.
The market shares of the hospitals in each group were relatively stable before the sudden shock brought by the COVID-19 pandemic. Thus, the market share change of the large hospitals at the height of the COVID-19 pandemic, compared to the small ones, was investigated to identify the market share change under a sudden shock with certain control measures in place. Further, just as a siphon can only be clearly observed when there is a substantial height difference between the top and bottom of the down pipe, the poaching behavior of large hospitals toward small hospitals can be better observed when there is a substantial healthcare supply capacity surplus in the large hospitals and no specific restriction, such as mobility control measures, on patients' choices (like the water in the pipe). Thus, the treatment group was identified as the large hospitals during the postpandemic period, considered to be between April 2020 and May 2020 in this study, according to the schedule of COVID-19 control measures in China.
Hospitals selected and data source
We chose all 159 public hospitals among the 387 hospitals in Shanghai as the basic sample used in this research. These hospitals were chosen because public hospitals are the main players in the healthcare market in China (3,6). These public hospitals accounted for 89.10% of total visits and 89.64% of hospitalization services in 2019 in Shanghai.
Each public hospital had a mean of 566.19 beds, and there were 23 hospitals with at least 1,000 beds. Furthermore, to guarantee data quality and consistency of measurement methods, hospitals with noncontinuous data from January 2018 to July 2020 were excluded.
To determine the presence of the siphon effect, or to verify whether the large hospitals poached patients from the small hospitals, longitudinal monthly hospital healthcare service data from January 2018 to July 2020 were examined. All hospitals' data were collected from the China Statistical Survey of Health Resources and Services Program (SHRSP), which has kept records of monthly hospital economic operation data from 2007 onward (http://www.nhc.gov.cn/bgt/pw10709/200709/c2f58da8d8754fe09f3b364da335b95f.shtml). We obtained the data from the Shanghai Municipal Health Commission.
Outcomes measurement
The market shares of healthcare service indicators, including outpatient income, discharge income, number of visits, and number of discharges, for each hospital in each month were used as the outcome indicators for the allocation of patients in the healthcare service market. The market share of a specific hospital-month indicator was expressed as that hospital's percentage of the total across all hospitals. Because of differences in care-seeking preferences and the interprovince population mobility control policies, we calculated these market-share indicators separately for local and nonlocal patients.
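To make the market-share definition concrete, a minimal pandas sketch is shown below. The column names and example records are hypothetical and do not reflect the SHRSP data schema.

```python
import pandas as pd

# Hypothetical monthly hospital records: one row per hospital per month
df = pd.DataFrame({
    "hospital_id": ["H1", "H2", "H3", "H1", "H2", "H3"],
    "month":       ["2020-02"] * 3 + ["2020-03"] * 3,
    "visits":      [12000, 4000, 4000, 15000, 5000, 5000],
})

# Market share: each hospital's share (%) of that month's total for the indicator
df["visit_share"] = (
    df["visits"] / df.groupby("month")["visits"].transform("sum") * 100
)
print(df)
```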
Statistical analysis
DID models were constructed to analyze the large hospitals' characteristic effect on the healthcare delivery system at the height of the COVID-19 pandemic. The change was identified by the differences in market share change between the large and small hospitals. Based on our data source, the DID model was constructed as follows:

RS i,t = β 0 + β 1 treat i + β 2 pandemic t + β 3 treat i *pandemic t + Σ k γ k type i + ε i,t [1]

where RS i,t stands for a series of the outcome indicators mentioned above. The outcome indicators in different hospitals in the same month were taken as different individuals to eliminate the monthly effect in the healthcare market. Treat i is a dummy variable that has a value of 1 if a sample belongs to the large hospitals group; pandemic t is an indicator variable for which a value of 1 indicates the worst period of COVID-19 in China, specifically February 2020 to March 2020. The key explanatory variable was treat i *pandemic t , which denotes the sample belonging to the large hospitals and being afflicted by the nationwide spread of the COVID-19 pandemic. The associated parameter β 3 denotes the relative change in the market share of large hospitals compared to the small ones under the epidemic's shock. Type i is a categorical variable, controlling for the type of each hospital. In this study, the hospitals were divided into three categories: general hospitals, traditional Chinese medicine (TCM) hospitals, and specialized hospitals. Thus, in the process of regression, the variable type i is replaced by two dummy variables.
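A sketch of how the interaction specification in Eq. [1] could be estimated is shown below. It uses an illustrative random-intercept (by hospital) model in statsmodels on synthetic data; the column names, synthetic values, and random-effects formulation are assumptions for illustration and are not the authors' actual estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic panel: 30 hospitals x 6 month-year observations (Feb/Mar of 2018-2020)
n_hosp, n_obs = 30, 6
df = pd.DataFrame({
    "hospital_id": np.repeat(np.arange(n_hosp), n_obs),
    "treat": np.repeat((np.arange(n_hosp) < 10).astype(int), n_obs),  # 10 "large" hospitals
    "pandemic": np.tile([0, 0, 0, 0, 1, 1], n_hosp),                  # 1 = Feb-Mar 2020
    "htype": np.repeat(rng.choice(["general", "TCM", "specialized"], n_hosp), n_obs),
})
# Outcome: market share with a built-in negative DID effect (beta_3 < 0) plus noise
df["share"] = (0.5 + 0.6 * df["treat"] - 0.15 * df["treat"] * df["pandemic"]
               + rng.normal(0, 0.05, len(df)))

# Random-intercept model with the treat x pandemic interaction of Eq. [1]
model = smf.mixedlm("share ~ treat * pandemic + C(htype)", data=df,
                    groups=df["hospital_id"])
result = model.fit()
print(result.summary())  # the treat:pandemic coefficient estimates beta_3
```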
A similar set of DID models was used to verify whether the large hospitals poached patients from the small hospitals via an analysis of the differences between the market share changes of large and small hospitals during the postpandemic period, that is, from April 2020 to May 2020. The basic model was built as follows:

RS i,t = β 0 + β 1 treat i + β 2 ppandemic t + β 3 treat i *ppandemic t + Σ k γ k type i + ε i,t [2]

RS i,t , treat i , type i , i, and k have the same denotations as in Eq. [1]. The month index m indicates April and May in Eq. [2]. The indicator ppandemic t denotes the postpandemic period, which refers to 2020 in this study. Thus, the parameter β 3 , which is associated with the key explanatory variable treat i *ppandemic t , represents the large hospitals' market share changes after the widespread national pandemic as compared to the small ones.
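Eq. [2] can be estimated with the same machinery by restricting the sample to April and May observations and replacing the period dummy. The snippet below is a hypothetical continuation of the previous sketch, assuming the panel also carries numeric "month" and "year" columns.

```python
import statsmodels.formula.api as smf

# 'df' is the hospital-month panel from the previous sketch, here assumed to also
# contain numeric 'month' and 'year' columns for April-May of 2018-2020.
post = df[df["month"].isin([4, 5])].copy()
post["ppandemic"] = (post["year"] == 2020).astype(int)  # 1 = postpandemic period (2020)

post_model = smf.mixedlm("share ~ treat * ppandemic + C(htype)", data=post,
                         groups=post["hospital_id"]).fit()
print(post_model.summary())  # treat:ppandemic estimates beta_3 for the rebound
```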
Additionally, an alternative explanation for the increase in the large hospitals' market share during the postpandemic period is that the market share reduction of small hospitals might have occurred due to the decrease in unnecessary medical services, and not due to the siphon effect of large hospitals. Generally, medical services provided by small hospitals are less complicated and more likely to be unnecessary. In this study, the case mix index (CMI), which reflects the diversity, clinical complexity, and resource needs of all the hospitalizations of each hospital, was used to investigate the market share's association with the reduction of unnecessary medical services (28).
A parallel trend is a key assumption that enables DID to account for unobserved variables (29,30). We were limited to examining only 3 years in this study, and probing into other sample periods before the beginning of the COVID-19 pandemic was not deemed feasible. Instead, to characterize the trend before the pandemic, we plotted the market share of the large and small hospital groups from January 2018 to July 2020.
A P value of less than 0.05 was considered statistically significant. Stata software version 16 for Windows (StataCorp, College Station, TX, USA) was used for statistical analysis.
Ethical statement
The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). The study was approved by the Medical Ethics Committee of Shanghai Health Development Research Center (registration No. 2021002). The need for written patient consent was waived because of the observational nature of this study, because the subjects could no longer be found, and because the research project does not involve personal privacy or commercial interests.
Characteristics of the study hospitals and the shock of the COVID-19 pandemic
Among all public hospitals in Shanghai, three hospitals were excluded due to noncontinuous data. Of the total of 156 hospitals, 46 tertiary hospitals were placed into the large hospital group, and 110 nontertiary hospitals were allocated into the small hospital group. The average number of beds of the large hospitals (1,079.21±21.79) was much higher than that of the small hospitals (345.35±6.10), which strongly supports our group classification. Among the large hospitals, 24 (52.17%) were general hospitals, 6 (13.04%) were TCM hospitals, and 16 (34.78%) were specialized hospitals. Additionally, the small hospitals included a lower ratio of general hospitals (43, 39.09%) and a higher ratio of specialized hospitals (53, 48.18%; P<0.01). Furthermore, 4,836 hospital-month observations were included in this study.
Overall, after the COVID-19 outbreak, both the healthcare service volume and revenue experienced a sharp decline for public hospitals in Shanghai, especially in February 2020 and March 2020, at the height of the pandemic (Figure 1). Healthcare revenues did not recover until June 2020, while healthcare volume did not recover for the entirety of 2020.
Specifically, at the height of COVID-19 in China (February 2020 to March 2020), all public hospitals in Shanghai suffered a severe shock. The hospitals' average percentage decreases in total medical revenue, outpatient income, number of visits, discharge income, and number of discharges ranged from 34.03% to 47.3%, and the large hospitals' reduction ratios were significantly higher than those of the small hospitals (P<0.05; Table 1). Notably, the average percentage change in the market share of these hospitals was positive; that is, for most hospitals, market share increased during this period. This means that most small-scale hospitals' market share increased, while the market share of a smaller number of large-scale hospitals decreased. For the market shares of the number of visits, discharge income, and number of discharges, the small hospitals' average increase rate was significantly higher than that of the large hospitals (P<0.05).
Conversely, during the postpandemic period (April 2020 to May 2020), the recovery of healthcare services in large hospitals was better than that of the small hospitals, according to the average percentage change, although this difference was only significant in regard to the number of discharges (P<0.01). The market share of each hospital began to return to 2019 levels at this time. Furthermore, large hospitals' market share increased (positive average percentage change) compared to the same months in 2019, while that of the small hospitals decreased. This may indicate that the large hospitals began to poach market share from the small hospitals. Overall, the difference in the change in the number of discharges between large and small hospitals was the largest and most significant across the pandemic and postpandemic periods. Table 2 presents the estimated impacts of the COVID-19 pandemic on the large hospitals' market share in comparison to the smaller ones. There were 156 hospitals in 2 specific months (February and March) across 3 years (2018, 2019, and 2020) involved in the random effects model. Compared to the general hospitals, TCM and specialized hospitals were associated with lower market shares; meanwhile, large hospitals were associated with a higher market share in total medical revenue (Table S1), outpatient services, and discharges (P<0.05); the market shares across the hospitals did not change significantly during the height of the COVID-19 pandemic; and, at the height of the pandemic, the large hospitals suffered a more severe shock compared to the small hospitals.
The large hospitals suffered more than the small hospitals during the pandemic period
In terms of outpatient services during the COVID-19 pandemic, the large hospitals experienced a significantly reduced market share of outpatient income from nonlocal patients (−0.03; P<0.01), as well as a reduced share of visit numbers from the three sources (total, local, and nonlocal). In brief, COVID-19 induced a more severe shock to the large hospitals compared to the small hospitals, and the shock was more severe for discharge services and nonlocal patients, which are more typically provided by large hospitals.
The large hospitals poached patients from the small hospitals during the postpandemic period
The estimated effect of the postpandemic period on public hospitals' market shares is displayed in Table 3. There were 156 hospitals in 2 specific months (April and May) across 3 years (2018, 2019, and 2020) involved in the random effects model. Similar to the analysis at the height of the pandemic, compared to the general hospitals, the TCM and specialized hospitals were more likely to have a lower market share, while large hospitals were more likely to have a higher market share. Notably, however, the postpandemic situation significantly increased the large hospitals' market share of outpatient income, the numbers of visits and discharges (P<0.05), and medical revenue (P<0.05; Table S2).
According to the statistical analysis, during the postpandemic period, large hospitals had average increases of 0.03% (P<0.01) and 0.02% (P<0.05) in monthly market share of outpatient income for total and nonlocal patients, respectively, compared to the same months in previous years. Furthermore, there was a rise in large hospitals' market share of visit numbers for all patients (0.03%; P<0.01) and local patients (0.03%; P<0.05). The postpandemic situation, on average, significantly increased the large hospitals' market share in the number of discharges for all patients (0.07, P<0.01) and local patients (0.08, P<0.01), while there was no significant impact on the market share of discharge numbers for nonlocal patients.
CMI of large and small hospitals during and after the pandemic
An alternative explanation to the siphon effect for the dramatic decline in healthcare volume and revenue during the pandemic is that people cut down on the use of unnecessary healthcare services to avoid becoming infected in crowded hospitals. This is consistent with the increased use of internet hospitals and the rise in long-term prescriptions in Shanghai. To test whether this decline was due to the limiting of unnecessary healthcare services, we compared the average CMI from January to October between 2019 and 2020 in large hospitals and small hospitals. If the use of unnecessary medical services had been reduced, the CMI in 2020 should have increased significantly.
According to the t-test, there was no significant difference in CMI between 2019 and 2020 (P=0.53). Specifically, the average CMI of large hospitals was 1.00 in 2019 and 1.03 in 2020; the average CMI of small hospitals was 0.74 in 2019 and 0.76 in 2020. Both groups' average CMI increased in 2020 compared to 2019; however, neither increase was significant (P=0.72 vs. P=0.48; Figure 2). The percentage increase for small hospitals (3.08%) was slightly higher than that for large hospitals (2.43%).
Robustness
A potential challenge to the DID strategy was that differential changes between large and small hospitals could have been driven by preexisting differences in the time trends of the outcomes. The market share of healthcare services from January 2018 to July 2020 for large hospitals and small hospitals is depicted in Figure 3. Patients were divided into local and nonlocal groups. According to Figure 3, from January 2018 to July 2020, the average market share of each hospital group was relatively stable both for overall and for local patients, especially for the numbers of visits and discharges. Additionally, the market share showed substantial monthly fluctuation. Therefore, we took a hospital in a given month, such as January or February, as an individual item in our statistical analysis.

Figure 2 caption: For both large and small hospitals, the peak of the CMI density curve shifted slightly to the right in 2020 compared to 2019. The average percentage increase for small hospitals (3.08%) was slightly higher than that for large hospitals (2.43%); nevertheless, neither increase was significant. CMI, case mix index.
The primary findings of this study
Using longitudinal hospital-based data from all public hospitals in Shanghai from January 2018 to July 2020, we studied the short-term surplus of healthcare supply capacity in all hospitals due to the COVID-19 pandemic in Shanghai and compared the differences in market share change between large hospitals and small hospitals during and after the height of COVID-19. We speculated that pandemic control measures would affect patients' care-seeking preferences and lead to a huge reduction in health services, and we took the period after restrictions on patients were lifted as the postpandemic period. According to the results presented above, due to the shock of COVID-19, all health services were substantially reduced for the whole of 2020, which led to a large surplus of health service supply capacity in hospitals in Shanghai during this period. Compared to the small hospitals, the large hospitals suffered a more severe reduction in market share at the height of the pandemic. The reduction was more considerable for discharge services and nonlocal patients. As for the postpandemic period, the sharp rebounds of the large hospitals' market shares were significant for outpatient income and the numbers of visits and discharges, particularly for overall and local patients. This indicates that the large hospitals could have poached local patients from the small hospitals to compensate for the surplus in their healthcare supply capacity. An alternative explanation for the relatively increased market share was the reduction of unnecessary health services typically provided by small hospitals. However, the comparison of the CMI from January to October in 2019 and 2020 indicated that this alternative could be rejected.
The health service supply capacity surplus in hospitals due to the COVID-19
This study showed that COVID-19 brought about a substantial reduction in the health services of public hospitals in Shanghai, consistent with previous studies in China and other countries (21,31-34). Therefore, the large hospitals had a massive supply capacity surplus that could be filled by accepting a large number of patients previously treated in small hospitals; that is, the large hospitals resorted to poaching patients from the small hospitals. This massive supply capacity surplus is the precondition for an observable siphon effect. The reduction in services, which persisted for the entirety of 2020, might have been the result of a number of causes that have been discussed in our previous study (27), including population control measures, postponed treatment, changes in the disease spectrum during the pandemic period, and a reduction in potential overtreatment. Postponed treatment may lead to serious health losses for patients in need (31,32). This indicates that healthcare suppliers should establish a quick-response mechanism to prevent such losses during future natural shocks.
Large hospitals lost their market share under the COVID-19 pandemic conditions
We found that large hospitals' market share in healthcare services decreased more than that of the small hospitals at the height of the COVID-19 period. Different from the nonlocal health demand, the number of local residents in Shanghai was relatively stable. In the local market, we did observe a significant market share shift from large hospitals to small hospitals in outpatient services. This suggests that the small hospitals served patients who used to be treated in the large hospitals. However, this change might have only been relevant for minor diseases that can be treated by outpatient services. This prompts us to reconsider the rationality of the current market shares in the healthcare delivery system in China. Under normal conditions, most patients in China prefer to visit large hospitals even for common and minor illnesses, as they can freely choose which doctors and medical institutions to visit (18,19,35). The observed shift might be due to the COVID-19 pandemic increasing the care-seeking cost of large hospitals, which may include the risk of being infected and reduced transportation convenience (18). This suggests that one way to change the current healthcare delivery system into a PHC-based system is to increase the care-seeking cost gap between large hospitals and small hospitals.

Figure 3 caption: Market shares of large and small hospitals from January 2018 to July 2020 for visit and discharge healthcare services (including discharge income and number of discharges) for total, local, and nonlocal patients; the legend indicates the height of COVID-19, 95% CI, large hospitals, and small hospitals. For both outpatient and discharge services, the market share distribution between the large and small hospitals for total and local patients was more stable than that for nonlocal patients. Across the 31 month-years depicted, the market share was relatively stable but fluctuated at the height of the COVID-19 pandemic beginning in February 2020.
The comparison of the market shares of local and nonlocal patients proves the presence of poaching behavior
In this study, we found that large hospitals took market share of health services from the small hospitals during the postpandemic period, when the large hospitals had a surplus in healthcare supply capacity and a large number of patients were being served by small hospitals. During the post-COVID-19 period, significant market share shifts of local patients from small hospitals to large hospitals were observed in outpatient and inpatient service volumes. The shift was more substantial for discharge services and surgery services. This may have occurred because patients are more likely to choose larger hospitals when they need more complex treatment (18,35). This means that patients might be more likely to choose larger hospitals for conditions perceived to be nonminor diseases. As there is no strict referral system and medical insurance restrictions are lax, patients can freely choose which doctors and medical institutions to visit (18,19,35). Before the pandemic, the large hospitals in China were always crowded, and some patients left these hospitals because of this crowding (3).
The differences in the market share changes of local patients and nonlocal patients support the existence of this poaching phenomenon (i.e., the siphon effect). Unlike local patients, who chose small hospitals for minor diseases and large hospitals for severe diseases, most nonlocal patients usually receive health services from large hospitals (24,25). Thus, there were no available nonlocal patients for large hospitals to poach after the control measures were relaxed.
Generally, the large hospitals poached market share from small hospitals by treating local patients' minor diseases that had previously been treated in small hospitals. This was supported by the greater percentage increase in the small hospitals' CMI.
The implications of the findings
The pandemic will pass, but its effects will last. Going beyond the disease and deaths it causes, a pandemic can impact many areas, chiefly psychological, social, and economic ones (36,37). More important is what can be learned from these difficult times. Our findings may be useful for the construction of a PHC-based integrated delivery system for the whole healthcare system. First, the existence of unnecessary healthcare was apparent before the COVID-19 pandemic, and the pandemic provided a rare opportunity to observe it in a real-world scenario (20,38). Further studies should be conducted based on the reduction of health services during the pandemic to determine which services should be eliminated and which should be made up to improve patients' health. Second, patients who used to be served by large hospitals turned to small hospitals for healthcare services under pandemic conditions, which was a desirable change for constructing the PHC-based integrated delivery system. This shows that changing the factors affecting patients' care-seeking cost, such as transportation convenience and the cost of referral, would be one plausible way to reduce unnecessary treatment and improve the hierarchical diagnosis and treatment model. Finally, caution should be exercised regarding unreasonable hospital expansion, the ongoing hospital vertical integration under governmental intervention, and the emergence of very large hospitals, given the existence of the "siphon effect". This point has been mentioned in previous official documents and studies (3,39).
Strengths and limitations
The siphon effect has long been discussed, but no statistical analysis has been conducted to verify its existence. Like a ghost, everyone talks about it, but no one has yet proven its existence. This paper conducted a statistical analysis of the natural experiment of COVID-19 to confirm the existence of the siphon effect: large hospitals do poach patients from the small ones if they have a supply capacity surplus. Our findings can serve as powerful evidence to be considered when discussing the ideal size of hospitals, especially in countries like China that do not have strict referral restrictions.
Nonetheless, some limitations were unavoidable in this study, and they are described below. First, the statistical model was modified from the traditional DID model, but both the control group (small hospitals) and the treated group (large hospitals) were affected by the pandemic. The effect we intended to analyze is the patient-absorbing ability of large hospitals relative to small hospitals under COVID-19 pandemic conditions, rather than the effect of COVID-19 itself. Thus, there is little doubt concerning the validity of the model we used. A better choice, however, would have been to use large hospitals with no health service capacity surplus as the control group; due to the real-world situation, it was too difficult to obtain data in this manner. A second limitation was the number of years covered: the sample we used was short panel data, so the pseudo-model we constructed could not use alternative months as the intervention period. Third, the monthly CMI we presented is the average level of each month rather than the original CMI level of each hospital in each month, owing to data availability. The unchanged CMI during this period could only be used to reject the assumption of a reduction in unnecessary health services; it cannot prove that large hospitals directly poached minor-issue health service seekers from the small hospitals.
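For illustration only, the sketch below shows a conventional two-group difference-in-differences specification of the kind the authors say they modified. The column names (service_volume, large, post, hospital_id) and the data file are hypothetical and do not come from the paper; the paper's actual model differs from this textbook form.

```python
# Minimal difference-in-differences (DID) sketch; hypothetical column names.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: hospital_id, month, service_volume,
# large (1 = large hospital, 0 = small), post (1 = pandemic/post period).
df = pd.read_csv("hospital_monthly_volumes.csv")  # hypothetical file

# Classic DID: the coefficient on large:post estimates the differential
# change in service volume of large vs. small hospitals after the shock.
model = smf.ols("service_volume ~ large + post + large:post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["hospital_id"]}
)
print(model.summary())
```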
Conclusions
In this study, we found a dramatic reduction in all healthcare services in Shanghai's public healthcare delivery system and a pattern of large hospitals with a large supply capacity surplus poaching patients from small hospitals. On the one hand, the market share losses of the large hospitals at the height of the COVID-19 pandemic indicate that, with higher care-seeking costs at the large hospitals, some healthcare services that had been provided by large hospitals were supplied by small hospitals. This highlights the irrationality of the current market shares in the healthcare delivery system. On the other hand, we clearly observed the large hospitals' siphon effect, meaning that large hospitals can poach patients from small hospitals if they have a supply capacity surplus. To reconstruct the PHC-based integrated health delivery system, the irrational expansion of large hospitals should be controlled and policies should be implemented to induce patients to use primary care.
Formulation and evaluation of peel-off gel mask from whole milk yogurt and seaweed ( Eucheuma cottonii ) as antioxidants sources
Skin needs care to avoid premature aging. One form of care is treatment with natural-based masks that have antioxidant activity. This study aims to evaluate the antioxidant potential of various concentrations of the natural ingredients yogurt and seaweed (Eucheuma cottonii) as active substances in the manufacture of peel-off gel masks. The methods used in this research were the fermentation method for making yogurt, a stability test for 28 days at room temperature, and an antioxidant test with DPPH to determine the antioxidant power of the five mask formulations by measuring the IC50 value, obtained by reading the absorbance with a UV-Vis spectrophotometer. The results show that Formula 4 (F4) has the lowest IC50 value, 18.647, which can be categorized as very strong antioxidant activity. The stability test results indicate that the preparation remains stable during storage at room temperature for 28 days. The conclusion of this study is that F4 (yogurt:seaweed 1:1) has the highest antioxidant activity among the formulations.
Introduction
Skin is the outermost organ of the human body. It functions as a protector of the internal organs from external biological, physical and chemical exposure. One of the external exposures that is harmful to the skin is sunlight. Exposure to sunlight, which emits ultraviolet (UV) radiation, can cause black spots on the face and make the face look dull. 1 Therefore, facial skin care is necessary to overcome this, and one option is to regularly use face masks.
Face masks are skin care cosmetics commonly used, especially by women, to make facial skin appear healthier and more beautiful. The benefits of face masks include cleaning the pores and moisturizing and nourishing facial skin. 2 Peel-off gel masks have the advantage of being practical, because they are easy to peel off and lift like an elastic membrane. At present, the demand for natural ingredients as active substances in cosmetic products is growing rapidly. 3 Therefore, in this study, a peel-off gel mask was made as a skin care product with natural active substances, namely a combination of cow's milk yogurt and seaweed.
Yogurt is a fermented milk product with excellent nutritional value for the human body. Milk is a foodstuff of very high nutritional value because it contains many nutrients, including lactose, fat, protein, and various vitamins and minerals. Yogurt has a characteristic sour taste produced by the bacteria Lactobacillus bulgaricus and Streptococcus thermophilus. Yogurt acts as a source of calcium and vitamin D and contains protein that is very good for the skin. Yogurt also contains lactic acid and alpha hydroxy acid (AHA), where the AHA helps moisturize and remove dead skin cells, which in turn can make facial skin look smoother and brighter. 4 In addition to yogurt, the other natural active ingredient used in this mask is seaweed. Seaweed is believed to benefit beauty because it contains several vitamins and minerals that the skin needs. Seaweed also acts as an antioxidant for the skin because it contains vitamin C. 5 The combination of yogurt and seaweed is therefore promising, and it is packaged in a peel-off gel mask preparation that is practical to use. This study also seeks to enrich yogurt-derived products, because so far yogurt has mostly been used as food and has not yet been widely used as an antioxidant base ingredient for cosmetic preparations.
The masks formulated in this study are also made from natural ingredients, which are not harmful to facial skin. Many synthetic chemicals cause skin problems, for example irritation and premature aging. A combination of natural ingredients from yogurt and seaweed in a face mask can also brighten the face and reduce dark spots on the face by 20% every week. 3 Therefore, this study aims to determine the antioxidant potential of various concentration ratios of yogurt and seaweed in a peel-off mask preparation, as well as whether the preparation is comfortable on the skin and fulfills pharmaceutical requirements.
Research location and time
The research was carried out in two places: the Ayra Mini Yogurt Laboratory in the Pasir Impun area, where the yogurt was produced, and the Pharmacy Laboratory of Bhakti Kencana University Bandung, where the formulation and evaluation tests were carried out. The yogurt-making part of the research was carried out in November 2020, and the formulation and evaluation tests were carried out from February to May 2021.
Method of collecting data
This research is an experimental laboratory study carried out in several stages, including:
Yogurt production
Yogurt was made by fermenting fresh cow's milk obtained from cattle farmers in the Ciporeat area, East Bandung. The yogurt is formed by four types of bacteria, namely Lactobacillus bulgaricus, Streptococcus thermophilus, Lactobacillus acidophilus and Bifidobacterium. Fermentation was carried out through an incubation process of approximately 5-9 h at a temperature of 40 °C, which is the optimal temperature for bacterial growth. 6
Seaweed porridge (Eucheuma cottonii) production
Fresh Eucheuma cottonii seaweed was made into a slurry preparation before being mixed into the peel-off gel mask formulation. The seaweed slurry was made by mixing seaweed with distilled water (1:1) and then crushing the mixture (Luthfiyana et al., 2019).
Peel-off gel mask production
The peel-off gel mask was made by mixing several prepared ingredients. Five variants of the formulation were made with different ratios of yogurt to seaweed: the first formula (F1) contained yogurt and seaweed at 4:0, the second formula (F2) at 0:4, the third formula (F3) at 3:1, the fourth formula (F4) at 2:2, and the fifth formula (F5) at 1:3. In this process, polyvinyl alcohol (PVA) was used as a film-forming agent, HPMC (hydroxypropyl methyl cellulose) as a gel base, DMDM hydantoin as a preservative, propylene glycol as a humectant, and distilled water as a solvent.
Antioxidant test
Antioxidant tests were carried out on five samples, namely the yogurt and seaweed peel-off gel mask preparations of each formulation (F1, F2, F3, F4 and F5). The antioxidant testing was carried out by the DPPH method, whose principle is to measure the free radical scavenging power of antioxidants. The levels and properties of the antioxidants were determined by measuring the IC 50 . IC 50 (Inhibitory Concentration 50) is a parameter of the effectiveness of the sample in counteracting free radicals in the DPPH method; it is defined as the concentration that can reduce 50% of the free radicals from DPPH. 7
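As an illustration only (not part of the original method description), an IC50 from a DPPH assay is commonly estimated by regressing percent inhibition on sample concentration and solving for the concentration that gives 50% inhibition. The concentrations and absorbances below are made-up placeholder values, not the study's data.

```python
# Hedged sketch of an IC50 estimate from DPPH absorbance readings.
import numpy as np

# Hypothetical sample concentrations (ug/mL) and absorbances; A0 is the
# absorbance of the DPPH control without sample.
conc = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
abs_sample = np.array([0.62, 0.55, 0.41, 0.25, 0.10])
A0 = 0.70

# Percent inhibition = (A0 - A_sample) / A0 * 100
inhibition = (A0 - abs_sample) / A0 * 100.0

# Linear fit: inhibition = slope * conc + intercept, then solve for 50%.
slope, intercept = np.polyfit(conc, inhibition, 1)
ic50 = (50.0 - intercept) / slope
print(f"IC50 ≈ {ic50:.1f} ug/mL")
```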
Consumer preference test (hedonic test)
This consumer acceptance test was conducted to determine the level of preference for, and convenience of using, the yogurt and seaweed peel-off gel mask. The test was carried out on 20 female or male respondents aged 17-40 years, who completed a questionnaire about their experience of using the peel-off gel mask product.
Antioxidant evaluations
Antioxidant activity depends on the structure of the compounds; some compounds have very poor antioxidant activity even in their purest form. Based on the summary of IC 50 values (Table 4), pure ascorbic acid has the lowest IC 50 value, 3.876, and can be categorized as having very strong antioxidant strength. This is because ascorbic acid has its own unique structure. A compound is said to be a very strong antioxidant if its IC 50 value is less than 50 µg/mL, strong if the IC 50 is between 50-100 µg/mL, moderate if the IC 50 is 100-150 µg/mL, and weak if the IC 50 value is 150-200 µg/mL. 8 As shown in Table 4, antioxidant activity varies from one substance to another. Pure seaweed has a lower IC 50 value than yogurt; that is, seaweed has better antioxidant activity than yogurt. The IC 50 value of yogurt is 15.548 and that of seaweed is 11.971, and both are still categorized as antioxidants with very strong strength.
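For illustration, a small helper that directly encodes the strength categories quoted above (thresholds in µg/mL); it is not taken from the paper, only from the cited classification.

```python
# Classify DPPH antioxidant strength from an IC50 value (ug/mL),
# following the thresholds quoted in the text.
def antioxidant_strength(ic50_ug_per_ml: float) -> str:
    if ic50_ug_per_ml < 50:
        return "very strong"
    elif ic50_ug_per_ml < 100:
        return "strong"
    elif ic50_ug_per_ml < 150:
        return "moderate"
    elif ic50_ug_per_ml <= 200:
        return "weak"
    return "very weak"

# Example with the IC50 values reported for the formulations.
for name, ic50 in {"F1": 79.422, "F2": 51.110, "F3": 104.803,
                   "F4": 18.647, "F5": 22.477}.items():
    print(name, ic50, antioxidant_strength(ic50))
```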
Across the formulations, the IC 50 values vary significantly. This may be because the preparations contain additional substances, such as PVA and PVP, which may affect the absorbance. Formula 1 (F1) contains 4% yogurt and has an IC 50 value of 79.422, which can be categorized as a strong antioxidant. In a similar previous study, in which yogurt was made from various species of bacteria, the yogurt could inhibit the growth of the "bad" bacteria B. subtilis and E. coli. 6 The bacteria used were similar, namely Lactobacillus bulgaricus, Streptococcus thermophilus and Lactobacillus acidophilus.
Formula 2 (F2) contains 4% seaweed, has an IC 50 value of 51.110, and is categorized as a strong antioxidant. In a previous study of a mask made from seaweed jelly alone, the mask could reduce the appearance of aging spots on human skin, 3 perhaps because the antioxidant level of seaweed is high. In another previous study using exactly the same material and method, a peel-off gel mask from seaweed jelly had a moderate antioxidant level; 5 it also used DPPH and vitamin C as the antioxidant standard. Formula 3 (F3) contains 3% yogurt and 1% seaweed and has an IC 50 value of 104.803, which can be categorized as moderate antioxidant strength. F4 contains 2% yogurt and 2% seaweed, and this formula has a low IC 50 value, that is, high antioxidant activity. The IC 50 value of F4 is 18.647, which can be categorized as very strong antioxidant strength and is the highest antioxidant level of all the formulas. F5 contains 1% yogurt and 3% seaweed and has an IC 50 value of 22.477, which can also be categorized as very strong antioxidant activity. The strongest antioxidant activity is found in F4, which contains equal amounts of yogurt and seaweed; this is likely because equal amounts of the two active substances at the appropriate concentration produce synergistic antioxidant activity.
Dispersion evaluations
Based on the results of data processing with the one-way ANOVA test, there is a significant difference in the dispersion (spreadability) evaluation data. The measured spreadability increased and decreased from week to week. Formula 2 (F2) has the lowest spreadability, followed by F5, F4 and F3, and the highest is F1. This may be because F2 contains seaweed at the largest concentration, 4%. The spreadability of all formulations met the requirement of 5-7 cm.
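Purely as an illustration of the kind of one-way ANOVA applied to these weekly evaluation data (the numbers below are invented placeholders, not the study's measurements):

```python
# Hedged sketch: one-way ANOVA across formulations for one evaluation
# parameter (e.g., spreadability in cm), using invented example values.
from scipy import stats

f1 = [6.8, 6.9, 7.0, 6.7]   # weekly measurements for F1 (placeholder)
f2 = [5.1, 5.2, 5.0, 5.3]   # F2 (placeholder)
f3 = [6.0, 6.1, 5.9, 6.2]   # F3 (placeholder)
f4 = [5.7, 5.8, 5.6, 5.9]   # F4 (placeholder)
f5 = [5.4, 5.5, 5.3, 5.6]   # F5 (placeholder)

f_stat, p_value = stats.f_oneway(f1, f2, f3, f4, f5)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate a significant difference
# between formulations, as reported in the text.
```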
Based on the results of data processing with the one-way ANOVA test, there is also a significant difference in the drying time evaluation data. Over the 28 days of observation, the drying time of the preparations both increased and decreased; however, all drying times of all formulations remained within the required range of 15-30 minutes.
pH evaluations
Based on the results of data processing with the one-way ANOVA test, there is a significant difference in the pH evaluation data. The pH of each formula both increased and decreased over time. Comparing the formulas, Formula 1 (F1) has the lowest pH value. This may be because this formula contains yogurt at the greatest concentration, 4%, and the pH of the yogurt itself is 3.85, which is a low pH. The opposite is seen in Formula 2 (F2), which has the highest pH; F2 contains the seaweed active substance at the largest concentration, 4%, and the pH of the seaweed itself is 6.08. Although the pH value of each preparation increased and decreased from week to week, the pH of every formula remained within the skin pH range of 4.5-6.5.
Viscosity evaluation
Based on the results of data processing with the one-way ANOVA test, there is a significant difference in the viscosity evaluation data. The observed viscosity of each formulation both increased and decreased over time. In terms of magnitude, Formulation 2 (F2) has the highest viscosity among all the formulations. This may be because this formulation contains seaweed at the highest concentration, 4%; seaweed has a much thicker consistency than yogurt, so the higher the seaweed concentration, the greater the viscosity. Across all formulations, the viscosity values remained within the requirement range for peel-off gel mask preparations, which is 6,000-24,000 cps.
Conclusions
a. The peel-off gel mask made from the active ingredients yogurt and seaweed has very high antioxidant activity; b. the best concentration ratio of yogurt to seaweed for the peel-off gel is 1:1; c. the peel-off gel mask is convenient for consumers to use, as shown by the preference test.
Why Is Mom Stressed: Homeorhesis as the Potential Problem and Nicotinamide Riboside as the Potential Solution
The remodeling of female mammalian physiology to support the development of a fertilized egg into an externally breathing individual and then to provide all the nutrition to this individual while remodeling back to nearly her pregestational state is without parallel in male mammalian physiological transitions. While it is common parlance to refer to postpartum depression as a not infrequent stress in women, the postpartum physiological changes after every birth constitute profound metabolic stresses that are understudied and have important nutritional, behavioral, and neurodevelopmental implications for the maternal and neonatal health of every mammalian species. We discovered that the postpartum liver of a lactating female mouse has a depressed nicotinamide adenine dinucleotide (NAD) metabolome linked to circulation of higher levels of NAD metabolites in support of a >20-fold increase in NAD coenzymes in the mammary. Furthermore, by supporting a new mother’s apparent higher demand for NAD precursors, we increased circulation of prolactin, superinduced mammary biosynthetic programs, increased her time of arched-back nursing, enhanced mammary production of brain-derived neurotrophic factor, promoted postgestational weight loss, advanced the neurobehavioral development of her offspring, and allowed them to mature as stronger and more resilient adults with advantages in hippocampal neurogenesis and body composition. These results show that a new mother’s capacity for biosynthesis and functionally important nurturing is apparently limited by NAD. Here, we discuss homeorhetic flow of resources from a new mother to her offspring in the context of NAD metabolism and suggest avenues for future investigation.
Dale E. Bauman and W. Bruce Currie, pioneers in applying concepts of homeorhesis to maternal physiology, used dairy cows and sheep to show that provision of growth hormone and prolactin to mothers results in increased milk production and weight gain to their offspring. 2 While the knowledge gained from these classical insights has been used in the livestock industry, little attention has been paid to the stresses homeorhetic processes exert on a mother during postpartum. In fact, people ask her when she is getting back to work and pretty much ignore the remarkable retransformation of her body after pregnancy, some of which is mediated by the process of breastfeeding and homeorhetic transfer to baby. 3 Four nicotinamide adenine dinucleotide (NAD) coenzymes [NAD + , reduced nicotinamide adenine dinucleotide (NADH), NAD phosphate (NADP + ), and reduced NAD phosphate (NADPH)] are the central mediators of metabolism, essentially catalyzing the conversion of everything we eat into everything we are and everything we do. 4 Quantitative targeted metabolomic technology 5 has allowed us to discover that the NAD metabolome is dysregulated in multiple conditions of metabolic stress including obesity and type 2 diabetes, 6 heart failure, 7 diabetic and chemotherapeutic neuropathy, 8,9 and central brain injury. 10 One of the things that should be appreciated about the metabolic stresses that disturb the NAD system is that if a
tissue's NAD metabolites are tied up in a repair activity, then the tissue may be limited in the amount of fuel oxidation and/ or anabolic processes that can be catalyzed. For example, if DNA is damaged and poly(ADP-ribose) polymerase is activated, thereby degrading NAD + , there is less NAD + available for fuel oxidation. Similarly, if the hepatic NAD + pool is largely reduced to NADH by ethanol intoxication or the hepatic NADPH pool is challenged with a storm of reactive oxygen species, one would expect that tissue to be quite stressed. This is clearly the case in heart failure, 7 brain, 10 and peripheral nerve 9 injury in which particular metabolic stresses dysregulate the NAD metabolome.
The liver can be considered the most selfless organ in the body in the sense that it always does what is in the interest of other tissues, for example, glucose disposal after a meal but gluconeogenesis or ketogenesis in fasting states. Similarly, a new mother's metabolism is self-sacrificing in that she mobilizes protein, fat, and carbohydrate for transmission to her offspring. In response to growth hormone and prolactin, the new mother transmits macronutrients from her adipose and liver to her mammary to support milk production. 11 Thus, a new mother's liver is the most selfless organ in the most selfless of creatures.
We therefore considered whether a new mother's liver, the most self-sacrificing tissue in the most self-sacrificing creature, might be self-sacrificing not only in transmission of macronutrients but also in transmission of NAD metabolites. We discovered that at peak lactation (14 days after parturition), mouse mothers on healthy normal chow depress their liver NAD metabolome by ~1/3 and circulate a higher NAD metabolome in their blood by ~1/3 with respect to agematched virgin females. Remarkably, at the same time, postpartum females have a mammary NAD metabolome that is expanded by 20 to 30-fold. 12 When we supplemented new mother mice and rats with nicotinamide riboside (NR) 13 at a level sufficient to boost their hepatic NAD metabolome against the postpartum flow of NAD metabolites from liver to mammary, they increased expression and circulation of prolactin by approximately 6-fold; superinduced blood and mammary NAD metabolomes; increased mammary size; superinduced mammary biosynthetic programs for protein, fat, and carbohydrate; and increased the amount of milk produced as well as the time of arched-back nursing. 12 Not surprisingly, by boosting lactation, these mothers increased postgestational weight loss and produced pups that were larger at weaning. 12 Unexpectedly, we found that the offspring of NR-supplemented mouse mothers are more adventurous in an open-field test, that weanling mice have better glycemic control if their mothers were NR supplemented, and that the weanling rats of supplemented mothers are learningadvantaged on a rotarod. Consistent with the enhanced adventurousness of young mice and enhanced physical learning of young rats, we found that the offspring of NR-supplemented mothers have advanced pruning in their caudate-putamen. 12 Although the nutritional intervention only lasted for the 21 days in which new mothers nursed their pups, we discovered that the benefits of having a mother on NR last into mouse and rat adulthood. Specifically, we found that offspring of NR-supplemented mothers are less anxious, stronger, perform better on a beam walk, are more resilient to giving up in a forced swim, have better spatial memory, are advantaged in adult hippocampal neurogenesis, and have leaner body composition as adults. 12 Although we found that NR-supplemented mothers transmit more milk to their pups and transmit a higher level of NAD precursor vitamins in their milk, we note that sacrificing pups to overfeed them does not produce healthier offspring. 14 Instead, we hypothesized that there must be increased levels of bioactive factors in the milk of NR-supplemented mothers. Thus far, we discovered that the gene that encodes the brainderived neurotrophic factor (BDNF) precursor is specifically upregulated in NR-supplemented mammary epithelium, that there is a higher level of BDNF in the milk of NR-supplemented mothers, and that there is a higher level of BDNF in the hindbrain of their offspring. 12 The emerging picture is one in which a new mother is doing all she can for her new offspring in postpartum: nursing them, running her mammary biosynthetic programs for protein, fat and carbohydrate production, and depositing bioactives such as BDNF into the milk. These maternal activities are accompanied by a very significant redistribution of not only macronutrients but also NAD micronutrients from the liver to the mammary. 
And much like a hangover or a sunburn, both of which attack NAD homeostasis and can feel like a drain, we suspect that the mother's homeorhetic transfer of resources may feel like a drain to her, such that restoring her hepatic NAD with NR gets her whole system to work better.
We are currently testing whether NR is uniquely capable of boosting maternal functions and performance of offspring or whether other NAD precursors have equivalent activities. We also aim to use omic methods to identify the range of bioactive factors that are upregulated by NR supplementation. We are keen to identify the mechanisms by which NR induces prolactin expression and how enhanced prolactin and mammary NAD metabolites are responsible for boosting the lactation program as well as the expression of specific bioactive factors such as BDNF. Although the oral availability of molecules such as BDNF is known, 15,16 this is not well studied in this context, and there is nothing known about how dietary modulation of their expression could affect the development of offspring.
It has not escaped our notice that postpartum lactating women, particularly in conditions of suboptimum nutrition or other types of metabolic stress, might benefit from specific nutritional interventions that address the homeorhetic transfers we have characterized. Thus, the identification of mechanisms and biomarkers from animal research has the potential to help us design clinical evaluation of safe molecules that will address postpartum metabolic stress, increase lactation, and potentially aid postgestational weight loss and neonatal development in human populations.
Anisotropic scaling non-relativistic holography: a symmetry perspective
We study the holographic dual of the two dimensional non-relativistic field theory with anisotropic scaling from a symmetry perspective. We construct a new four dimensional metric with two dimensional global anisotropic scaling isometry. The four dimensional spacetime is homogeneous and is a solution of Einstein gravity with quadratic-curvature extension. We consider this spacetime dual to the vacuum of the boundary field theory. By introducing proper solution phase space, we find that the asymptotic symmetry of the gravity theory is the two dimensional local anisotropic conformal symmetry, which recovers precisely the results from the dual non-relativistic field theory side.
Introduction
An interesting feature of quantum field theory (QFT) with scaling symmetry in two dimensional (2D) spacetime is that the global symmetry can be enhanced to an infinite dimensional local symmetry. The best known example was revealed by Polchinski in [1]: a local, unitary Poincare-invariant 2D QFT with a global scaling symmetry and a discrete non-negative spectrum of scaling dimensions must have both a left and a right local conformal symmetry. More than ten years ago, Hofman and Strominger showed that, for a chiral situation, the local conformal symmetry is still implied [2], which leads to two kinds of minimal theories, namely the 2D conformal field theory (CFT) [3] or the 2D warped conformal field theory (WCFT) [4]. The global symmetry of the WCFT is SL(2,R) × U(1) and its local enhancement is the Virasoro-Kac-Moody algebra. The enhanced local symmetries have a clear dual interpretation from the gravity side in the context of the AdS3/CFT2 correspondence: they reveal the enhancement of the asymptotic symmetry with respect to the isometry of the AdS spacetime. More precisely, for AdS3 gravity, the asymptotic symmetry group under Brown-Henneaux boundary conditions contains two copies of Virasoro symmetry [5]. Meanwhile, for the warped AdS3 case [6,7], the algebra of asymptotic symmetries is isomorphic to the semi-direct product of a Virasoro algebra and an algebra of currents under the Compère-Detournay boundary condition [8]. Recently, the enhanced symmetry was revealed for 2D Galilean field theories with anisotropic scaling symmetry [10]. The 2D Galilean field theories with global translations and anisotropic scaling symmetries are shown to have enhanced local symmetries generated by the infinite dimensional spin-k Galilean algebra. For 2D Galilean field theories with isotropic scaling symmetry, the dual gravity theory is proposed to be three dimensional and asymptotically flat [11,12], where the asymptotic symmetry is enhanced from the global 3D Poincare group to the infinite dimensional Bondi-Metzner-Sachs (BMS) group. For the anisotropic case, the dual gravity theory is proposed [13] to be a higher-dimensional Schrödinger geometry [14]. But the asymptotic symmetry of these geometries has not been addressed, and whether it can recover the enhanced local symmetry from the field theory side is not yet known.
The main purpose of this work is to find the gravitational dual of the enhancement of symmetry in [10]. We find a new 4D metric with an isometry group isomorphic to the global symmetry of the 2D Galilean field theories with anisotropic scaling, which presents a different realization of the gravity dual for the 2D theory from the one in [13,14]. This metric describes a homogeneous spacetime with constant curvature tensor. The 4D spacetime is supposed to be dual to the vacuum of the field theory. We show that Einstein gravity with quadratic-curvature extension admits the 4D spacetime as a vacuum solution when the coupling constants of the higher derivative terms are specially adapted to the dynamical exponent z. The 4D spacetime, for a restricted range of the dynamical exponent z, can also be obtained from the Einstein-Proca theory. Then, we find a solution phase space of the higher derivative theory which admits the vacuum metric and yields precisely the infinite dimensional spin-k Galilean algebra in [10] as the asymptotic symmetry. Since the duality is between 4D and 2D, we adopt a double expansion in the inverse spatial directions. Correspondingly, the 2D Galilean theory is defined on the corner of the 4D spacetime boundary.
The organization of this paper is as follows. In Section 2, we present the vacuum solution and show that it has constant curvature tensor. In Section 3, we comment on the gravitational theory that admits the vacuum solution. In Section 4, we show a solution phase space and derive the asymptotic symmetry of the phase space, namely the most generic residual gauge transformations that preserve the solution phase space. In Section 5, we derive the asymptotic symmetry algebra. We conclude in the last section.
Spacetime with global anisotropic scaling symmetry
The global symmetry of a 2D Galilean field theory with anisotropic scaling consists of translations along two directions, the Galilean boost, and the dilations. In [10], it is shown that the symmetry of the 2D Galilean field theory with the above global symmetry is enhanced to an infinite dimensional spin-$k$ Galilean algebra. In plane modes, the algebra is given by
$$[L_n, L_m] = (n-m)\,L_{n+m}\,, \qquad [L_n, M_m] = (kn-m)\,M_{n+m}\,, \qquad [M_n, M_m] = 0\,. \qquad (2.3)$$
This algebra with $k = 1$ is precisely the BMS$_3$ algebra derived in [15,16].
From the holographic principle, the field theory is defined on the boundary of the dual gravity theory. The global symmetry of the field theory is the isometry of the bulk spacetime. For the 2D Galilean field theory with anisotropic scaling, the dual gravity theory is four dimensional. The 4D metric which we find with global anisotropic scaling isometry is
$$ds^2 = \frac{\ell^2\,dr^2}{r^2} + r^{2z}\,dy^2 + r^2\left(2\,dt\,dx - y\,dt^2\right). \qquad (2.4)$$
The metric is invariant under the translations along the $t$ and $x$ directions plus the following global transformations,
$$\text{anisotropic scaling}: \quad r \to \lambda^{-1} r\,, \quad t \to \lambda^{1-\frac{z}{2}}\, t\,, \quad x \to \lambda^{1+\frac{z}{2}}\, x\,, \quad y \to \lambda^{z}\, y\,, \qquad (2.5)$$
$$\text{Galilean embedding}: \quad x \to x - v t\,, \quad t \to t\,, \quad y \to y - 2v\,. \qquad (2.6)$$
The spacetime described by the metric (2.4) is homogeneous and has constant curvature tensor. For constant $y$, the metric is AdS$_3$ in planar coordinates with a 2D Minkowski boundary. The introduction of $y$ serves the purpose of breaking the 2D conformal group to the Galilean symmetry with anisotropic scaling. Our construction of the background metric (2.4) is very different from the proposal in [13], where an extra null Killing direction associated with the coordinate $\xi$ was introduced. In our case, however, $y$ is not a Killing direction, but is analogous to the radial coordinate $r$, which allows us to take an additional $y \to \infty$ limit in the asymptotic expansion. This makes the analysis of the asymptotic symmetry simpler, well controlled by both the $(r, y)$ coordinates. We shall return to this in the next sections.
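As a quick consistency check (ours, not part of the original text), the $(t,x)$ part of the metric (2.4) is indeed left invariant by the Galilean embedding (2.6):
$$2\,dt\,dx - y\,dt^2 \;\to\; 2\,dt\,(dx - v\,dt) - (y - 2v)\,dt^2 = 2\,dt\,dx - 2v\,dt^2 - y\,dt^2 + 2v\,dt^2 = 2\,dt\,dx - y\,dt^2\,,$$
while the $dr^2$ and $dy^2$ terms are manifestly unchanged under the shift of $y$ by a constant.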
The constant curvature tensor can be easily obtained from the vielbein formalism. A natural vielbein choice that respects the global isometry is
$$e^0 = \frac{\ell\,dr}{r}\,, \qquad e^1 = r^{z}\,dy\,, \qquad e^{+} = r^{\,1-\frac{z}{2}}\,dt\,, \qquad e^{-} = r^{\,1+\frac{z}{2}}\Big(dx - \tfrac{1}{2}\,y\,dt\Big)\,, \qquad (2.7)$$
such that $ds^2 = e^0 e^0 + e^1 e^1 + 2\,e^{+} e^{-}$. The spin connections follow from (2.7) and are given in (2.9). We thus have the curvature tensor 2-form $\Theta^{a}{}_{b} = \tfrac{1}{2} R^{a}{}_{bcd}\, e^{c} \wedge e^{d}$, from which the independent non-vanishing components of the Riemann tensor, the Ricci tensor (2.12), and the Ricci scalar (2.13) can be read off; they are all constant. When $z = 1$, the metric (2.4) is just the 4D AdS spacetime.
As one can see from (2.6), $k = -1$ requires $z \to \pm\infty$, so the metric is not well defined for this particular choice. To include this limiting case, we make a coordinate transformation and redefine the parameters as in (2.14). The new metric admits a $z \to \pm\infty$ limit, and we obtain
$$ds^2 = \frac{\ell^2\,dr^2}{r^2} + r^2\left(dy^2 - y\,dt^2 + 2\,dt\,dx\right). \qquad (2.15)$$
The vielbein, spin connections and curvature can simply be obtained by the same treatment when taking the limit $z \to \pm\infty$.
Dual gravity theories
Einstein gravity with the most general quadratic-curvature extension in four dimensions is
$$\mathcal{L} = \sqrt{-g}\left(R - 2\Lambda_0 + \alpha R^2 + \beta R_{\mu\nu}R^{\mu\nu}\right). \qquad (3.1)$$
Since the Gauss-Bonnet term is a total derivative, we do not add the Riemann-squared term. The metric (2.4) is a solution of the theory (3.1) when the coupling constants $\alpha$, $\beta$ and the cosmological constant $\Lambda_0$ are specially chosen with respect to the dynamical exponent $z$; for $z = 1$, $\Lambda_0 = -3/\ell^2$ with no constraint on $\alpha$ and $\beta$, while for generic $z$ the couplings are fixed in terms of $z$ and $\ell$. Note that, when setting $\beta = 0$, the generic theory (3.1) reduces to the ghost-free theory $\mathcal{L} = \sqrt{-g}\,(R - 2\Lambda_0 + \alpha R^2)$, which still admits the vacuum solution (2.4), with the coupling constant completely fixed. It is also of interest to examine whether our metric (2.4) can arise from Einstein theory with minimally coupled matter. The curvature tensor in the vielbein base determines the Einstein tensor; if the solutions are constructed from the Einstein theory with minimally coupled matter fields, then we can deduce that $T^{ab}_{\rm tot} = G^{ab}$. In the diagonal base, the null energy conditions (NEC) $\rho + p_i \geq 0$ impose the following conditions:
$$2 - z - z^2 \geq 0\,, \qquad z\,(1 - z) \geq 0\,.$$
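As a one-line check of the allowed range (our arithmetic, not an additional result from the paper), the two NEC inequalities factorize as
$$2 - z - z^2 = (2+z)(1-z) \ge 0 \;\Longleftrightarrow\; -2 \le z \le 1\,, \qquad z\,(1-z) \ge 0 \;\Longleftrightarrow\; 0 \le z \le 1\,,$$
and their intersection is $0 \le z \le 1$, in agreement with the statement that follows.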
This requires that $0 \leq z \leq 1$. As a concrete example, we consider Einstein gravity coupled to a massive vector (Proca) theory. To admit the metric (2.4) as a solution, the cosmological constant $\Lambda$, the coupling constant $\mu$, and the vector field $A$ in the Proca theory should take specific forms fixed by $z$ and $\ell$; the reality condition then requires that $0 < z \leq 1$.
In fact, although the total energy-momentum tensor in the Einstein-Proca-$\Lambda$ theory satisfies only the NEC and violates both the weak and the strong energy condition, the energy-momentum tensor of the Proca field itself satisfies all the energy conditions, namely the strong and dominant energy conditions, since $0 < z \leq 1$. It is well known that the culprit for violating the weak or strong energy condition in these cases is the cosmological constant.
Asymptotic symmetries
The complete set of gauge transformations of Einstein gravity with quadratic-curvature extension is generated by infinitesimal diffeomorphisms. The asymptotic symmetry is the residual gauge transformation that preserves the required gauge and boundary conditions. We will follow the Fefferman-Graham gauge, $g_{rr} = \ell^2/r^2$, $g_{rA} = 0$, where $A = (y, t, x)$. The residual gauge transformation preserving the Fefferman-Graham gauge can be solved as follows:
• $\mathcal{L}_\xi g_{rr} = 0 \;\Longrightarrow\; \xi^r = -\tfrac{1}{2}\, r\, \Psi(y, t, x)$,
where $g^{AB}$ is the inverse metric. For a generic $z$, it is very hard to impose boundary conditions and study asymptotic symmetries in a unified way. Alternatively, we apply the solution phase space method [17-19] to investigate the asymptotic (symplectic) symmetry of the system. The solution phase space method was originally introduced for the Near-Horizon Extremal Geometries (NHEG) [17,18]. Since the NHEG has no dynamical physical perturbations [20], it is natural to consider the action of diffeomorphisms on the NHEG to construct the classical phase space. In our study, the theory (3.1) depends on the dynamical exponent $z$; namely, for each choice of $z$, there is a corresponding theory and set of fall-off conditions. So the solution phase space method is particularly useful for studying asymptotic symmetries for a generic choice of $z$.
We find a solution phase space of the theory (3.1), given in (4.3), where a prime denotes a derivative with respect to $t$. There are four arbitrary functions of time $t$, namely $\Phi(t)$, $f_1(t)$, $f_2(t)$, $f_3(t)$, which represent four types of independent diffeomorphisms. They are the dynamical fields of the phase space. However, they only represent boundary dynamics, as they are introduced through the action of diffeomorphisms on the homogeneous spacetime (2.4). There is no propagating degree of freedom in the phase space. The dual theory of the 2D Galilean field theory comes from boundary gravity in the context of AdS/CFT [5,21].
The most generic residual gauge transformation preserving the phase space is characterized by a set of symmetry parameters, among which $Y_1$ and $Y_2$ are real constants. Note that we choose all symmetry parameters to be field-independent.
To make manifest the fact that the spacetime is scaling invariant asymptotically, we need to perform a double expansion in terms of both $1/r$ and $1/y$ in the region of large $r$ and large $y$, and set $f_1(t) = \Phi(t)$. The asymptotic form of the metric then follows, and its leading part is invariant under the scaling transformation. The condition $f_1(t) = \Phi(t)$ will further yield $\psi(t) = (k+1)\,L_1(t)$.
To summarize, the asymptotic symmetry that preserves the scaling-invariant phase space (4.3) is generated by the asymptotic Killing vectors (4.8).
Asymptotic symmetry algebra
The asymptotic Killing vectors (4.8) satisfy the standard Lie bracket algebra (5.1). The algebra is closed, and the Jacobi identity of the symmetry algebra is guaranteed by the Jacobi identity of the Lie bracket of vector fields. In terms of the basis vectors (5.2), the asymptotic algebra is
$$[L_n, L_m] = (n - m)\,L_{n+m}\,,$$
$$[L_n, M_m] = (kn - m)\,M_{m+n}\,, \qquad (5.4)$$
$$[M_n, M_m] = 0\,,$$
$$[B, B] = 0\,, \quad [D, D] = 0\,, \quad [D, B] = 0\,, \quad [B, L_m] = 0\,, \quad [B, M_m] = 0\,, \quad [D, L_m] = 0\,, \quad [D, M_m] = 0\,.$$
The first three lines of the above equations are precisely the symmetry algebra (2.3) derived from the dual 2D field theory side [10]. We have two more generators $D$ and $B$ from the extra dimension $y$, which commute with all the modes $L_m$ and $M_m$.
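As an optional consistency check (our computation, not in the original), the only nontrivial Jacobi identity of the spin-$k$ sector, with two $L$'s and one $M$, indeed closes:
$$
\begin{aligned}
&[L_n,[L_m,M_p]] + [L_m,[M_p,L_n]] + [M_p,[L_n,L_m]] \\
&\quad = \Big[(km-p)(kn-m-p) - (kn-p)(km-n-p) - (n-m)\big(k(n+m)-p\big)\Big]\, M_{n+m+p} \\
&\quad = \Big[(n-m)\big(k(n+m)-p\big) - (n-m)\big(k(n+m)-p\big)\Big]\, M_{n+m+p} = 0\,.
\end{aligned}
$$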
The Killing vector that generates the Galilean transformation of the background metric (2.4) is given in (5.9). It is of course included in the asymptotic Killing vectors and can be expressed in terms of the modes of (5.2), as can the scaling symmetry.
Conclusion
In this paper, we present an alternative realization of the holographic dual for the 2D Galilean field theory with anisotropic scaling. The isometry of the bulk spacetime is isomorphic to the global symmetry of the 2D Galilean field theory, and the bulk spacetime is a vacuum solution of Einstein gravity with quadratic-curvature extension. For a restricted range of the dynamical exponent $z$, the bulk spacetime is also a solution of the Einstein-Proca theory. We find a solution phase space which admits the vacuum solution. The residual gauge transformation preserving the solution phase space, namely the asymptotic symmetry of this system, recovers precisely the enhanced local symmetry of the 2D Galilean field theory in [10]. Since the dual gravity theory is in 4D, our proposal is an example of codimension-2 holography, and we have two more symmetry transformations from the extra dimensions. We show that the algebra of the two extra symmetry generators is closed and that these generators commute with all the generators of the 2D enhanced local symmetry. In our construction, the extra dimension is not a Killing direction and there is no conserved quantity associated with it, so it cannot be interpreted as a particle-number circle, such as in the Schrödinger theories [22] arising from the TsT transformation [23] or a Melvin twist of AdS space [24]. The extra dimension $y$ in our case is not compactified, so one cannot get rid of it by dimensional reduction [25]. However, in our construction, one can realize the enhanced local symmetry of the boundary field theory as the asymptotic symmetry of the dual gravity system in a double expansion at large $r$ and large $y$. Correspondingly, the dual field theory is supposed to be defined on the corner of the 4D boundary; see, e.g., [26] for a comprehensive introduction to the corner proposal and references therein.
As an ending remark, it is worth mentioning that our construction is for now restricted to 4D and to the ground state. It would certainly be of interest to generalize the construction to higher dimensions or to black hole solutions (thermal states) in order to incorporate various anisotropic-scaling-invariant field theories from different perspectives, such as the theories studied in [27-32].
Two translations along the $t$ and $x$ directions are $L_{-1}$ and $M_{-k}$. The four Killing vectors form a closed subalgebra of the asymptotic symmetry algebra,
$$[G, G] = 0\,, \quad [\Delta, \Delta] = 0\,, \quad [\Delta, G] = [G, M_{-k}] = 0\,, \quad [\Delta, M_{-k}] = -\frac{k}{k+1}\, M_{-k}\,. \qquad (5.15)$$
The System Dynamic and Compram Methodologies for Modelling, Simulation and Forecasting of Road Safety of Uzbekistan
In Uzbekistan, about 2,000 people die every year as a result of traffic accidents. At the same time, according to the Pulitzer Centre on Crisis Reporting, the Republic has the lowest road mortality rate among the countries of the Central Asian region: 11.32 deaths per 100,000 people. Losses from road accidents in Uzbekistan are equivalent to up to 2.8% of GDP, which is also one of the lowest indicators. However, according to traffic safety experts, the losses from accidents are greater than the reported data suggest. Nowadays there are many methods to analyse and ensure road safety and traffic management on the roads. The authors believe that road safety is a complex societal problem, not only in Uzbekistan but all over the world. Two such methods are System Dynamics (SD) and the COMplex PRoblem hAndling Methodology (COMPRAM). In this work, the Vensim PLE SD software tool (one SD tool among many others) has been used to perform the SD modelling of the case study at hand. In system dynamics, a computer model is created using a graphical technique for constructing flow diagrams and causal relationships of the system under study, and is then simulated on a computer. COMPRAM allows us to work out how to handle complex societal problems while involving a System Dynamics (SD) simulation option. There are similarities between COMPRAM and the traditional way of analysing road safety: in traditional approaches, each element or factor is studied as a separate phenomenon, and these indicators are studied in the stages of COMPRAM. This article studies different aspects of how road accidents happen. The authors developed a comparison (according to six criteria) of the different modelling paradigms that have historically been used to assess road safety. The authors also compared the COMPRAM methodology with the traditional road safety assessment approach to highlight similarities and differences.
System Dynamics based modelling is applied to assess, simulate, and forecast the road safety of Uzbekistan.
In this work, the Vensim PLE SD software tool (one SD tool among many others) has been used to perform the SD modelling of the case study at hand. This user-friendly simulation software allows the development of complex, dynamic and nonlinear systems with significantly less effort and more interaction than conventional tools or traditional programming languages.
Objectives
The System Dynamics method was developed by Jay Forrester, a professor at the Massachusetts Institute of Technology [12]. System dynamics is a methodology and mathematical modeling technique to frame, understand and discuss complex issues and problems. It is also an approach to understanding the nonlinear behavior of complex systems over time [56] using stocks, flows, internal feedback loops, table functions, and time delays [1]. It can be used in a variety of fields to show the relationships between multiple factors in a large mechanism. The elements of system dynamics diagrams are causal loop diagrams (feedback), the accumulation of flows into stocks, and time delays.
Dorien DeTombe is the founder of the field of Methodology for Societal Complexity. She developed the Compram methodology for political decision making on complex societal issues such as sustainable development, terrorism, the credit crisis and water affairs [9]. In COMPRAM, the conceptual model is divided into seven layers. The seven-layer model begins with a description of the problem in text form (the first layer). Concepts and phenomena retrieved from the text constitute the second layer. A reflection on the state of knowledge, based on hypotheses, theories, experience, intuition or assumptions expressed through verbal description, constitutes the third layer. The next step explains the influence of the concepts on the phenomena and vice versa, together with a graphical representation of the knowledge; this is performed in the fourth layer. In layer five, a semantic model graphically represents the relations between the concepts and the phenomena. In layer six, a causal model is provided, which is the graphical representation of the causal relations from layer five. In the last layer, layer seven, a system dynamics simulation of the problem-related system model is performed with an SD software tool such as Vensim, Stella, iThink or PowerSim [22].
The Compram methodology consists of six steps. Each step is a group process involving differently composed groups, each separately guided by a facilitator. This process can take a long time depending on the urgency and complexity of the problem. These six steps are not 'the seven steps to heaven': handling complex societal problems will always be difficult, never simple, and the outcome remains uncertain [9,10].
It is imperative to reduce the level of road accidents through some advanced methodology, since conventional methods fail to prevent accidents and to reduce their severity [36]. Hence, the system dynamics (SD) methodology is a handy tool for reducing accidents and ensuring road safety [32,46]. The SD technique, within the systems approach methodology, presents planners and engineers with a cohesive set of steps to be followed systematically, accounting for the basic root causes of any problem under consideration. There are a host of factors causing accidents in any region or metropolitan city [24]. There have been many different efforts to model the road safety problem [6,21,28,48]. For instance, Kelly investigated and discussed five common modelling approaches in road safety. Among the studied models, system dynamics (SD) was said to have several advantages, including providing useful learning tools to increase the general understanding of the system and of systems thinking, knowledge integration for modellers and end users, a distinction between true and perceived system conditions, a platform for policymakers, and more. The SD simulation approach provides a means to collectively analyse all of the factors involved in any given accident as well as the interactions between these factors [50].
A.K. Kazadi et al. [22] used these two methods in combination and were able to describe very well the simulation of a car-following model for degraded roads. However, the discussion of whether these approaches have advantages over traditional one-by-one parameter models has been neglected. Traditional approaches do not have the ability to model interactions among the parameters of a system, but in the SD approach this problem has been overcome [50].
COMPRAM is used to analyse this phenomenon globally. COMPRAM allows us to work out how to handle complex societal problems while involving a System Dynamics (SD) simulation option.
The following example (fig. 1) shows a simple causal loop diagram of road safety representing the dependencies between factors. The constructed causal loop diagram consists of several balancing loops, which are indicated in the figure; B denotes a balancing loop. It is worth mentioning that all of the relationships between the different parameters in this causal loop diagram are fundamental for road safety and are supported by the available literature in this field. In this figure, the authors substantiate the relationship between specific parameters by logically discussing the existence of each relation.
B1: Traffic intensity-Speed-traffic intensity:
The intensity of vehicle movement is the number of vehicles passing through a cross-section of the road in a certain direction or directions per unit time (per day or per hour). The speed of the traffic flow is an indicator of the speed of all vehicles, or of a particular type of vehicle, on a certain section of the road, measured in m/s or km/h. From traffic management, we know that with an increase in the intensity of the traffic flow, the flow speed decreases as the density increases.
B2: Speed-capacity of road: The capacity of a road is the maximum traffic flow obtainable on a given roadway using all available lanes, usually expressed in vehicles per hour or vehicles per day. There are two types of road capacity, theoretical and practical. With an increase in the flow speed, the capacity of the road increases, but this trend does not continue indefinitely. When the flow speed reaches the H number, the throughput decreases; this is explained by the fact that the gap between the cars increases.
B3: Speed-traffic accident-economy-road condition: One of the main factors in traffic accidents is increasing speed, so we can state that higher speed increases the number of accidents. Traffic accidents always affect a country's economy negatively, as the economy loses money on the rehabilitation of victims and on fatalities, which in turn reduces the country's budget. A consequence of this may be reduced funding to support road infrastructure, and if the road does not meet transport and operational quality requirements, traffic speed decreases.
The developed causal loop diagram then helps to construct the stock and flow diagram for the different systems in the next step of the SD modelling process. The model of speed developed in this paper uses the System Dynamics simulation software Vensim PLE. Vensim is object-oriented simulation software which allows the development of complex, dynamic and nonlinear systems with significantly less effort than traditional programming languages. It has a user-friendly graphical interface and supports modular program development.
Results
It has been a hundred years since the first attempts at explaining the different aspects of how road accidents happen, and within this time there have been many theories explaining why accidents happen. There are four periods in the history of road accident research, shown in fig. 3. Each of these periods was dominated by one of four groups of road accident theories: stochastic, causal, systemic and behavioural [20]. Stochastic theories dominated road accident analyses in the first half of the previous century. Within this period, road accidents were analysed as random events, from the point of view of statistical accident theories. The main reason for this view was that, at the time, there were few vehicles and few traffic accidents, and as a result researchers could not identify relationships between accidents and their causes.
Causal theories of accidents claimed that only exact knowledge of the real factors causing accidents can help to prevent them. We can distinguish two main trends in causal accident theories: deterministic (sequence of events) and probabilistic (set of factors) [18]. Heinrich is considered the precursor of the theory based on the sequence of events. He developed the "domino theory", which is based on the assumption that an accident consists of a single event with a cause. Consequently, better safety, according to this theory, requires that the cause of the accident be established and eliminated. The most developed theories are those of multi-linear event sequences, which assume that accidents are an element of a series of events and suggest a process approach to accidents [20].
Systemic theories. The theory of systems applied to road transport is designed primarily to eliminate accidents by modifying the technical elements of the transport system. The systemic theory has so far proved the most effective: improvements in the road system, traffic enforcement and vehicle design have significantly reduced accident rates and casualties in western motorized countries [11]. Systemic theories and models are used to identify the relations and dependencies that affect accidents (so-called factors transferred in time and space) and the factors that occur at the time and place of the road accident, in order to build a system of road safety measures and to monitor and control these dependencies and relations.
Behavioural theories. The last 15-20 years have shown that not even the systems theory can explain accidents. Could it be that accidents are an unsolvable problem? A new approach was put forward in 1980 by Gerald Wilde, giving the basis for behavioral theories. The basic assumption of all behavioral theories is that the way people assess and accept risk is a very important determining factor of accidents [19]. Similarly to the previous theories, there are several groups of theories here as well: homeostasis of risk, behavioral adjustment, and change of health behavior. Wilde formulated a simple thesis: the only factor that causes sustainable long-term changes in accident numbers is the public as a whole wanting safety. He found that every community only has as many accidents as it wants to have, and the only way to change this is by changing the desired risk level (desired level of safety) [20]. According to this view, the number of casualties, or the likelihood of becoming a casualty in an accident, depends on the following elements: health promotion (education, motorization, communication with the public, programs, policy, legal regulations, and organizational changes), human factors (local level, social level), behaviors, and the environment. The theory helps to explain which behaviors and environmental factors are responsible for increasing the number of casualties and suggests safety measures.
The authors argue that the time has come to introduce a new period in the history of explaining how traffic accidents occur, using system dynamics. Over the last 10 years, scientists from all over the world have begun to use System Dynamics to understand the variety of factors affecting road safety and the relationships between them. Such scientists as N. Kumar [24], O. Tatari [50], J. Rasmussen [42], M. Alirezaei [4], D. Topolshek [51], M. Dolores Soto Torres [49], A.K. Kazadi [22], N. Minamy [38] and others have used this method to study road safety. Since road safety studies involve the interacting complex systems D-V-R-P-E, that is driver, vehicle, road, pedestrian and environment, it is essential to develop a dynamic simulation model to understand the interactions between these systems. This would help evolve sustainable solutions towards ensuring road safety.
Grounded in the theory of nonlinear dynamics and feedback control developed in mathematics, physics, and engineering, system dynamics models are built to solve complex problems and to understand the nonlinear behavior of complex systems over time. Thus, in system dynamics models, human behavior and physical and technical systems can be considered simultaneously, giving the approach an interdisciplinary character. Components such as stocks, flows, converters, internal feedback loops, and time delays are used for system modeling and simulation. In system dynamics, a stock represents a part of a system whose value at any given instant in time depends on the system's past behavior. The value of a stock at a particular instant cannot simply be determined by measuring the value of the other parts of the system at that instant; the only way to calculate it is by measuring how it changes at every instant and adding up all these changes. Flows thus represent the rate at which the stock is changing at any given instant; they either flow into a stock or flow out of it. Converters represent either parts at the boundary of the system or parts of a system whose value can be derived from other parts of the system at any time through some computational procedure [22].
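The following is a minimal Python sketch of the stock/flow/converter mechanics just described, assuming a single hypothetical stock of accumulated accidents: the stock is updated only by integrating its net flow over each time step (simple Euler integration), while converters are recomputed directly from other quantities at each instant. The names and rates are illustrative placeholders, not quantities from the paper's model.

```python
# Illustrative stock-and-flow update loop with Euler integration.
# "accidents" is the stock; "inflow"/"outflow" are flows; the derived
# quantities are converters. All numbers are arbitrary placeholders.

def run_model(t_end=10.0, dt=0.25):
    accidents = 100.0          # stock: accumulated accidents on the network
    results = []

    t = 0.0
    while t <= t_end:
        # Converters: values derivable from other parts of the system at this instant
        exposure = 1000.0 + 50.0 * t       # e.g., vehicle-kilometres travelled
        safety_effect = 0.02               # fractional reduction per unit time

        # Flows: rates at which the stock changes at this instant
        inflow = 0.01 * exposure           # new accidents per unit time
        outflow = safety_effect * accidents

        # Stock update: accumulate the net flow over the time step
        accidents += (inflow - outflow) * dt
        results.append((round(t, 2), round(accidents, 1)))
        t += dt
    return results


if __name__ == "__main__":
    for t, stock in run_model():
        print(t, stock)
```

The key property this illustrates is the one stated above: the stock's value at time t cannot be read off from the converters at that instant; it can only be obtained by accumulating its changes over time, which is what distinguishes stocks from converters in an SD model.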
Further, the comparison criteria for all the above methods for assessing road safety are determined:
Criterion 1: Does the paradigm allow a causality analysis, i.e. an analysis of the relationship between causes and accidents or road-safety-related occurrences?
Criterion 2: Does the paradigm allow a simulation of various scenarios and some form of sensitivity analysis?
Criterion 3: Does the paradigm allow forecasting of road-safety-related parameters or values?
Criterion 4: Does the paradigm allow a comprehensive consideration and integration of statistical data collected from the field?
Criterion 5: Does the paradigm allow the consideration, in the model, of all relevant elements such as people, drivers, road infrastructure (and related parameters), environment, training (levels), enforcement, and policy measures/aspects?
Criterion 6: Does the paradigm allow the integration of expert knowledge in the model?
Conclusion
COMPRAM allows us to figure out how to handle complex societal problems while involving a System Dynamics (SD) simulation option. There are similarities between COMPRAM and the traditional way of analyzing road safety: in traditional approaches, each element or factor is studied as a separate phenomenon, and these same indicators are studied within the stages of COMPRAM.
The core difference from the traditional ways of analyzing road safety is that this technique makes it possible to study the road traffic problem comprehensively and completely. Table 2 below shows the differences between the COMPRAM methodology and the traditional road safety assessment approach, with each stage explained.
Since this is a preliminary study, the authors set themselves the task of creating a model for assessing road safety in cities using COMPRAM and System Dynamics. The difference is that traditional methods study information in a narrow section, whereas with COMPRAM all data are analyzed as a whole, taking into account their influence on each other.
System dynamics simulation model. A system dynamics simulation model is a graphic representation of the causal relations between the concepts, phenomena and actors, based on differential equations.
Development of universal software for the analysis of accidents with the use of computers. This is a strong side of COMPRAM compared to traditional methods: it makes it possible to systematize the analysis and create a specific algorithm of actions, which is absent in traditional methods.
Retrograde intrarenal surgery in pediatric patients World Journal of Nephrology
Abstract
Urinary tract stone disease is seen at a level of 1%-2% in childhood (< 18 years). In recent years, however, there has been a marked increase in pediatric stone disease, particularly in adolescence. A carbohydrate- and salt-heavy diet and a more sedentary lifestyle are implicated in this increase. Although stone disease is rare in childhood, its presence is frequently associated with metabolic or anatomical disorders or infectious conditions, for which reason there is a high possibility of post-therapeutic recurrence. Factors such as a high possibility of recurrence and increasing incidence further enhance the importance of minimally invasive therapeutic options in children, with their expectations of a long life. In children in whom active stone removal is decided on, the way to achieve the highest level of success with the least morbidity is to select the most appropriate treatment modality. Thanks to today's advanced technology, renal stones that were once treated only by surgery can now be treated with minimally invasive techniques, from invasion of the urinary system in an antegrade (percutaneous nephrolithotomy) or retrograde (retrograde intrarenal surgery) manner or shock wave lithotripsy to laparoscopic stone surgery. This compilation study examined studies involving the RIRS procedure, the latest minimally invasive technique, in children and compared the results of those studies with those from other techniques.

easier in children than in adults both play a role in this rapid response. SWL, which began being applied in the 1980s with the principle of the use of high-energy shock waves, represents a milestone in the treatment of stone disease in children [7].
Gofrit et al [8] compared the results of pediatric and adult patients administered SWL for renal stones larger than 10 mm, and reported stone-free status levels of 95% in children and 78.9% in adults. Similar results were obtained from many subsequent studies. In a recent randomized prospective study, Mokhles et al [9] compared the outcomes of retrograde intrarenal surgery (RIRS) and SWL for stones 10 to 20 mm in preschool-age children. They found that the overall stone-free rate was 93% and 96% for the SWL and RIRS groups, respectively. SWL is therefore recommended as the first treatment option in children with stones of up to 20 mm (approximately 300 mm²) in modern guidelines [10]. However, the fact that the procedure usually requires general anesthesia in children, the need for general anesthesia in repeat sessions, concerns over the possibility of long-term renal scarring, hypercalciuria, hypertension or chronic renal insufficiency, and the fact that some stones (cysteine stones, etc.) do not respond to the technique represent concerns over its use in children [10,11].
Technological advances in recent years have permitted the miniaturization of endoscopic devices, as a result of which percutaneous nephrolithotomy (PNL) has become the first treatment option for stones larger than 2 cm in children [11]. Although the procedure was initially performed with adult-type devices, Jackman et al [12] described a "mini-perc" technique using a 7 Fr rigid cystoscope and 11 Fr vascular access. They emphasized that a smaller tract leads to less tissue and nephron injury and that this is more significant in pediatric patients with small and delicate kidneys, citing the example that a 24 Fr access sheath used in an infant is equivalent to a 72 Fr sheath in an adult.
Desai et al [13] reported that intraoperative hemorrhage occurring during PNL is related to the number and diameter of tracts, for which reason tract diameter should not exceed 22 Fr. In the majority of subsequent pediatric PNL series, the risk of intraoperative complications has been shown to decrease with the use of small-size instruments [11,14]. Indeed, new PNL modifications aimed at reducing complication levels still further, such as tubeless PNL, ultramini-PNL and micro-perc, have been described [15][16][17]. However, despite all these modifications and high success rates, major complications such as neighboring organ injury, severe hemorrhage and urosepsis are still reported at levels of up to 10%, and the debate over whether the procedure is truly non-invasive continues [18,19].
RIRS is a comparatively new concept in pediatric patients. Before embarking on the details of this method in children, it will be useful to briefly review the stages by which it arrived at its present-day position. Use of this technique for treating renal stones was first described in 1983, by Huffman et al [20] , when a large stone located in the renal pelvis was broken with the help of a ureteroscope with a rigid rod-lens structure and an ultrasonic lithotripter. Although the authors maintain that stones in the upper ureter and renal pelvis can be effectively and safely treated using small caliber rigid devices, the technique as it stands has not achieved popularity, due to its low success rate and high level of complications. Retrograde treatment of renal stones has been able to enter into widespread use only with the development years later of flexible ureteroscopes (f-URS) possessing fiberoptic technology and retrieval instruments with a nitinol structure and the simultaneous entry into use of Ho:YAG laser in intracorporeal lithotripsy [21] .
Following the first description of pediatric ureteroscopy (URS) by Ritchey et al [22] in 1998, the development of URS decelerated due to concerns over existing instruments not being of suitable sizes for children, the inadequacy of optic imaging systems and the development of post-URS complications in child patients, such as ischemia, injury, perforation, stricture and vesicoureteral reflux, and this delayed the use of RIRS in this patient population [22,23]. However, the development in subsequent years of more resistant and finer (< 8 Fr) ureteroscopes and auxiliary nitinol instruments, the improvement of optic system quality, the entry into use of the Ho:YAG laser and, parallel to all these technological advances, an increase in surgeon experience with flexible URS led to the technique also starting to be used in child patients.
The first large series on the subject of pediatric RIRS was published by Cannon et al [24] in 2007. Twenty-one child patients (13 girls, 8 boys) who underwent RIRS for lower pole renal stones, with a mean stone size of 12 mm, were included in that study. After a mean 11 mo of follow-up, stone-free status was achieved at a level of 76%, and no intra- or postoperative complications were reported in any patient. Passive dilatation was applied using a preoperative stent in 38% of patients, while a ureteral access sheath was used in 43% (Table 1). However, the upper age limit was set at 20 years (mean 15.1) in that publication reporting a pediatric series, and a great many cases (67%) were postpubertal patients.
A 100-case series was published by Smaldone et al [25] in that same year, in which 37% of the stones were intrarenal (renal pelvis 6%, upper pole 10% and lower pole 17%). Mean stone size was 8.3 mm and mean patient age was 13.2 years, with 49% of cases being prepubertal children. Passive dilatation was applied in 54% of cases, active ureteral dilatation with a coaxial dilator in 70% and a ureteral access sheath in 24%. Stone-free status was achieved in 91% of patients, while ureteral perforation developed in 5 patients and ureteral reimplantation was required due to stricture in the late period in one. However, no correlation was reported in that study between the complications that developed and the use of a ureteral access sheath or ureteral dilation.
In a study from 2008, Tanaka et al [26] published the results from 50 pediatric patients with a mean age of 7.9 years (1.2-13.6 years) who received RIRS for renal stones. Mean stone size was 8 mm (1-16 mm); 58% of cases remained stone-free at long-term follow-up with a single procedure, while an additional procedure was required in 36%. Success rate was correlated with stone size (P = 0.005), while the requirement for an additional procedure was correlated with both stone dimension (P = 0.002) and patient age (P = 0.04). However, the text refers to procedures being performed for stones as small as 1 mm. Kim et al [23] reported the flexible URS experience of the Philadelphia Children's Hospital, announcing the results of 170 procedures performed on 167 pediatric patients with a mean age of 62.4 mo (range, 3-218). Mean stone dimension was 6.1 mm (range, 3-24), with stones in 60% of cases being intrarenally located (28% upper ureter stone, 12% upper ureter stone). Access to the ureter could not be established in 57% of patients, for which reason a stent was inserted and left for passive dilatation. A ureteral access sheath was only used in cases with a heavy stone burden or receiving passive dilatation, although no level of use was cited. Following surgery lasting a mean 107 min (range, 72-196), 100% of patients with stones smaller than 10 mm achieved stone-free status, as did 97% of those with stones larger than 10 mm. No intra- or postoperative complications were reported in this series.
Unsal et al [27] examined the reliability of this procedure in pre-school children, evaluating 16 child patients with a mean age of 4.2 years (range, 10 mo-7 years). Mean stone dimension was 11.5 mm (range, 8-17 mm); 37.5% of patients received a double-J stent (passive dilatation), active dilatation was performed in 29.4%, and a ureteral access sheath was used in 17.6%. One hundred percent of patients with stones smaller than 10 mm and 81% of those with larger stones achieved stone-free status. Ureteral perforation developed during ureteral dilatation in one case. That study showed that RIRS can successfully be used in infants aged under 1 year, describing the youngest (10 mo) case treated using the procedure in the literature. Subsequently, Erkurt et al [28] showed with a wider case series that the procedure can be safely used in pre-school age children. In that study, a ureteral access sheath was used in each case, and a complication rate of 27% and stone-free status of 93% were reported.
In a study evaluating the efficacy of RIRS in prepubertal children, Abu Ghazaleh et al [29] reported the results from 56 children (age 6-14) with stones less than 15 mm in size. Pre-procedural passive dilatation was performed in all cases, and electrohydraulic lithotripsy was used for stone breaking. At the end of 34 mo of follow-up, 100% stone-free status was reported and no intraoperative complication developed, although urinary infection was reported in 3 patients in the postoperative period and macroscopic hematuria in one. The use of a lithotripsy technique that has been abandoned due to high complication levels, the fact that each patient was subjected to anesthesia twice because of the passive dilatation, and stones inside the renal pelvis being broken with a rigid URS represent question marks in that study, despite such high success rates.
In a multi-center comparative analysis (Table 2), Resorlu et al [30] compared the outcomes of patients with renal stones 10-30 mm in size treated with mini-perc (n = 106) or RIRS (n = 95). Stone-free status levels were 84% for RIRS and 86% for mini-perc, while complication levels were 8.4% for RIRS and 17% for mini-perc. All complications in both groups were minor (Clavien I-II), and no major complications (Clavien III-IV) were observed. However, a transfusion requirement at a level of 6% was reported in the mini-perc group. In addition, exposure to fluoroscopy, length of surgery and length of hospital stay were all lower in the RIRS group. Although RIRS appears to offer more advantages than mini-perc, when preoperative factors were assessed there was a significant difference between the two groups in terms of stone size (23.7 mm vs 14.3 mm), and this was cited as a significant limitation in the text. When the groups
Table 1: Outcomes of pediatric retrograde intrarenal surgery procedures in published series.